Section 22: Using slim, TensorFlow's Image Classification Model Library (Code Snippets)

大奥特曼打小怪兽     2022-11-21

After TensorFlow 1.0, Google released a library called slim. TF-Slim is a lightweight, high-level API for TensorFlow, introduced in 2016 with the main goal of so-called "code slimming". Like the tf.contrib.layers module introduced earlier, it wraps many common TensorFlow functions so that code becomes much more concise. It is especially well suited to building deep neural networks with complex structures, and can be used to define, train, and evaluate complex models.
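As a quick illustration of the "code slimming" idea (this example is not from the original post, just a minimal sketch): a single slim.conv2d call replaces the usual boilerplate of creating weights and biases, convolving, adding the bias, and applying an activation.

import tensorflow as tf

slim = tf.contrib.slim

inputs = tf.placeholder(tf.float32, [None, 224, 224, 3])

# Plain TensorFlow: create the variables, convolve, add the bias, apply ReLU.
with tf.variable_scope('plain_conv1'):
    weights = tf.get_variable(
        'weights', [3, 3, 3, 64],
        initializer=tf.truncated_normal_initializer(stddev=0.01))
    biases = tf.get_variable('biases', [64], initializer=tf.zeros_initializer())
    conv = tf.nn.conv2d(inputs, weights, strides=[1, 1, 1, 1], padding='SAME')
    plain_out = tf.nn.relu(tf.nn.bias_add(conv, biases))

# TF-Slim: the same 3x3, 64-filter conv layer (weights, bias, ReLU) in one line.
slim_out = slim.conv2d(inputs, 64, [3, 3], scope='slim_conv1')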

Why cover this here? Mainly because TensorFlow's models repository contains a large number of network architectures implemented with slim, together with checkpoint files trained from that code, which can be used as pre-trained models. For that reason we need to know how to use the slim library.

I. Obtaining the slim code from the models repository

To use the code in models, first verify that the local TensorFlow installation includes the slim module, then download the models repository from GitHub:

1. Verifying the slim library

Before using slim, test whether the local tf.contrib.slim module works by running the following at the command line:

python -c "import tensorflow.contrib.slim as slim; eval = slim.evaluation.evaluate_once"

If no error is raised, TF-Slim is working.

2. Downloading the models repository

To use TF-Slim for image classification, you also have to install the TF-Slim image models library, which is not part of the core TF library. To do this, check out the tensorflow/models repository as follows:

cd $HOME/workspace
git clone https://github.com/tensorflow/models/

This will put the TF-Slim image models library in $HOME/workspace/models/research/slim. (It will also create a directory called models/inception, which contains an older version of slim; you can safely ignore this.)

To verify that this has worked, execute the following commands; it should run without raising any errors.

cd $HOME/workspace/models/research/slim
python -c "from nets import cifarnet; mynet = cifarnet.cifarnet"

I am using Windows, so I simply downloaded the repository from https://github.com/tensorflow/models/:

 

II. The slim directory structure in models

slim is located under \models-master\research\slim and contains five folders:

  • datasets: code for downloading and handling the supported datasets.
  • deployment: deployment code. It implements distributed training across machines by creating clones, and supports synchronous or asynchronous computation on multiple CPUs and GPUs.
  • nets: the definitions of the various network models.
  • preprocessing: image preprocessing functions for the various networks.
  • scripts: example shell scripts for running the network models; they can only be used on systems with a shell.

This section focuses on the datasets, nets, and preprocessing folders.

1. The datasets module

The datasets folder contains the code for the commonly used image training datasets; the main supported datasets are cifar10, flowers, mnist, and imagenet.

Each code file is named after its dataset and can be used to download or access that dataset's data. Taking imagenet as an example, the following function fetches the ImageNet label names from the web.

    imagenet_map = imagenet.create_readable_names_for_imagenet_labels()

The code above returns the human-readable names of ImageNet's 1000 classes, indexed to match the label IDs.
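For instance, a minimal lookup sketch (run from the models/research/slim directory so the datasets package is importable; the index 208 is only an illustrative value):

# Run from models/research/slim so that the datasets package is importable.
from datasets import imagenet

# Maps label id -> human-readable class name for the 1001-class labelling
# used by the slim models (id 0 is the "background" class).
imagenet_map = imagenet.create_readable_names_for_imagenet_labels()

print(len(imagenet_map))   # 1001 entries
print(imagenet_map[208])   # 208 is only an illustrative label id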

2. The nets module

This folder contains the various network model definitions:

Each model file is named after its network, and the code in each file follows roughly the same structure. Take inception_resnet_v2 as an example:

# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Contains the definition of the Inception Resnet V2 architecture.

As described in http://arxiv.org/abs/1602.07261.

  Inception-v4, Inception-ResNet and the Impact of Residual Connections
    on Learning
  Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, Alex Alemi
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function


import tensorflow as tf

slim = tf.contrib.slim


def block35(net, scale=1.0, activation_fn=tf.nn.relu, scope=None, reuse=None):
  """Builds the 35x35 resnet block."""
  with tf.variable_scope(scope, 'Block35', [net], reuse=reuse):
    with tf.variable_scope('Branch_0'):
      tower_conv = slim.conv2d(net, 32, 1, scope='Conv2d_1x1')
    with tf.variable_scope('Branch_1'):
      tower_conv1_0 = slim.conv2d(net, 32, 1, scope='Conv2d_0a_1x1')
      tower_conv1_1 = slim.conv2d(tower_conv1_0, 32, 3, scope='Conv2d_0b_3x3')
    with tf.variable_scope('Branch_2'):
      tower_conv2_0 = slim.conv2d(net, 32, 1, scope='Conv2d_0a_1x1')
      tower_conv2_1 = slim.conv2d(tower_conv2_0, 48, 3, scope='Conv2d_0b_3x3')
      tower_conv2_2 = slim.conv2d(tower_conv2_1, 64, 3, scope='Conv2d_0c_3x3')
    mixed = tf.concat(axis=3, values=[tower_conv, tower_conv1_1, tower_conv2_2])
    up = slim.conv2d(mixed, net.get_shape()[3], 1, normalizer_fn=None,
                     activation_fn=None, scope='Conv2d_1x1')
    scaled_up = up * scale
    if activation_fn == tf.nn.relu6:
      # Use clip_by_value to simulate bandpass activation.
      scaled_up = tf.clip_by_value(scaled_up, -6.0, 6.0)

    net += scaled_up
    if activation_fn:
      net = activation_fn(net)
  return net


def block17(net, scale=1.0, activation_fn=tf.nn.relu, scope=None, reuse=None):
  """Builds the 17x17 resnet block."""
  with tf.variable_scope(scope, 'Block17', [net], reuse=reuse):
    with tf.variable_scope('Branch_0'):
      tower_conv = slim.conv2d(net, 192, 1, scope='Conv2d_1x1')
    with tf.variable_scope('Branch_1'):
      tower_conv1_0 = slim.conv2d(net, 128, 1, scope='Conv2d_0a_1x1')
      tower_conv1_1 = slim.conv2d(tower_conv1_0, 160, [1, 7],
                                  scope='Conv2d_0b_1x7')
      tower_conv1_2 = slim.conv2d(tower_conv1_1, 192, [7, 1],
                                  scope='Conv2d_0c_7x1')
    mixed = tf.concat(axis=3, values=[tower_conv, tower_conv1_2])
    up = slim.conv2d(mixed, net.get_shape()[3], 1, normalizer_fn=None,
                     activation_fn=None, scope='Conv2d_1x1')

    scaled_up = up * scale
    if activation_fn == tf.nn.relu6:
      # Use clip_by_value to simulate bandpass activation.
      scaled_up = tf.clip_by_value(scaled_up, -6.0, 6.0)

    net += scaled_up
    if activation_fn:
      net = activation_fn(net)
  return net


def block8(net, scale=1.0, activation_fn=tf.nn.relu, scope=None, reuse=None):
  """Builds the 8x8 resnet block."""
  with tf.variable_scope(scope, 'Block8', [net], reuse=reuse):
    with tf.variable_scope('Branch_0'):
      tower_conv = slim.conv2d(net, 192, 1, scope='Conv2d_1x1')
    with tf.variable_scope('Branch_1'):
      tower_conv1_0 = slim.conv2d(net, 192, 1, scope='Conv2d_0a_1x1')
      tower_conv1_1 = slim.conv2d(tower_conv1_0, 224, [1, 3],
                                  scope='Conv2d_0b_1x3')
      tower_conv1_2 = slim.conv2d(tower_conv1_1, 256, [3, 1],
                                  scope='Conv2d_0c_3x1')
    mixed = tf.concat(axis=3, values=[tower_conv, tower_conv1_2])
    up = slim.conv2d(mixed, net.get_shape()[3], 1, normalizer_fn=None,
                     activation_fn=None, scope='Conv2d_1x1')

    scaled_up = up * scale
    if activation_fn == tf.nn.relu6:
      # Use clip_by_value to simulate bandpass activation.
      scaled_up = tf.clip_by_value(scaled_up, -6.0, 6.0)

    net += scaled_up
    if activation_fn:
      net = activation_fn(net)
  return net


def inception_resnet_v2_base(inputs,
                             final_endpoint='Conv2d_7b_1x1',
                             output_stride=16,
                             align_feature_maps=False,
                             scope=None,
                             activation_fn=tf.nn.relu):
  """Inception model from  http://arxiv.org/abs/1602.07261.

  Constructs an Inception Resnet v2 network from inputs to the given final
  endpoint. This method can construct the network up to the final inception
  block Conv2d_7b_1x1.

  Args:
    inputs: a tensor of size [batch_size, height, width, channels].
    final_endpoint: specifies the endpoint to construct the network up to. It
      can be one of ['Conv2d_1a_3x3', 'Conv2d_2a_3x3', 'Conv2d_2b_3x3',
      'MaxPool_3a_3x3', 'Conv2d_3b_1x1', 'Conv2d_4a_3x3', 'MaxPool_5a_3x3',
      'Mixed_5b', 'Mixed_6a', 'PreAuxLogits', 'Mixed_7a', 'Conv2d_7b_1x1']
    output_stride: A scalar that specifies the requested ratio of input to
      output spatial resolution. Only supports 8 and 16.
    align_feature_maps: When true, changes all the VALID paddings in the network
      to SAME padding so that the feature maps are aligned.
    scope: Optional variable_scope.
    activation_fn: Activation function for block scopes.

  Returns:
    tensor_out: output tensor corresponding to the final_endpoint.
    end_points: a set of activations for external use, for example summaries or
                losses.

  Raises:
    ValueError: if final_endpoint is not set to one of the predefined values,
      or if the output_stride is not 8 or 16, or if the output_stride is 8 and
      we request an end point after 'PreAuxLogits'.
  """
  if output_stride != 8 and output_stride != 16:
    raise ValueError('output_stride must be 8 or 16.')

  padding = 'SAME' if align_feature_maps else 'VALID'

  end_points = {}

  def add_and_check_final(name, net):
    end_points[name] = net
    return name == final_endpoint

  with tf.variable_scope(scope, 'InceptionResnetV2', [inputs]):
    with slim.arg_scope([slim.conv2d, slim.max_pool2d, slim.avg_pool2d],
                        stride=1, padding='SAME'):
      # 149 x 149 x 32
      net = slim.conv2d(inputs, 32, 3, stride=2, padding=padding,
                        scope='Conv2d_1a_3x3')
      if add_and_check_final('Conv2d_1a_3x3', net): return net, end_points

      # 147 x 147 x 32
      net = slim.conv2d(net, 32, 3, padding=padding,
                        scope='Conv2d_2a_3x3')
      if add_and_check_final('Conv2d_2a_3x3', net): return net, end_points
      # 147 x 147 x 64
      net = slim.conv2d(net, 64, 3, scope='Conv2d_2b_3x3')
      if add_and_check_final('Conv2d_2b_3x3', net): return net, end_points
      # 73 x 73 x 64
      net = slim.max_pool2d(net, 3, stride=2, padding=padding,
                            scope='MaxPool_3a_3x3')
      if add_and_check_final('MaxPool_3a_3x3', net): return net, end_points
      # 73 x 73 x 80
      net = slim.conv2d(net, 80, 1, padding=padding,
                        scope='Conv2d_3b_1x1')
      if add_and_check_final('Conv2d_3b_1x1', net): return net, end_points
      # 71 x 71 x 192
      net = slim.conv2d(net, 192, 3, padding=padding,
                        scope='Conv2d_4a_3x3')
      if add_and_check_final('Conv2d_4a_3x3', net): return net, end_points
      # 35 x 35 x 192
      net = slim.max_pool2d(net, 3, stride=2, padding=padding,
                            scope='MaxPool_5a_3x3')
      if add_and_check_final('MaxPool_5a_3x3', net): return net, end_points

      # 35 x 35 x 320
      with tf.variable_scope('Mixed_5b'):
        with tf.variable_scope('Branch_0'):
          tower_conv = slim.conv2d(net, 96, 1, scope='Conv2d_1x1')
        with tf.variable_scope('Branch_1'):
          tower_conv1_0 = slim.conv2d(net, 48, 1, scope='Conv2d_0a_1x1')
          tower_conv1_1 = slim.conv2d(tower_conv1_0, 64, 5,
                                      scope='Conv2d_0b_5x5')
        with tf.variable_scope('Branch_2'):
          tower_conv2_0 = slim.conv2d(net, 64, 1, scope='Conv2d_0a_1x1')
          tower_conv2_1 = slim.conv2d(tower_conv2_0, 96, 3,
                                      scope='Conv2d_0b_3x3')
          tower_conv2_2 = slim.conv2d(tower_conv2_1, 96, 3,
                                      scope='Conv2d_0c_3x3')
        with tf.variable_scope('Branch_3'):
          tower_pool = slim.avg_pool2d(net, 3, stride=1, padding='SAME',
                                       scope='AvgPool_0a_3x3')
          tower_pool_1 = slim.conv2d(tower_pool, 64, 1,
                                     scope='Conv2d_0b_1x1')
        net = tf.concat(
            [tower_conv, tower_conv1_1, tower_conv2_2, tower_pool_1], 3)

      if add_and_check_final('Mixed_5b', net): return net, end_points
      # TODO(alemi): Register intermediate endpoints
      net = slim.repeat(net, 10, block35, scale=0.17,
                        activation_fn=activation_fn)

      # 17 x 17 x 1088 if output_stride == 8,
      # 33 x 33 x 1088 if output_stride == 16
      use_atrous = output_stride == 8

      with tf.variable_scope('Mixed_6a'):
        with tf.variable_scope('Branch_0'):
          tower_conv = slim.conv2d(net, 384, 3, stride=1 if use_atrous else 2,
                                   padding=padding,
                                   scope='Conv2d_1a_3x3')
        with tf.variable_scope('Branch_1'):
          tower_conv1_0 = slim.conv2d(net, 256, 1, scope='Conv2d_0a_1x1')
          tower_conv1_1 = slim.conv2d(tower_conv1_0, 256, 3,
                                      scope='Conv2d_0b_3x3')
          tower_conv1_2 = slim.conv2d(tower_conv1_1, 384, 3,
                                      stride=1 if use_atrous else 2,
                                      padding=padding,
                                      scope='Conv2d_1a_3x3')
        with tf.variable_scope('Branch_2'):
          tower_pool = slim.max_pool2d(net, 3, stride=1 if use_atrous else 2,
                                       padding=padding,
                                       scope='MaxPool_1a_3x3')
        net = tf.concat([tower_conv, tower_conv1_2, tower_pool], 3)

      if add_and_check_final('Mixed_6a', net): return net, end_points

      # TODO(alemi): register intermediate endpoints
      with slim.arg_scope([slim.conv2d], rate=2 if use_atrous else 1):
        net = slim.repeat(net, 20, block17, scale=0.10,
                          activation_fn=activation_fn)
      if add_and_check_final('PreAuxLogits', net): return net, end_points

      if output_stride == 8:
        # TODO(gpapan): Properly support output_stride for the rest of the net.
        raise ValueError('output_stride==8 is only supported up to the '
                         'PreAuxlogits end_point for now.')

      # 8 x 8 x 2080
      with tf.variable_scope('Mixed_7a'):
        with tf.variable_scope('Branch_0'):
          tower_conv = slim.conv2d(net, 256, 1, scope='Conv2d_0a_1x1')
          tower_conv_1 = slim.conv2d(tower_conv, 384, 3, stride=2,
                                     padding=padding,
                                     scope='Conv2d_1a_3x3')
        with tf.variable_scope('Branch_1'):
          tower_conv1 = slim.conv2d(net, 256, 1, scope='Conv2d_0a_1x1')
          tower_conv1_1 = slim.conv2d(tower_conv1, 288, 3, stride=2,
                                      padding=padding,
                                      scope='Conv2d_1a_3x3')
        with tf.variable_scope('Branch_2'):
          tower_conv2 = slim.conv2d(net, 256, 1, scope='Conv2d_0a_1x1')
          tower_conv2_1 = slim.conv2d(tower_conv2, 288, 3,
                                      scope='Conv2d_0b_3x3')
          tower_conv2_2 = slim.conv2d(tower_conv2_1, 320, 3, stride=2,
                                      padding=padding,
                                      scope='Conv2d_1a_3x3')
        with tf.variable_scope('Branch_3'):
          tower_pool = slim.max_pool2d(net, 3, stride=2,
                                       padding=padding,
                                       scope='MaxPool_1a_3x3')
        net = tf.concat(
            [tower_conv_1, tower_conv1_1, tower_conv2_2, tower_pool], 3)

      if add_and_check_final('Mixed_7a', net): return net, end_points

      # TODO(alemi): register intermediate endpoints
      net = slim.repeat(net, 9, block8, scale=0.20, activation_fn=activation_fn)
      net = block8(net, activation_fn=None)

      # 8 x 8 x 1536
      net = slim.conv2d(net, 1536, 1, scope='Conv2d_7b_1x1')
      if add_and_check_final('Conv2d_7b_1x1', net): return net, end_points

    raise ValueError('final_endpoint (%s) not recognized', final_endpoint)


def inception_resnet_v2(inputs, num_classes=1001, is_training=True,
                        dropout_keep_prob=0.8,
                        reuse=None,
                        scope='InceptionResnetV2',
                        create_aux_logits=True,
                        activation_fn=tf.nn.relu):
  """Creates the Inception Resnet V2 model.

  Args:
    inputs: a 4-D tensor of size [batch_size, height, width, 3].
      Dimension batch_size may be undefined. If create_aux_logits is false,
      also height and width may be undefined.
    num_classes: number of predicted classes. If 0 or None, the logits layer
      is omitted and the input features to the logits layer (before  dropout)
      are returned instead.
    is_training: whether is training or not.
    dropout_keep_prob: float, the fraction to keep before final layer.
    reuse: whether or not the network and its variables should be reused. To be
      able to reuse 'scope' must be given.
    scope: Optional variable_scope.
    create_aux_logits: Whether to include the auxilliary logits.
    activation_fn: Activation function for conv2d.

  Returns:
    net: the output of the logits layer (if num_classes is a non-zero integer),
      or the non-dropped-out input to the logits layer (if num_classes is 0 or
      None).
    end_points: the set of end_points from the inception model.
  """
  end_points = {}

  with tf.variable_scope(scope, 'InceptionResnetV2', [inputs],
                         reuse=reuse) as scope:
    with slim.arg_scope([slim.batch_norm, slim.dropout],
                        is_training=is_training):

      net, end_points = inception_resnet_v2_base(inputs, scope=scope,
                                                 activation_fn=activation_fn)

      if create_aux_logits and num_classes:
        with tf.variable_scope('AuxLogits'):
          aux = end_points['PreAuxLogits']
          aux = slim.avg_pool2d(aux, 5, stride=3, padding='VALID',
                                scope='Conv2d_1a_3x3')
          aux = slim.conv2d(aux, 128, 1, scope='Conv2d_1b_1x1')
          aux = slim.conv2d(aux, 768, aux.get_shape()[1:3],
                            padding='VALID', scope='Conv2d_2a_5x5')
          aux = slim.flatten(aux)
          aux = slim.fully_connected(aux, num_classes, activation_fn=None,
                                     scope='Logits')
          end_points['AuxLogits'] = aux

      with tf.variable_scope('Logits'):
        # TODO(sguada,arnoegw): Consider adding a parameter global_pool which
        # can be set to False to disable pooling here (as in resnet_*()).
        kernel_size = net.get_shape()[1:3]
        if kernel_size.is_fully_defined():
          net = slim.avg_pool2d(net, kernel_size, padding='VALID',
                                scope='AvgPool_1a_8x8')
        else:
          net = tf.reduce_mean(net, [1, 2], keep_dims=True, name='global_pool')
        end_points['global_pool'] = net
        if not num_classes:
          return net, end_points
        net = slim.flatten(net)
        net = slim.dropout(net, dropout_keep_prob, is_training=is_training,
                           scope='Dropout')
        end_points['PreLogitsFlatten'] = net
        logits = slim.fully_connected(net, num_classes, activation_fn=None,
                                      scope='Logits')
        end_points['Logits'] = logits
        end_points['Predictions'] = tf.nn.softmax(logits, name='Predictions')

    return logits, end_points
inception_resnet_v2.default_image_size = 299


def inception_resnet_v2_arg_scope(weight_decay=0.00004,
                                  batch_norm_decay=0.9997,
                                  batch_norm_epsilon=0.001,
                                  activation_fn=tf.nn.relu):
  """Returns the scope with the default parameters for inception_resnet_v2.

  Args:
    weight_decay: the weight decay for weights variables.
    batch_norm_decay: decay for the moving average of batch_norm momentums.
    batch_norm_epsilon: small float added to variance to avoid dividing by zero.
    activation_fn: Activation function for conv2d.

  Returns:
    a arg_scope with the parameters needed for inception_resnet_v2.
  """
  # Set weight_decay for weights in conv2d and fully_connected layers.
  with slim.arg_scope([slim.conv2d, slim.fully_connected],
                      weights_regularizer=slim.l2_regularizer(weight_decay),
                      biases_regularizer=slim.l2_regularizer(weight_decay)):

    batch_norm_params = {
        'decay': batch_norm_decay,
        'epsilon': batch_norm_epsilon,
        'fused': None,  # Use fused batch norm if possible.
    }
    # Set activation_fn and parameters for batch_norm.
    with slim.arg_scope([slim.conv2d], activation_fn=activation_fn,
                        normalizer_fn=slim.batch_norm,
                        normalizer_params=batch_norm_params) as scope:
      return scope

The main interfaces exposed by this file are:

  • inception_resnet_v2.default_image_size: the default input image size (299).
  • inception_resnet_v2_base: builds the backbone of inception_resnet_v2 and returns the raw network features. By default it is called from inception_resnet_v2 and its internals are normally left unchanged; when a custom output head is needed, the backbone is fed into your own function instead of inception_resnet_v2.
  • inception_resnet_v2: builds the full inception_resnet_v2 network. It returns two values: the prediction logits and end_points, a dictionary of auxiliary activations (including AuxLogits) that can be used for summaries, losses, or analysis.
  • inception_resnet_v2_arg_scope: returns an arg_scope holding the model's default hyperparameters (weight decay, batch-norm settings). When modifying or reusing the model, build your layers inside this scope so they share the same defaults as the model (see the sketch below).
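Putting these interfaces together, a minimal inference-graph sketch might look as follows (TensorFlow 1.x, run from models/research/slim; the placeholder name is only an illustrative choice):

import tensorflow as tf
from nets import inception_resnet_v2

slim = tf.contrib.slim

# 299 x 299 is the default input size declared in the model file.
image_size = inception_resnet_v2.inception_resnet_v2.default_image_size
inputs = tf.placeholder(tf.float32, [None, image_size, image_size, 3],
                        name='inputs')

# Build the graph inside the model's arg_scope so conv/batch-norm defaults
# match the ones the released checkpoints were trained with.
with slim.arg_scope(inception_resnet_v2.inception_resnet_v2_arg_scope()):
    logits, end_points = inception_resnet_v2.inception_resnet_v2(
        inputs, num_classes=1001, is_training=False)

probabilities = end_points['Predictions']  # softmax over the 1001 classes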

3. The preprocessing module

This module contains several image preprocessing files, again named after the models: slim groups the preprocessing functions shared by a family of models into one file named after that family, and the files all follow a similar structure. For example, the inception preprocessing is invoked as:

inception_preprocessing.preprocess_image

This function resizes the input image to the model's input size and normalizes its pixel values.
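A minimal sketch of preparing one image for an Inception-family model (TensorFlow 1.x, run from models/research/slim; the file path is a placeholder):

import tensorflow as tf
from preprocessing import inception_preprocessing

image_size = 299  # inception_resnet_v2.default_image_size

raw = tf.read_file('/path/to/image.jpg')      # placeholder path
image = tf.image.decode_jpeg(raw, channels=3)

# Crops/resizes to image_size x image_size and scales pixel values to [-1, 1].
processed = inception_preprocessing.preprocess_image(
    image, image_size, image_size, is_training=False)

batch = tf.expand_dims(processed, 0)  # add a batch dimension for inference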

 

III. Dataset handling in slim

1. Preparing a dataset

As part of this library, we've included scripts to download several popular image datasets (listed below) and convert them to slim format.

2. Downloading a dataset and converting it to the TFRecord format

TFRecord is TensorFlow's recommended dataset format and is tightly integrated with the framework. TensorFlow provides a set of APIs for reading TFRecord files. The format exists mainly to support training on very large sample sets, where data must be read from disk while training is running: the raw files are converted to TFRecord once, and at run time they are read by multiple threads, which takes the I/O load off the main training thread and makes training more efficient. For details on the TFRecord format, see the article

Section 12: Several ways of reading data in TensorFlow and using queues

For each dataset, we'll need to download the raw data and convert it to TensorFlow's native TFRecord format. Each TFRecord contains a TF-Example protocol buffer. Below we demonstrate how to do this for the Flowers dataset.

$ DATA_DIR=/tmp/data/flowers
$ python download_and_convert_data.py \
    --dataset_name=flowers \
    --dataset_dir="$DATA_DIR"

There are two key options here: the dataset name (flowers in this example) and the target directory (here the data is stored under /tmp/data/flowers).

When the script finishes you will find several TFRecord files created:

These represent the training and validation data, sharded over 5 files each. You will also find the $DATA_DIR/labels.txt file which contains the mapping from integer labels to class names.
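Once the TFRecords exist, they can be consumed with slim's dataset utilities. A minimal read sketch (TensorFlow 1.x, run from models/research/slim; DATA_DIR is assumed to match the conversion step above):

import tensorflow as tf
from datasets import flowers

slim = tf.contrib.slim
DATA_DIR = '/tmp/data/flowers'  # assumed to match the conversion step above

# Describe the 'train' split of the converted flowers TFRecords.
dataset = flowers.get_split('train', DATA_DIR)

# Read and decode examples with background reader threads.
provider = slim.dataset_data_provider.DatasetDataProvider(
    dataset, num_readers=4, shuffle=True)
image, label = provider.get(['image', 'label'])  # decoded image and class id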

You can use the same script to create the mnist and cifar10 datasets. However, for ImageNet, you have to follow the instructions here. Note that you first have to sign up for an account at image-net.org. Also, the download can take several hours, and could use up to 500GB.

Let me walk through the code that gets executed here. Open the download_and_convert_data.py file; its contents are as follows:

# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicab

Vue.js基本概念:首先通过将vue.js作为一个js库来使用,来学习vue的一些基本概念,我们下载了vue.js后,需要在页面上通过script标签引入vue.js。开发中可以使用开发版本vue.js。产品上线要换成vue.min.js。<scripttype="text/javascript"src="..... 查看详情