『TensorFlow』Building a Gundam Bare-Handed: Unit-01 — Adding the Training Module and Integrating It into a Working Classification Network

author    2022-09-02


Abstract:

This post integrates the modules from the previous two parts and adds the forward-pass and back-propagation training code, turning them into a complete classification network with training, validation, and testing.

File structure:

(figure: project layout — image_info.py, AlexNet.py, AlexNet_train.py)

Code integration:

image_info.py — image loading

import glob
import os.path
import random
import numpy as np
import tensorflow as tf

def creat_image_lists(validation_percentage, testing_percentage, INPUT_DATA):
    '''
    Collect image file names (without paths) into a dictionary.
    :param validation_percentage: percentage of data used for validation
    :param testing_percentage:    percentage of data used for testing
    :param INPUT_DATA:            root data directory (the parent of the class directories)
    :return:                      dict {label: {'dir': str, 'training': [], 'validation': [], 'testing': []}, ...}
    '''
    result = {}
    sub_dirs = [x[0] for x in os.walk(INPUT_DATA)]
    # The first entry returned by os.walk() is the root directory itself, so skip it
    is_root_dir = True            #<-----
    # Iterate over the label sub-directories
    for sub_dir in sub_dirs:
        if is_root_dir:           #<-----
            is_root_dir = False
            continue

        extensions = ['jpg', 'jpeg', 'JPG', 'JPEG']
        file_list  = []
        dir_name   = os.path.basename(sub_dir)
        # Try every possible file extension
        for extension in extensions:
            # file_glob = os.path.join(INPUT_DATA, dir_name, '*.' + extension)
            file_glob = os.path.join(sub_dir, '*.' + extension)
            file_list.extend(glob.glob(file_glob))      # collect the matching paths / file names
            # print(file_glob, '\n', glob.glob(file_glob))
        if not file_list: continue

        label_name = dir_name.lower()                   # the label is simply the lower-cased folder name

        # Initialise the per-split file-name lists
        training_images   = []
        testing_images    = []
        validation_images = []

        # Strip the path, keep only the file name
        for file_name in file_list:
            base_name = os.path.basename(file_name)

            # Randomly assign images to the validation and testing splits
            chance = np.random.randint(100)
            if chance < validation_percentage:
                validation_images.append(base_name)
            elif chance < (validation_percentage + testing_percentage):
                testing_images.append(base_name)
            else:
                training_images.append(base_name)
        # Build the dictionary entry for this label
        result[label_name] = {
            'dir':        dir_name,
            'training':   training_images,
            'testing':    testing_images,
            'validation': validation_images
        }
    return result

def get_image_path(image_lists, image_dir, label_name, index, category):
    '''
    Build the full path of a single image.
    :param image_lists: full image dictionary
    :param image_dir:   outer directory (containing the label folders)
    :param label_name:  label name
    :param index:       random index
    :param category:    'training', 'validation' or 'testing'
    :return:            path of the image
    '''
    label_lists   = image_lists[label_name]
    category_list = label_lists[category]       # image list of the requested category
    mod_index     = index % len(category_list)  # wrap the random index into the list
    base_name     = category_list[mod_index]    # look up the file name via the index
    return os.path.join(image_dir, label_lists['dir'], base_name)


def get_random_cached_inputs(sess, n_class, image_lists, batch, category, INPUT_DATA, shape=[299, 299]):
    '''
    Randomly pick one batch of images as training data and resize them to the input-layer size.
    Calls get_image_path.
    :param sess:        session handle
    :param n_class:     number of classes
    :param image_lists: image dictionary
    :param batch:       batch size
    :param category:    'training' or 'validation'
    :param INPUT_DATA:  root image directory
    :param shape:       input-layer size
    :return:            image & label arrays
    '''
    images = []
    labels = []
    for i in range(batch):
        label_index = random.randrange(n_class)              # random label index
        label_name  = list(image_lists.keys())[label_index]  # label name
        image_index = random.randrange(65536)                # random index into the label's image list

        image_path = get_image_path(image_lists, INPUT_DATA, label_name, image_index, category)
        image_raw  = tf.gfile.FastGFile(image_path, 'rb').read()
        image_data = sess.run(tf.image.resize_images(tf.image.decode_jpeg(image_raw),
                                                     shape, method=random.randint(0, 3)))

        ground_truth = np.zeros(n_class, dtype=np.float32)
        ground_truth[label_index] = 1.0                      # one-hot ground truth, e.g. [0, 0, 1, 0, ...]
        # Collect the image tensor and its label
        images.append(image_data)
        labels.append(ground_truth)
    return images, labels


def get_test_images(sess, image_lists, n_class, INPUT_DATA, shape=[299, 299]):
    '''
    Load every test image.
    Calls get_image_path.
    :param sess:         session handle
    :param image_lists:  image dictionary
    :param n_class:      number of classes
    :param INPUT_DATA:   root image directory
    :param shape:        input-layer size
    :return:             image & label arrays
    '''
    test_images   = []
    ground_truths = []
    label_name_list = list(image_lists.keys())
    for label_index, label_name in enumerate(label_name_list):
        category = 'testing'
        for image_index, unused_base_name in enumerate(image_lists[label_name][category]):  # index, file name

            image_path = get_image_path(image_lists, INPUT_DATA, label_name, image_index, category)
            image_raw = tf.gfile.FastGFile(image_path, 'rb').read()
            image_data = sess.run(tf.image.resize_images(tf.image.decode_jpeg(image_raw),
                                                         shape, method=random.randint(0, 3)))

            ground_truth = np.zeros(n_class, dtype=np.float32)
            ground_truth[label_index] = 1.0
            test_images.append(image_data)
            ground_truths.append(ground_truth)
    return test_images, ground_truths


if __name__ == '__main__':

    INPUT_DATA = './flower_photos'  # data directory
    VALIDATION_PERCENTAGE = 10      # percentage of validation data
    TEST_PERCENTAGE = 10            # percentage of test data
    BATCH_SIZE = 128                # batch size
    INPUT_SIZE = [224, 224]         # standard input size

    image_dict = creat_image_lists(VALIDATION_PERCENTAGE, TEST_PERCENTAGE, INPUT_DATA)
    n_class = len(image_dict.keys())
    sess = tf.Session()
    category = 'training'
    image_list, label_list = get_random_cached_inputs(
        sess, n_class, image_dict, BATCH_SIZE, category, INPUT_DATA, INPUT_SIZE)
    # exit()

 
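For reference, a minimal sketch of what creat_image_lists returns, assuming a hypothetical ./flower_photos layout with two class folders (all folder and file names here are purely illustrative):

# Hypothetical layout: ./flower_photos/roses/*.jpg and ./flower_photos/tulips/*.jpg
# creat_image_lists(10, 10, './flower_photos') would then return roughly:
image_dict = {
    'roses': {
        'dir':        'roses',                      # original folder name
        'training':   ['r_001.jpg', 'r_002.jpg'],   # ~80% of the files
        'validation': ['r_003.jpg'],                # ~10%
        'testing':    ['r_004.jpg'],                # ~10%
    },
    'tulips': {
        'dir':        'tulips',
        'training':   ['t_001.jpg'],
        'validation': [],
        'testing':    ['t_002.jpg'],
    },
}
# get_image_path wraps the random index into the list with a modulo, e.g.
# get_image_path(image_dict, './flower_photos', 'roses', 7, 'training')
# -> './flower_photos/roses/r_002.jpg'   (7 % 2 == 1)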

AlexNet.py — network backbone

Optimizations:

  1. Dropout is applied in the fully connected layers
  2. L2 regularization is applied to the fully connected weights

Because the L2 regularization terms are collected into the 'losses' collection, the main script has to take them into account when computing the total loss.
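As a standalone sketch of that caveat (the tensors here are made up, only the collection mechanism matters): the regularization term of each weight is pushed into 'losses', and the training script must add its cross entropy to the same collection and sum everything, otherwise the L2 terms are silently ignored.

import tensorflow as tf

# Each FC weight contributes its L2 term to the 'losses' collection...
regularizer = tf.contrib.layers.l2_regularizer(0.0001)
w = tf.Variable(tf.truncated_normal([4096, 10], stddev=0.01))
tf.add_to_collection('losses', regularizer(w))

# ...and the training script adds the cross entropy to the same collection
# and sums the whole collection to get the total loss.
cross_entropy_mean = tf.constant(0.7)          # stand-in for the real cross entropy
tf.add_to_collection('losses', cross_entropy_mean)
total_loss = tf.add_n(tf.get_collection('losses'))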

Update:

Training and validation feed one batch at a time, while testing feeds the entire test set at once, so the fc1 layer code was changed so that the network no longer requires a fixed batch size.
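The idea behind that change, as a minimal sketch (assuming the placeholder's batch dimension is left as None; 6×6×256 is what pool5 works out to for a 224×224 input in this network): compute the flattened size from the static non-batch dimensions and let -1 absorb whatever batch size is fed in.

import tensorflow as tf

# Sketch: batch dimension left open, spatial dimensions fixed
pool5 = tf.placeholder(tf.float32, [None, 6, 6, 256])

# dim is computed from the static (non-batch) dimensions only
pool_shape = pool5.get_shape().as_list()        # [None, 6, 6, 256]
dim = pool_shape[1] * pool_shape[2] * pool_shape[3]
flat = tf.reshape(pool5, [-1, dim])             # works for a 128-image batch or the full test set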

import tensorflow as tf

'''
AlexNet network body. It takes a batch of images and returns the logits,
applies L2 regularization to the fully connected weights and collects the
regularization terms into the 'losses' collection.
'''

def print_activations(t):
    """
    Print the shape of a convolution / pooling layer.
    :param t:  tensor
    :return:
    """
    print(t.op.name, ' ', t.get_shape().as_list())

def inference(images, regularizer, N_CLASS=10, train=True):
    '''
    AlexNet network body.
    :param images:      image batch, 4-D array
    :param regularizer: regularization function, e.g. regularizer = tf.contrib.layers.l2_regularizer(0.0001)
    :param N_CLASS:     number of classes
    :param train:       True or False, whether we are training (enables dropout in the FC layers)
    :return:            logits; parameters, list of all trainable parameters (legacy, rarely needed)
    '''

    parameters = []

    # conv1
    with tf.name_scope('conv1') as scope:
        kernel = tf.Variable(tf.truncated_normal([11, 11, 3, 64], dtype=tf.float32,
                                                 stddev=1e-1), name='weights')
        conv = tf.nn.conv2d(images, kernel, [1, 4, 4, 1], padding='SAME')
        biases = tf.Variable(tf.constant(0.0, shape=[64], dtype=tf.float32),
                             trainable=True, name='biases')
        bias = tf.nn.bias_add(conv, biases)
        conv1 = tf.nn.relu(bias, name=scope)
        print_activations(conv1)
        parameters += [kernel, biases]

    # pool1
    pool1 = tf.nn.max_pool(conv1,
                           ksize=[1, 3, 3, 1],
                           strides=[1, 2, 2, 1],
                           padding='VALID',
                           name='pool1')
    print_activations(pool1)

    # conv2
    with tf.name_scope('conv2') as scope:
        kernel = tf.Variable(tf.truncated_normal([5, 5, 64, 192], dtype=tf.float32,
                                                 stddev=1e-1), name='weights')
        conv = tf.nn.conv2d(pool1, kernel, [1, 1, 1, 1], padding='SAME')
        biases = tf.Variable(tf.constant(0.0, shape=[192], dtype=tf.float32),
                             trainable=True, name='biases')
        bias = tf.nn.bias_add(conv, biases)
        conv2 = tf.nn.relu(bias, name=scope)
        parameters += [kernel, biases]
    print_activations(conv2)

    # pool2
    pool2 = tf.nn.max_pool(conv2,
                           ksize=[1, 3, 3, 1],
                           strides=[1, 2, 2, 1],
                           padding='VALID',
                           name='pool2')
    print_activations(pool2)

    # conv3
    with tf.name_scope('conv3') as scope:
        kernel = tf.Variable(tf.truncated_normal([3, 3, 192, 384],
                                                 dtype=tf.float32,
                                                 stddev=1e-1), name='weights')
        conv = tf.nn.conv2d(pool2, kernel, [1, 1, 1, 1], padding='SAME')
        biases = tf.Variable(tf.constant(0.0, shape=[384], dtype=tf.float32),
                             trainable=True, name='biases')
        bias = tf.nn.bias_add(conv, biases)
        conv3 = tf.nn.relu(bias, name=scope)
        parameters += [kernel, biases]
    print_activations(conv3)

    # conv4
    with tf.name_scope('conv4') as scope:
        kernel = tf.Variable(tf.truncated_normal([3, 3, 384, 256],
                                                 dtype=tf.float32,
                                                 stddev=1e-1), name='weights')
        conv = tf.nn.conv2d(conv3, kernel, [1, 1, 1, 1], padding='SAME')
        biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32),
                             trainable=True, name='biases')
        bias = tf.nn.bias_add(conv, biases)
        conv4 = tf.nn.relu(bias, name=scope)
        parameters += [kernel, biases]
    print_activations(conv4)

    # conv5
    with tf.name_scope('conv5') as scope:
        kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 256],
                                                 dtype=tf.float32,
                                                 stddev=1e-1), name='weights')
        conv = tf.nn.conv2d(conv4, kernel, [1, 1, 1, 1], padding='SAME')
        biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32),
                             trainable=True, name='biases')
        bias = tf.nn.bias_add(conv, biases)
        conv5 = tf.nn.relu(bias, name=scope)
        parameters += [kernel, biases]
    print_activations(conv5)

    # pool5
    pool5 = tf.nn.max_pool(conv5,
                           ksize=[1, 3, 3, 1],
                           strides=[1, 2, 2, 1],
                           padding='VALID',
                           name='pool5')
    print_activations(pool5)

    # fc1
    with tf.name_scope('fc1') as scope:
        # Flatten with -1 in the batch dimension so the graph also accepts the
        # full test set, not only a fixed-size batch               <-------
        pool_shape = pool5.get_shape().as_list()
        dim = pool_shape[1] * pool_shape[2] * pool_shape[3]
        reshape = tf.reshape(pool5, [-1, dim])
        Weight = tf.Variable(tf.truncated_normal([dim, 4096], stddev=0.01), name='weights')
        bias = tf.Variable(tf.constant(0.1, shape=[4096]), name='biases')
        fc1 = tf.nn.relu(tf.matmul(reshape, Weight) + bias, name=scope)
        tf.add_to_collection('losses', regularizer(Weight))
        if train: fc1 = tf.nn.dropout(fc1, 0.5)
        parameters += [Weight, bias]
    print_activations(fc1)

    # fc2
    with tf.name_scope('fc2') as scope:
        Weight = tf.Variable(tf.truncated_normal([4096, 4096], stddev=0.01), name='weights')
        bias = tf.Variable(tf.constant(0.1, shape=[4096]), name='biases')
        fc2 = tf.nn.relu(tf.matmul(fc1, Weight) + bias, name=scope)
        tf.add_to_collection('losses', regularizer(Weight))
        if train: fc2 = tf.nn.dropout(fc2, 0.5)
        parameters += [Weight, bias]
    print_activations(fc2)

    # fc3
    with tf.name_scope('fc3') as scope:
        Weight = tf.Variable(tf.truncated_normal([4096, N_CLASS], stddev=0.01), name='weights')
        bias = tf.Variable(tf.constant(0.1, shape=[N_CLASS]), name='biases')
        logits = tf.add(tf.matmul(fc2, Weight), bias, name=scope)
        tf.add_to_collection('losses', regularizer(Weight))
        parameters += [Weight, bias]
    print_activations(logits)

    return logits, parameters

 

 
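One thing to keep in mind when reusing inference(): the training script below builds the graph with the default train=True, so dropout also stays active during validation and testing. A minimal sketch of an evaluation-only graph (input size and class count here are illustrative, and the freshly created variables would still have to be restored from a checkpoint via tf.train.Saver):

import tensorflow as tf
import AlexNet as Net

# Sketch: build the network for evaluation only. train=False keeps dropout off;
# note that inference() uses tf.Variable, so this creates fresh (untrained) weights.
images = tf.placeholder(tf.float32, [None, 224, 224, 3])
regularizer = tf.contrib.layers.l2_regularizer(0.0001)
logits, _ = Net.inference(images, regularizer, N_CLASS=5, train=False)
probs = tf.nn.softmax(logits)   # class probabilities for each image in the batch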

AlexNet_train.py — training script

Picks up the L2 regularization terms collected by the network module.

Training and validation each draw a random batch of images, while testing loads the entire test set at once, as sketched below.
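Because of that mixed feeding pattern, the placeholders in the script leave the batch dimension open; a minimal sketch of the idea (the input size and class count are illustrative):

import tensorflow as tf

# The batch dimension is None, so the same placeholder accepts a
# 128-image training batch and the full test set alike.
image_holder = tf.placeholder(tf.float32, [None, 224, 224, 3])
label_holder = tf.placeholder(tf.float32, [None, 5])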

import AlexNet as Net
import tensorflow as tf
import image_info as img_io


'''Training parameters'''

INPUT_DATA = './flower_photos'    # data directory
VALIDATION_PERCENTAGE = 10        # percentage of validation data
TEST_PERCENTAGE = 10              # percentage of test data
BATCH_SIZE = 128                  # batch size
INPUT_SIZE = [224, 224]           # standard input size
MAX_STEP   = 5000                 # maximum number of training steps

def loss(logits, labels):
    """
    Combine the cross entropy with the L2 terms collected in 'losses'.
    Args:
        logits: Logits from inference().
        labels: one-hot labels of shape [batch_size, n_class]
    Returns:
        Loss tensor of type float.
    """
    cross_entropy_mean = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=tf.argmax(labels, 1), logits=logits), name='cross_entropy')
    tf.add_to_collection('losses', cross_entropy_mean)
    # Total loss = cross entropy + the L2 terms collected inside inference()
    return tf.add_n(tf.get_collection('losses'), name='total_loss')

if __name__ == '__main__':

    image_dict = img_io.creat_image_lists(VALIDATION_PERCENTAGE, TEST_PERCENTAGE, INPUT_DATA)
    n_class = len(image_dict.keys())

    # None in the batch dimension lets the same graph accept both a
    # training batch and the whole test set
    image_holder = tf.placeholder(tf.float32, [None, INPUT_SIZE[0], INPUT_SIZE[1], 3])
    label_holder = tf.placeholder(tf.float32, [None, n_class])

    # Choose the regularizer and build the forward pass
    regularizer = tf.contrib.layers.l2_regularizer(0.0001)
    logits, _ = Net.inference(image_holder, regularizer, n_class)

    # Compute the loss and build the back-propagation step
    total_loss = loss(logits, label_holder)
    train_step = tf.train.GradientDescentOptimizer(1e-3).minimize(total_loss)

    # Accuracy computation
    with tf.name_scope('evaluation'):
        correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(label_holder, 1))
        evaluation_step    = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))


    with tf.Session() as sess:

        # Initialise the variables only after the whole graph has been built
        tf.global_variables_initializer().run()

        for step in range(MAX_STEP):
            print(step)
            # Training
            image_batch, label_batch = img_io.get_random_cached_inputs(
                sess, n_class, image_dict, BATCH_SIZE, 'training', INPUT_DATA, INPUT_SIZE)
            sess.run(train_step, feed_dict={
                image_holder: image_batch, label_holder: label_batch})

            # Validation
            if (step % 100 == 0) or (step + 1 == MAX_STEP):
                image_batch_val, label_batch_val = img_io.get_random_cached_inputs(
                    sess, n_class, image_dict, BATCH_SIZE, 'validation', INPUT_DATA, INPUT_SIZE)
                validation_accuracy = sess.run(evaluation_step, feed_dict={
                    image_holder: image_batch_val, label_holder: label_batch_val})
                print('Step %d: Validation accuracy on random sampled %d examples = %.1f%%' %
                      (step, BATCH_SIZE, validation_accuracy * 100))

        # Testing
        image_batch_test, label_batch_test = \
            img_io.get_test_images(sess, image_dict, n_class, INPUT_DATA, INPUT_SIZE)
        test_accuracy = sess.run(evaluation_step, feed_dict={
            image_holder: image_batch_test, label_holder: label_batch_test})
        print('Final test accuracy = %.1f%%' % (test_accuracy * 100))

 
