Yolo Darkflow error: tensorflow.python.framework.errors_impl.InvalidArgumentError: Invalid name - python

When I run the code, I get the following output:

 %Run run_img.py
/usr/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: compiletime version 3.4 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.5
  return f(*args, **kwds)
/usr/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: builtins.type size changed, may indicate binary incompatibility. Expected 432, got 412
  return f(*args, **kwds)
Traceback (most recent call last):
  File "/home/pi/Desktop/darkflow-master/run_img.py", line 9, in <module>
    from darkflow.net.build import TFNet
  File "/home/pi/Desktop/darkflow-master/darkflow/net/build.py", line 5, in <module>
    from .ops import op_create, identity
  File "/home/pi/Desktop/darkflow-master/darkflow/net/ops/__init__.py", line 1, in <module>
    from .simple import *
  File "/home/pi/Desktop/darkflow-master/darkflow/net/ops/simple.py", line 1, in <module>
    import tensorflow.contrib.slim as slim
  File "/home/pi/.local/lib/python3.5/site-packages/tensorflow/contrib/__init__.py", line 40, in <module>
    from tensorflow.contrib import distribute
  File "/home/pi/.local/lib/python3.5/site-packages/tensorflow/contrib/distribute/__init__.py", line 33, in <module>
    from tensorflow.contrib.distribute.python.tpu_strategy import TPUStrategy
  File "/home/pi/.local/lib/python3.5/site-packages/tensorflow/contrib/distribute/python/tpu_strategy.py", line 27, in <module>
    from tensorflow.contrib.tpu.python.ops import tpu_ops
  File "/home/pi/.local/lib/python3.5/site-packages/tensorflow/contrib/tpu/__init__.py", line 69, in <module>
    from tensorflow.contrib.tpu.python.ops.tpu_ops import *
  File "/home/pi/.local/lib/python3.5/site-packages/tensorflow/contrib/tpu/python/ops/tpu_ops.py", line 39, in <module>
    resource_loader.get_path_to_datafile("_tpu_ops.so"))
  File "/home/pi/.local/lib/python3.5/site-packages/tensorflow/contrib/util/loader.py", line 56, in load_op_library
    ret = load_library.load_op_library(path)
  File "/home/pi/.local/lib/python3.5/site-packages/tensorflow/python/framework/load_library.py", line 61, in load_op_library
    lib_handle = py_tf.TF_LoadLibrary(library_filename)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Invalid name: 
An op that loads optimization parameters into HBM for embedding. Must be
preceded by a ConfigureTPUEmbeddingHost op that sets up the correct
embedding table configuration. For example, this op is used to install
parameters that are loaded from a checkpoint before a training loop is
executed.

parameters: A tensor containing the initial embedding table parameters to use in embedding
lookups using the Adagrad optimization algorithm.
accumulators: A tensor containing the initial embedding table accumulators to use in embedding
lookups using the Adagrad optimization algorithm.
table_name: Name of this table; must match a name in the
  TPUEmbeddingConfiguration proto (overrides table_id).
num_shards: Number of shards into which the embedding tables are divided.
shard_id: Identifier of shard for this operation.
table_id: Index of this table in the EmbeddingLayerConfiguration proto
  (deprecated).
 (Did you use CamelCase?); in OpDef: name: "\nAn op that loads optimization parameters into HBM for embedding. Must be\npreceded by a ConfigureTPUEmbeddingHost op that sets up the correct\nembedding table configuration. For example, this op is used to install\nparameters that are loaded from a checkpoint before a training loop is\nexecuted.\n\nparameters: A tensor containing the initial embedding table parameters to use in embedding\nlookups using the Adagrad optimization algorithm.\naccumulators: A tensor containing the initial embedding table accumulators to use in embedding\nlookups using the Adagrad optimization algorithm.\ntable_name: Name of this table; must match a name in the\n  TPUEmbeddingConfiguration proto (overrides table_id).\nnum_shards: Number of shards into which the embedding tables are divided.\nshard_id: Identifier of shard for this operation.\ntable_id: Index of this table in the EmbeddingLayerConfiguration proto\n  (deprecated).\n" input_arg { ... type: DT_FLOAT ... } input_arg { ... type: DT_FLOAT ... } attr { ... default_value { i: -1 } ... has_minimum: true minimum: -1 } attr { ... default_value { s: "" } ... } attr { ... } attr { ... } summary: "..." description: "..." is_stateful: true
[every "..." above is the same multi-line docstring shown once at the start, repeated verbatim in each name, description, type_attr, number_attr, and type_list_attr field of the dumped OpDef; the repetitions are elided here for readability]
>>> 

I trained my own model, and I am running it on a Raspberry Pi 3 Model B.
The exact same code runs fine on my Windows machine, and it used to work on this exact Raspberry Pi; I reflashed the SD card in between.

I think the error happens while importing darkflow.net.build.
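
Judging from the traceback, the import chain actually dies inside TensorFlow itself while it loads _tpu_ops.so (darkflow/net/ops/simple.py just does "import tensorflow.contrib.slim as slim"). So the crash can be reproduced without darkflow at all; a minimal check, assuming the same python3 environment darkflow runs under:

# Minimal reproduction, independent of darkflow: on the broken install this
# raises the same InvalidArgumentError while TensorFlow registers its TPU ops.
import tensorflow.contrib.slim as slim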

I cloned the latest branch from GitHub (March 16) and built it with

python3 setup.py build_ext --inplace

The code I am trying to run:

import cv2
from darkflow.net.build import TFNet
import numpy as np


from keras.models import load_model

# Keras classifier applied to each region that YOLO detects
model = load_model('custom-2/svhn-multi-digit-24-09-F1-ds.h5')

option = {
    'model': 'custom-2/yolo-obj.cfg',
    'load': 'custom-2/yolo-obj_2200.weights',
    'threshold': 0.30,
    'gpu': 1.0
}

tfnet = TFNet(option)
colors = [tuple(255 * np.random.rand(3)) for i in range(5)]
frame=cv2.imread("custom-2/3.jpg",1)
frame=cv2.resize(frame,None,fx=0.5,fy=0.5)
#frame=cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

results = tfnet.return_predict(frame)
for color, result in zip(colors, results):
    tl = (result['topleft']['x'], result['topleft']['y'])
    br = (result['bottomright']['x'], result['bottomright']['y'])
    img=frame[tl[1]:br[1],tl[0]:br[0]]
    img=cv2.resize(img,(64,64))
    img=img[np.newaxis,...]
    res=model.predict(img)
    label = str(np.argmax(res[0]))+","+str(np.argmax(res[1]))
    frame = cv2.rectangle(frame, tl, br, color, 7)
    frame = cv2.putText(frame, label, tl, cv2.FONT_HERSHEY_COMPLEX, 1, (0, 0, 255), 2)

cv2.imshow('frame', frame)
cv2.waitKey(0)
cv2.destroyAllWindows()

from keras import backend as K
K.clear_session()

python reference solution

Try this solution. I ran into exactly the same problem as you, and this fixed it for me. The RuntimeWarnings at the top of your output ("compiletime version 3.4 ... does not match runtime version 3.5", "may indicate binary incompatibility") suggest the installed wheel was built against a different Python version, which is likely what corrupts the op registration and produces the nonsensical "Invalid name" error.

$ sudo apt-get install python-pip python3-pip
$ sudo pip3 uninstall tensorflow
$ git clone https://github.com/PINTO0309/Tensorflow-bin.git
$ cd Tensorflow-bin
$ sudo pip3 install tensorflow-1.11.0-cp35-cp35m-linux_armv7l.whl

Or use any other way of downgrading Tensorflow to 1.11.0.
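
After reinstalling, a quick sanity check is worth running (a minimal sketch, assuming the same python3 interpreter that darkflow uses):

import tensorflow as tf

print(tf.__version__)  # should print 1.11.0 after the downgrade

# the import that previously crashed while loading _tpu_ops.so
import tensorflow.contrib.slim as slim
print('tensorflow.contrib.slim imported OK')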
