I am training with TensorFlow 1.4.0 on two GPUs.
Why is the memory usage of the two GPUs so different? Here is the nvidia-smi output:
+-------------------------------+----------------------+----------------------+
| 4 Tesla K80 On | 00000000:00:1B.0 Off | 0 |
| N/A 50C P0 70W / 149W | 8538MiB / 11439MiB | 100% E. Process |
+-------------------------------+----------------------+----------------------+
| 5 Tesla K80 On | 00000000:00:1C.0 Off | 0 |
| N/A 42C P0 79W / 149W | 4442MiB / 11439MiB | 48% E. Process |
+-------------------------------+----------------------+----------------------+
GPU 4 uses about twice as much memory as GPU 5. I expected the memory usage on both GPUs to be roughly the same. Why does this happen? Can anyone help me? Thanks a lot!
Here is the code, along with the two helper functions used to place ops and to average the gradients across towers:
tower_grads = []
lossList = []
accuracyList = []
for gpu in range(NUM_GPUS):
    with tf.device(assign_to_device('/gpu:{}'.format(gpu), ps_device='/cpu:0')):
        print '============ GPU {} ============'.format(gpu)
        imageBatch, labelBatch, epochNow = read_and_decode_TFRecordDataset(
            args.tfrecords, BATCH_SIZE, EPOCH_NUM)
        identityPretrainModel = identity_pretrain_inference.IdenityPretrainNetwork(
            IS_TRAINING, BN_TRAINING, CLASS_NUM, DROPOUT_TRAINING)
        logits = identityPretrainModel.inference(imageBatch)
        losses = identityPretrainModel.cal_loss(logits, labelBatch)
        accuracy = identityPretrainModel.cal_accuracy(logits, labelBatch)
        optimizer = tf.train.AdamOptimizer(learning_rate=LEARNING_RATE)
        grads_and_vars = optimizer.compute_gradients(losses)
        lossList.append(losses)
        accuracyList.append(accuracy)
        tower_grads.append(grads_and_vars)
grads_and_vars = average_gradients(tower_grads)
train = optimizer.apply_gradients(grads_and_vars)
global_step = tf.train.get_or_create_global_step()
incr_global_step = tf.assign(global_step, global_step + 1)
losses = sum(lossList) / NUM_GPUS
accuracy = sum(accuracyList) / NUM_GPUS
def assign_to_device(device, ps_device='/cpu:0'):
    def _assign(op):
        node_def = op if isinstance(op, tf.NodeDef) else op.node_def
        # PS_OPS (defined elsewhere) lists variable-related op type names that
        # should live on the parameter-server device; everything else goes to
        # the worker GPU passed in as `device`.
        if node_def.op in PS_OPS:
            return ps_device
        else:
            return device
    return _assign
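PS_OPS is not defined in the snippet; in the multigpu_cnn.py example it names the variable-related op types that must stay on the parameter server. Below is a minimal standalone sketch of the same routing logic, with a hypothetical FakeNodeDef standing in for a TensorFlow NodeDef and an assumed PS_OPS value, so the behaviour can be checked without TensorFlow installed:

```python
from collections import namedtuple

# Assumed value, following the multigpu_cnn.py example: op types that
# should be placed on the parameter-server device.
PS_OPS = ['Variable', 'VariableV2', 'AutoReloadVariable']

# Hypothetical stand-in for tf.NodeDef: only the .op field matters here.
FakeNodeDef = namedtuple('FakeNodeDef', ['op'])

def assign_to_device(device, ps_device='/cpu:0'):
    def _assign(node_def):
        # Variable ops go to the parameter server; everything else to the GPU.
        if node_def.op in PS_OPS:
            return ps_device
        return device
    return _assign

placer = assign_to_device('/gpu:0')
print(placer(FakeNodeDef(op='VariableV2')))  # /cpu:0
print(placer(FakeNodeDef(op='MatMul')))      # /gpu:0
```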
def average_gradients(tower_grads):
    average_grads = []
    for grad_and_vars in zip(*tower_grads):
        # Note that each grad_and_vars looks like the following:
        # ((grad0_gpu0, var0_gpu0), ... , (grad0_gpuN, var0_gpuN))
        grads = []
        for g, _ in grad_and_vars:
            # Add 0 dimension to the gradients to represent the tower.
            expanded_g = tf.expand_dims(g, 0)
            # Append on a 'tower' dimension which we will average over below.
            grads.append(expanded_g)
        # Average over the 'tower' dimension.
        grad = tf.concat(grads, 0)
        grad = tf.reduce_mean(grad, 0)
        # Keep in mind that the Variables are redundant because they are shared
        # across towers. So .. we will just return the first tower's pointer to
        # the Variable.
        v = grad_and_vars[0][1]
        grad_and_var = (grad, v)
        average_grads.append(grad_and_var)
    return average_grads
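To sanity-check what average_gradients computes, here is a NumPy equivalent (a sketch, not the TF graph version): for each variable position it stacks the per-tower gradients and averages over the tower axis, keeping the first tower's variable reference:

```python
import numpy as np

def average_gradients_np(tower_grads):
    """NumPy mirror of average_gradients: mean over towers per variable."""
    average_grads = []
    for grad_and_vars in zip(*tower_grads):
        # Stack the per-tower gradients along a new 'tower' axis, then average.
        grads = np.stack([g for g, _ in grad_and_vars], axis=0)
        # Variables are shared across towers; keep the first tower's reference.
        average_grads.append((grads.mean(axis=0), grad_and_vars[0][1]))
    return average_grads

# Two towers, two variables ('w' and 'b'):
tower0 = [(np.array([1.0, 2.0]), 'w'), (np.array([3.0]), 'b')]
tower1 = [(np.array([3.0, 4.0]), 'w'), (np.array([5.0]), 'b')]
avg = average_gradients_np([tower0, tower1])
print(avg[0][0])  # [2. 3.]
print(avg[1][0])  # [4.]
```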
Solution
The multi-GPU code comes from multigpu_cnn.py. The cause is the missing with tf.device('/cpu:0'): at line 124 of that example. Without it, every op whose device is not explicitly pinned is placed on GPU 0, so GPU 0 (device 4 here) uses far more memory than the other GPU.
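A pseudocode-style sketch of the corrected structure (TF 1.x style, using the question's own names; the per-tower body is elided):

```
with tf.device('/cpu:0'):   # make the CPU the default placement
    tower_grads = []
    for gpu in range(NUM_GPUS):
        with tf.device(assign_to_device('/gpu:{}'.format(gpu), ps_device='/cpu:0')):
            # build the per-tower forward pass, loss and gradients here
            ...
            tower_grads.append(grads_and_vars)
    grads_and_vars = average_gradients(tower_grads)
    train = optimizer.apply_gradients(grads_and_vars)
```

With the outer with tf.device('/cpu:0'): in place, only the ops explicitly routed by assign_to_device land on each GPU; variables and the gradient-averaging ops stay on the CPU instead of silently defaulting to GPU 0, and the per-GPU memory usage becomes roughly balanced.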