I'm pulling JSON data from an API, and the output looks like this:

[[{'employeeId': 1, 'lastName': 'Smith'}, {'employeeId': 2, 'lastName': 'Flores'}]]

There are roughly 250,000 objects in the list. I can iterate over the objects in the list and run update_one via PyMongo like this:

json_this = json.dumps(json_list[0])
json_that = json.loads(json_this)
for x in json_that:
    collection.update_one({"employeeId": x['employeeId']}, {"$set": x}, upsert=True)

However, with 250k records this takes a very long time. I'm trying to use update_many instead, but I can't figure out how to convert/format this JSON list correctly for the update_many function. Any guidance would be appreciated.
Solution
Upserting 250K documents into the database can be a heavy task. You cannot use update_many here, because the filter query and the update values differ for each dictionary. The approach below at least avoids one database call per record, though I'm not sure how well it fits your case. Note that I'm a beginner in Python; this is basic code to get you started.

The best way to batch operations is PyMongo's bulk write API. To keep each bulk_write() call a manageable size, we split the 250K records into chunks:
from pymongo import UpdateOne
from pprint import pprint

json_this = json.dumps(json_list[0])
json_that = json.loads(json_this)

primaryBulkArr = []
secondaryBulkArr = []
thirdBulkArr = []

## Here we're splitting the 250K records into 3 arrays, in case we want to finish one chunk at a time.
# No need to split everything at once - finish end-to-end for one chunk & restart the process
# for the next chunk from the index of the list where you left off previously.
for index, x in enumerate(json_that):
    if index < 90000:
        primaryBulkArr.append(
            UpdateOne({"employeeId": x['employeeId']}, {'$set': x}, upsert=True))
    elif index < 180000:
        secondaryBulkArr.append(
            UpdateOne({"employeeId": x['employeeId']}, {'$set': x}, upsert=True))
    else:
        thirdBulkArr.append(
            UpdateOne({"employeeId": x['employeeId']}, {'$set': x}, upsert=True))

## The reason for splitting into 3 arrays is that you can run the code below in parallel,
# if your DB & application servers can take it.
# At the end of the day, irrespective of time taken, only 3 DB calls are needed,
# and bulk ops are much more efficient than one call per record.
try:
    result = collection.bulk_write(primaryBulkArr)  # repeat for secondaryBulkArr & thirdBulkArr
    ## result = collection.bulk_write(primaryBulkArr, ordered=False)
    # Opt for the above if you want all dictionaries to be processed,
    # even if an error occurs partway through for one dict.
    pprint(result.bulk_api_result)
except Exception as e:
    print("An exception occurred ::", e)  ## Inspect the failed ids if any & retry
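As a variation on the above (names hypothetical, not from the original answer), the hardcoded 90000/180000 boundaries can be replaced by a small chunking helper, so the batch size is a single tunable number. This sketch uses dummy records and omits the actual bulk_write call; in the real script each chunk would be mapped to UpdateOne requests and passed to collection.bulk_write():

```python
def chunked(items, size):
    """Yield successive slices of `items`, each at most `size` long."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

# Dummy stand-ins for the 250K API records.
records = [{"employeeId": i, "lastName": "X"} for i in range(10)]

# With size=4, ten records become chunks of 4, 4, and 2.
for chunk in chunked(records, 4):
    # Real script: requests = [UpdateOne({"employeeId": x['employeeId']},
    #                                    {'$set': x}, upsert=True) for x in chunk]
    #              collection.bulk_write(requests, ordered=False)
    print(len(chunk))  # → 4, 4, 2
```

Each bulk_write still sends one round trip per chunk, so a smaller chunk size trades a few extra DB calls for lower memory use per batch.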