This article shares a way to work around MongoDB running out of memory during aggregation. I hope you find it useful after reading; let's look at the approach together.

Each MongoDB document is limited to 16 MB (the BSON document size limit). An aggregation result is a BSON document, so when a pipeline produces more data than it is allowed to hold in memory, MongoDB reports an error such as:

exceeded memory limit for $group, but didn't allow external sort.

This size problem can be worked around by allowing the aggregation to use disk. For example:

db.flowlog.aggregate([{$group:{_id:"$_id"}}], {allowDiskUse: true})

Java code snippet:

AggregationOptions options = new AggregationOptions.Builder().allowDiskUse(true).build();
Aggregation agg = Aggregation.newAggregation(/* aggregation operations go here */).withOptions(options);
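For context, here is a minimal, self-contained sketch of how this option is typically wired into a Spring Data MongoDB aggregation and executed; the mongoTemplate parameter, the flowlog collection name, and the FlowlogGroup result class are illustrative assumptions, not part of the original code.

import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.aggregation.Aggregation;
import org.springframework.data.mongodb.core.aggregation.AggregationOptions;
import org.springframework.data.mongodb.core.aggregation.AggregationResults;

public class FlowlogAggregation {

    // Minimal sketch: group the (hypothetical) "flowlog" collection by _id
    // with disk use enabled, mirroring the shell example above.
    public AggregationResults<FlowlogGroup> groupWithDiskUse(MongoTemplate mongoTemplate) {
        AggregationOptions options = new AggregationOptions.Builder()
                .allowDiskUse(true)   // lets the $group stage spill to disk instead of failing
                .build();

        Aggregation agg = Aggregation.newAggregation(
                Aggregation.group("_id"))
                .withOptions(options);

        // "flowlog" and FlowlogGroup are illustrative names for this sketch.
        return mongoTemplate.aggregate(agg, "flowlog", FlowlogGroup.class);
    }

    // Simple holder for the grouped result.
    public static class FlowlogGroup {
        private String id;
        public String getId() { return id; }
        public void setId(String id) { this.id = id; }
    }
}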

However, if the result set itself exceeds 16 MB, the aggregation will still fail.

In that case, use an aggregation like the one below, which writes the result into another collection with $out instead of returning it to the client:

Aggregation agg = Aggregation.newAggregation(
        Aggregation.group(field1, field2, field3)
                .sum(field4).as("sampleField1")
                .sum(field5).as("sampleField2"),
        Aggregation.project(field4, field5),
        new AggregationOperation() {
            @Override
            public DBObject toDBObject(AggregationOperationContext context) {
                // write the aggregation result into the "test" collection
                return new BasicDBObject("$out", "test");
            }
        }).withOptions(options);
mongo.aggregate(agg, sourceCollection, Test.class);
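Because $out materializes the result in a target collection instead of returning it to the client, the 16 MB result-size concern no longer applies; the data can then be read back with ordinary paged queries. Below is a hedged sketch of that read-back step, assuming the output collection is named "test" as in the snippet above; the Test stub and the page size of 1000 are illustrative choices for this sketch only.

import java.util.List;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Query;

public class AggregationOutputReader {

    // Stand-in for the Test result class used with $out in the snippet above.
    public static class Test {
        private String id;
        public String getId() { return id; }
        public void setId(String id) { this.id = id; }
    }

    // Once $out has written the aggregation result into the "test" collection,
    // it can be read back in pages with normal queries instead of arriving as a
    // single, potentially oversized response document.
    public List<Test> readFirstPage(MongoTemplate mongoTemplate) {
        Query query = new Query().limit(1000);   // page size is an arbitrary choice for this sketch
        return mongoTemplate.find(query, Test.class, "test");
    }
}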

If you need to add a constant field during the aggregation, you can use the following form:

Aggregation agg = Aggregation.newAggregation(
        Aggregation.group(OnofflineUserHistoryField.MAC, StalogField.UTC_CODE)
                .sum(OnofflineUserHistoryField.WIFI_UP_DOWN).as(OnofflineUserHistoryField.WIFI_UP_DOWN)
                .sum(OnofflineUserHistoryField.ACTIVE_TIME).as(OnofflineUserHistoryField.ACTIVE_TIME),
        Aggregation.project("mac", "buildingId", "utcCode",
                        OnofflineUserHistoryField.ACTIVE_TIME, OnofflineUserHistoryField.WIFI_UP_DOWN)
                .and(new AggregationExpression() {
                    @Override
                    public DBObject toDbObject(AggregationOperationContext context) {
                        // $cond with identical branches, so "day" always gets the constant 20161114
                        return new BasicDBObject("$cond", new Object[]{
                                new BasicDBObject("$eq", new Object[]{"$tenantId", 0}),
                                20161114, 20161114});
                    }
                }).as("day").andExclude("_id"),
        // or, alternatively, produce the constant with $add:
        // .and(new AggregationExpression() {
        //     @Override
        //     public DBObject toDbObject(AggregationOperationContext context) {
        //         return new BasicDBObject("$add", new Object[]{20141114});
        //     }
        // }).as("day").andExclude("_id"),
        new AggregationOperation() {
            @Override
            public DBObject toDBObject(AggregationOperationContext context) {
                // write the result into the dayStaInfoTmp collection
                return new BasicDBObject("$out", "dayStaInfoTmp");
            }
        }).withOptions(options);

Having read this article, you should now have a basic idea of how to deal with MongoDB running out of memory during aggregation. For more on related topics, follow the 亿速云 industry news channel. Thanks for reading!