...
Hello,

We are using MongoDB in a Kubernetes environment with CentOS as the base image, configured with 3 replicas and the following resource settings:

Limits:
  cpu: 5
  memory: 10Gi
Requests:
  cpu: 1
  memory: 2Gi

When a pod starts it occupies up to 4G of memory, and gradually, as the parallel connections and queries increase, it grows to around 98% of total memory and does not release the memory after use.

[root@node00 cloud-user]# kubectl top pod -n test | grep mngo
e-cmngo-replica-0   92m   9433Mi
e-mngo-replica-1    89m   6065Mi
e-mngo-replica-2    96m   6482Mi

The WiredTiger cache is set to the default (around 4G). Is there a way we can release the memory after usage?

WiredTiger info:

e:PRIMARY> db.serverStatus().wiredTiger.cache
{
  "application threads page read from disk to cache count" : 1599135,
  "application threads page read from disk to cache time (usecs)" : 1619303577,
  "application threads page write from cache to disk count" : 1796,
  "application threads page write from cache to disk time (usecs)" : 564252,
  "bytes belonging to page images in the cache" : 3797310198,
  "bytes belonging to the cache overflow table in the cache" : 182,
  "bytes currently in the cache" : 3809181895,
  "bytes dirty in the cache cumulative" : 88830916,
  "bytes not belonging to page images in the cache" : 11871696,
  "bytes read into cache" : 463809362069,
  "bytes written from cache" : 92928011,
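For context on why "default" comes out at around 4G here: by default WiredTiger sizes its internal cache as the larger of 50% of (RAM − 1 GB) and 256 MB, and recent MongoDB versions use the container's memory limit as "RAM" when cgroup limits are set. A minimal sketch of that formula (the function name is ours, not a MongoDB API):

```python
def default_wiredtiger_cache_gb(ram_gb: float) -> float:
    """Default WiredTiger internal cache size: max(0.5 * (RAM - 1 GB), 0.25 GB)."""
    return max(0.5 * (ram_gb - 1.0), 0.25)

# With the 10 Gi container limit from this ticket:
print(default_wiredtiger_cache_gb(10.0))  # -> 4.5
```

This only covers the WiredTiger internal cache; resident memory also includes connection overhead, in-flight operations, and allocator overhead, which is why the pod can grow well past 4.5 GB.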
JIRAUSER1260032 commented on Fri, 21 May 2021 09:50:57 +0000:

The image below from my local setup clearly shows the virtual memory increasing from 6GB to 8GB during an insert operation, and afterwards it remained high at 8GB despite there being no active connections or any CRUD operations. The memory came down only when a restart of mongo was triggered. Why does it behave like this? Is it not expected that mongo should release the memory if there are no operations?

JIRAUSER1260032 commented on Fri, 21 May 2021 09:25:34 +0000:

Hello Dima, I am still not convinced, as I can see the number of active connections is only 2, which includes the current terminal. With just 2 active connections the system consumes less CPU but all available memory! Is this expected behavior? What is the need to consume all memory with minimal operations? You mentioned that CRUD operations are happening continuously; can you please let me know which IP most of the traffic comes from, so that I can check locally? Is there a way we can check where the active traffic from clients shows up? Also, this behavior is consistent. The metrics shared cover just 2 days, but mongodb in this lab never releases memory (always 98%). I would like to understand more.

PRIMARY> db.serverStatus().connections
{
  "current" : 151,
  "available" : 838709,
  "totalCreated" : 98058,
  "active" : 2
}

dmitry.agranat commented on Thu, 20 May 2021 08:57:11 +0000:

Thanks ece.sagar@gmail.com for the additional context. Based on what I see in your workload, CRUD operations never really stop. There are some periods of time when CRUD operations significantly decrease for a couple of seconds, but it is not expected that the consumed resident memory of a process would be returned to the OS immediately during these 1-2 seconds of low activity. Regarding the comparison between CPU and memory utilization, it is also not expected that the moment the CPU utilization drops to 50%, the same should happen to memory.
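On the question of identifying which client IPs the traffic comes from: each entry returned by db.currentOp() (or the $currentOp aggregation stage) carries the remote address in its "client" field, so the operations can be grouped by host. A hypothetical sketch of that grouping; the sample documents below are invented for illustration, not taken from this ticket:

```python
from collections import Counter

# Hypothetical currentOp-style documents; on a live server these would come
# from db.currentOp(true)["inprog"] or the $currentOp aggregation stage.
inprog = [
    {"client": "10.0.0.5:52110", "op": "insert", "active": True},
    {"client": "10.0.0.5:52111", "op": "update", "active": True},
    {"client": "10.0.0.9:40022", "op": "query", "active": False},
]

# Count active operations per client host (strip the ephemeral port).
by_host = Counter(
    doc["client"].rsplit(":", 1)[0] for doc in inprog if doc["active"]
)
print(by_host.most_common())  # -> [('10.0.0.5', 2)]
```

The mongod log also records each accepted connection with the remote address, which gives a longer-term view than the point-in-time currentOp snapshot.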
I've noticed that about 3% of the total memory is fragmented, and we can try to force it to be returned to the OS more aggressively, but I am not sure this aligns with the expectation of reclaiming all the memory. As this works as designed, I will go ahead and close this ticket. The SERVER project is for bugs and feature suggestions for the MongoDB server. If you have further questions about memory, we'd like to encourage you to start by asking our community for help by posting on the MongoDB Developer Community Forums. Regards, Dima

JIRAUSER1260032 commented on Wed, 19 May 2021 15:48:49 +0000:

Dima, the version is the db server version:

eden-csf:PRIMARY> db.version()
4.2.2-3
eden-csf:PRIMARY>

The shell banner says "Percona Server for MongoDB shell version v4.2.2-3".

Regarding the logs, the timestamps are in the UTC time zone:

bash-4.4$ date
Wed May 19 15:37:15 UTC 2021

CRUD operations were performed for a specific time period, say 1~1.5 hours. During that period, high memory usage makes sense, since CPU utilization is also high. But once the operations complete, the memory utilization of the pod should also come down, just as the CPU usage falls back to its lowest level. For example, before starting the CRUD operations on replica-0 the CPU is 50; during the CRUD operations it shoots up to 5008; once they complete, the CPU comes back to 50~100. But the memory usage of the pod does not behave the same way: it starts at 5788, increases to 9862, and remains there after the CRUD operations.

dmitry.agranat commented on Wed, 19 May 2021 13:14:48 +0000:

ece.sagar@gmail.com, I have a couple of clarifying questions: What does version 4.2.2-3 stand for? What timeline should we be looking at for the expected memory release (in the UTC timezone)?
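Regarding the "3% fragmented" observation above: on Linux builds, db.serverStatus().tcmalloc exposes allocator counters, and the freed-but-not-yet-returned fraction can be estimated from them. A sketch under that assumption; the field names follow the tcmalloc section of serverStatus, but the numbers below are illustrative only, not measurements from this ticket:

```python
# Illustrative tcmalloc counters (bytes); real values come from
# db.serverStatus().tcmalloc on a running mongod.
tcmalloc = {
    "generic": {
        "heap_size": 10_000_000_000,
        "current_allocated_bytes": 9_400_000_000,
    },
    "tcmalloc": {"pageheap_free_bytes": 300_000_000},
}

heap = tcmalloc["generic"]["heap_size"]
page_free = tcmalloc["tcmalloc"]["pageheap_free_bytes"]

# Memory tcmalloc holds in its page heap but the application is not using,
# as a share of the total heap.
fragmented_pct = 100.0 * page_free / heap
print(round(fragmented_pct, 1))  # -> 3.0
```

If more aggressive release is needed, MongoDB also exposes a tcmallocAggressiveMemoryDecommit server parameter (where available in your version) that makes tcmalloc decommit freed pages to the OS more eagerly, at some CPU cost.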
I am asking because between the startup at 2021-05-18T13:10:34.680Z and the end of the data at 2021-05-19T07:44:34.283Z you query and perform insert/update/delete operations utilizing about ~6 GB of resident memory, so I am not sure why we should expect the memory to be released.

JIRAUSER1260032 commented on Wed, 19 May 2021 08:21:26 +0000:

metrics.tar uploaded to the support uploader location.

JIRAUSER1260032 commented on Wed, 19 May 2021 04:04:58 +0000:

Hello Dima, I will try to get the required logs and upload them as soon as possible; I need some time. Meanwhile, can you please confirm whether this is the expected behavior of mongo? I have read a couple of blogs stating that, out of the total available memory, WiredTiger consumes memory per its configuration and the remaining memory is consumed by the filesystem cache, as mentioned in https://docs.mongodb.com/manual/core/wiredtiger/#std-label-storage-wiredtiger-journal: "Via the filesystem cache, MongoDB automatically uses all free memory that is not used by the WiredTiger cache or by other processes." Precisely, I would want the memory consumed by the pods during high usage to be released after that usage. Regards

dmitry.agranat commented on Tue, 18 May 2021 19:26:37 +0000:

Hi ece.sagar@gmail.com, would you please archive (tar or zip) the mongod.log files covering the reported event and the $dbpath/diagnostic.data directory (the contents are described here) and upload them to this support uploader location? Files uploaded to this portal are visible only to MongoDB employees and are routinely deleted after some time. Dima
Steps to reproduce:
1. Start the mongo pod.
2. Increase the number of records to 50K.
3. Run a locust test with client requests as below:
   number_of_users = 10
   hatch_rate = 3
   duration = 10m