An aggregation takes tens or hundreds of seconds, compared to how it performed at first. I tested our system on EC2 r3.8xlarge instances; after running for 3 days it became very slow.
Could you clarify what you mean by slow and how you are measuring this?
Average CPU/RAM/disk usage is low, but I saw that a random one of the CPUs
will spike to 100% for a couple of seconds.
CPU usage could be related to index builds, document updates, aggregation
queries, compression/decompression of documents in the WiredTiger storage
engine, or any other normal operations. Is there any pattern to the CPU
spikes that you can relate to some specific operation?
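If you want to catch the operation behind a spike, one option (a minimal sketch; the 1-second threshold is an arbitrary cutoff, not a recommendation) is to poll db.currentOp() from the mongo shell while the CPU is pegged:

    // Run in the mongo shell during a CPU spike: list in-progress
    // operations that have been active for at least one second.
    db.currentOp({ active: true, secs_running: { $gte: 1 } }).inprog.forEach(
        function (op) {
            print(op.opid + "  " + op.op + "  " + op.ns + "  " + op.secs_running + "s");
        }
    );

If the same namespace or operation type shows up on every spike, that would point at a specific workload.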
Could you please specify:
- your operating system and MongoDB version
- the storage engine used (MMAPv1 or WiredTiger)
- whether the system is running a single mongod, or if there are
multiple mongod processes running on the server
- whether there are other processes running on the system that could
create resource contention (e.g. other database servers, web servers, etc.)
A single mongod v3.2.4 from http://repo.mongodb.org/apt/ubuntu/dists/trusty/, all defaults, no sharding.
After some googling, it looks like $addToSet is a slower operation.
Is this because of the $addToSet operation, or can other operations also
leave mongod stuck in tcmalloc?
Why do you suspect $addToSet is the cause?
Knowing a little about your use case might help:
- can you provide an example document?
- how many elements are typically in your arrays when you use $addToSet?
Each document has 3 arrays, and each update will $addToSet to those 3 arrays. The historical hourly documents also contain per-minute sub-documents. We planned to separate the insert/update of current documents (keeping them in cache) from the historical documents.
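For reference, here is a minimal sketch of that update pattern; the collection, key, and field names are hypothetical placeholders, not your schema:

    // Hypothetical hourly document with three arrays grown via $addToSet.
    db.hourly.update(
        { _id: "host1:2016-04-01T10" },    // hypothetical per-hour key
        { $addToSet: {
            ips:    "10.0.0.5",
            pages:  "/index.html",
            agents: "curl/7.43"
        } }
    );

One thing to keep in mind: $addToSet has to check an array for an existing copy of the value before appending, so its cost grows with array length, and updating a large document also means decompressing and rewriting it under WiredTiger, which costs CPU.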
- can you provide example output for slow queries (log lines and,
ideally, the query with explain(true))?
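If it helps, explain output can be gathered from the shell as follows; the collection name and pipeline below are placeholders rather than your actual workload:

    // Verbose explain for a find(); explain(true) requests the
    // allPlansExecution verbosity.
    db.hourly.find({ _id: "host1:2016-04-01T10" }).explain(true);

    // For an aggregation, pass { explain: true } as an option.
    db.hourly.aggregate(
        [ { $match: { day: "2016-04-01" } },
          { $group: { _id: "$host", total: { $sum: 1 } } } ],
        { explain: true }
    );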
Also, may I ask what tooling you use to monitor the performance of your
MongoDB deployment? You might want to check out MongoDB Cloud Manager
<https://www.mongodb.com/cloud/>, which collects detailed performance
metrics. Note: Cloud Manager is a freemium service with a 30-day free trial.
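Since tcmalloc came up, allocator and cache statistics are also available directly from the shell with no external tooling; a quick sketch (the tcmalloc section appears on stock Linux builds, which bundle the allocator):

    // Snapshot allocator, storage-engine cache, and memory statistics.
    var status = db.serverStatus();
    printjson(status.tcmalloc);          // tcmalloc allocator counters
    printjson(status.wiredTiger.cache);  // WiredTiger cache usage (WiredTiger only)
    printjson(status.mem);               // resident/virtual memory

Sampling these before and during a spike would show whether the allocator is doing significant work at that moment.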