MongoDB v5.2 is available!

Today marks the release of MongoDB 5.2, the second Rapid Release in the 5.x series, as well as additional integrations that bolster our overall platform.

Keep in mind that these Rapid Releases are designed to give early access to important features destined for the next Major Release (v6.0, later this year). The Enterprise builds of v5.2 are not officially supported by MongoDB, but v5.2 is also available (and fully supported) in MongoDB Atlas.

Here are some details on the newest features and capabilities in v5.2:

  • Time Series Enhancements 
    • Time series workloads often create and analyze staggering amounts of data; for example, sensors continuously generate new data with every measurement. This makes managing your storage footprint and resource efficiency a challenge. With new columnar compression for time series collections in MongoDB, users can now dramatically reduce their storage footprint, by as much as 70%, using best-in-class compression algorithms.
  • Improving Query Ergonomics
    • New query operators make developers’ lives easier by enabling more efficient and streamlined queries.
      • To satisfy a long-standing feature request, we’ve added the new $sortArray operator, which lets users easily sort the elements of an array, whether it’s an array of scalars or an array of documents, in an optimized and intuitive way.
      • New operators such as $topN, $bottomN, $maxN, $minN, $firstN, and $lastN allow users to quickly retrieve the subsets of their data that are of interest; for example, returning the top 5 performers for each region.
  • Long-Running Analytics Queries 
    • Users can now execute more complex analytical queries without having to rely on expensive ETL to a separate dedicated analytics database. Now out of beta, long-running snapshot queries can run for hours against a globally and transactionally consistent snapshot of a customer’s data. Snapshot queries can span multiple shards and be isolated to secondaries to avoid resource contention with operational workloads. Typical use cases include end-of-day reconciliation and reporting, ad-hoc data exploration, data mining, and more. 
  • Improving Operational Resilience 
    • When users add a node to their replica set, whether for read scaling, reducing read latency, or improved operational resilience, they rely on an existing process called initial sync, in which the new replica set member receives a full copy of the data from an existing member. For large data sets, this has historically been very slow. With the introduction of initial sync via file copy, we’re giving users another option that, according to our testing, can be up to four times faster.
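To make the time series enhancements concrete, here is a hypothetical mongosh sketch of creating a time series collection; in v5.2, the new columnar compression applies to such collections automatically. The collection and field names below are illustrative, not from the release itself.

```javascript
// Hypothetical example: a time series collection for sensor measurements.
// In 5.2, columnar compression is applied to time series collections automatically.
db.createCollection("sensorReadings", {
  timeseries: {
    timeField: "ts",        // required: the timestamp of each measurement
    metaField: "sensorId",  // optional: identifies the data source
    granularity: "minutes"  // expected interval between measurements
  }
});
```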
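The long-running analytics queries described above are requested with read concern level "snapshot". A hypothetical mongosh sketch, with illustrative collection and field names:

```javascript
// Hypothetical example: an analytics aggregation pinned to a consistent snapshot.
db.sales.aggregate(
  [ { $group: { _id: "$region", total: { $sum: "$amount" } } } ],
  { readConcern: { level: "snapshot" } }
);
```

To keep such queries from contending with operational workloads, combine this with a secondary read preference so the query runs on secondaries.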
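File-copy-based initial sync is selected via the initialSyncMethod server parameter on the new member. A minimal mongod.conf sketch, assuming an Enterprise build of v5.2 (verify the parameter against the release notes for your build):

```yaml
# Minimal mongod.conf fragment on the new replica set member (Enterprise builds):
# selects file-copy-based initial sync instead of the default logical sync.
setParameter:
  initialSyncMethod: fileCopyBased
```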
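As a sketch of the $sortArray operator described above (the collection and field names are illustrative), the stage below sorts an array of documents by a key; the plain-JavaScript sort alongside it shows the equivalent local behavior:

```javascript
// A $project stage using the new $sortArray operator (field names are illustrative).
const stage = {
  $project: {
    sortedPlayers: {
      $sortArray: { input: "$players", sortBy: { score: -1 } }
    }
  }
};

// Locally, the operator behaves like sorting the array by the sortBy spec:
const players = [
  { name: "ana", score: 7 },
  { name: "bo", score: 12 },
  { name: "cy", score: 9 }
];
const sortedPlayers = [...players].sort((a, b) => b.score - a.score);
console.log(sortedPlayers.map(p => p.name)); // [ 'bo', 'cy', 'ana' ]
```

For an array of scalars, sortBy is simply 1 (ascending) or -1 (descending) instead of a field specification.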
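Similarly, here is a sketch of $topN in a $group stage for the "top performers per region" example above (all names are illustrative); the local computation shows what the accumulator returns for a single group:

```javascript
// A $group stage using $topN: the top 3 players per region, by score.
const stage = {
  $group: {
    _id: "$region",
    topPlayers: { $topN: { n: 3, sortBy: { score: -1 }, output: "$player" } }
  }
};

// For a single group, the result is equivalent to sorting and taking the first n:
const docs = [
  { player: "ana", score: 7 },
  { player: "bo", score: 12 },
  { player: "cy", score: 9 },
  { player: "di", score: 3 }
];
const topPlayers = [...docs]
  .sort((a, b) => b.score - a.score)
  .slice(0, 3)
  .map(d => d.player);
console.log(topPlayers); // [ 'bo', 'cy', 'ana' ]
```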
