[camel] branch master updated: mongodb3 - update docs about streaming data, remove DBCursor (#2561)


This is an automated email from the ASF dual-hosted git repository.

acosentino pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/camel.git


The following commit(s) were added to refs/heads/master by this push:
     new 2074f94  mongodb3 - update docs about streaming data, remove DBCursor (#2561)
2074f94 is described below

commit 2074f94109d0f1309ea3dcbb3fa519937c09c8f4
Author: Peter Nagy <[hidden email]>
AuthorDate: Thu Oct 11 19:19:25 2018 +0200

    mongodb3 - update docs about streaming data, remove DBCursor (#2561)
---
 .../src/main/docs/mongodb3-component.adoc          | 29 ++++++----------------
 1 file changed, 8 insertions(+), 21 deletions(-)

diff --git a/components/camel-mongodb3/src/main/docs/mongodb3-component.adoc b/components/camel-mongodb3/src/main/docs/mongodb3-component.adoc
index a34daf2..dc77707 100644
--- a/components/camel-mongodb3/src/main/docs/mongodb3-component.adoc
+++ b/components/camel-mongodb3/src/main/docs/mongodb3-component.adoc
@@ -658,28 +658,11 @@ Supports the following IN message headers:
 |`CamelMongoDbAllowDiskUse` |`MongoDbConstants.ALLOW_DISK_USE` | Enable aggregation pipeline stages to write data to temporary files. |boolean/Boolean
 |=======================================================================
 
-Efficient retrieval is supported via outputType=MongoIterable.
+By default, a List of all results is returned, which can be heavy on memory depending on the size of the result set. A safer alternative is to set
+outputType=MongoIterable. The next Processor then sees an iterable in the message body, allowing it to step through the results one by one. Setting
+a batch size and returning an iterable thus allows for efficient retrieval and processing of the results.
 
-You can also "stream" the documents returned from the server into your route by including outputType=DBCursor (Camel 2.21+) as an endpoint option
-which may prove simpler than setting the above headers. This hands your Exchange the DBCursor from the Mongo driver, just as if you were executing
-the aggregate() within the Mongo shell, allowing your route to iterate over the results. By default and without this option, this component will load
-the documents from the driver's cursor into a List and return this to your route - which may result in a large number of in-memory objects. Remember,
-with a DBCursor do not ask for the number of documents matched - see the MongoDB documentation site for details.
-
-Example with option outputType=MongoIterable and batch size:
-
-[source,java]
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
-List<Bson> aggregate = Arrays.asList(match(or(eq("scientist", "Darwin"), eq("scientist",
-        group("$scientist", sum("count", 1)));
-from("direct:aggregate")
-    .setHeader(MongoDbConstants.BATCH_SIZE).constant(10)
-    .setBody().constant(aggregate)
-    .to("mongodb3:myDb?database=science&collection=notableScientists&operation=aggregate&outputType=MongoIterable")
-    .to("mock:resultAggregate");
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
-
-Example with outputType=DBCursor and batch size showing how to iterate over the cursor's data:
+An example would look like:
 
 [source,java]
 ----------------------------------------------------------------------------------------------------------------------------------------------------------------------
@@ -694,6 +677,10 @@ from("direct:aggregate")
     .to("mock:resultAggregate");
 ----------------------------------------------------------------------------------------------------------------------------------------------------------------------
 
+Note that calling `.split(body())` is enough to send the entries down the route one by one; however, it would still load all of the entries into memory first.
+Calling `.streaming()` is therefore required to load the data into memory in batches.
+
+
 ===== getDbStats
 
 Equivalent of running the `db.stats()` command in the MongoDB shell,
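The batched, lazy iteration the patch documents (outputType=MongoIterable plus a batch size) can be sketched without Camel or the MongoDB driver. The `BatchedIterable` class below is hypothetical and stands in for the driver's `MongoIterable`: it hands out documents one by one while fetching them from the "server" in batches, so at most one batch is ever buffered in memory.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class StreamingSketch {
    // Hypothetical stand-in for the driver's MongoIterable: yields `total`
    // documents lazily, refilling its buffer `batchSize` documents at a time.
    static class BatchedIterable implements Iterable<String> {
        final int total, batchSize;
        int fetches = 0; // counts simulated round-trips to the server

        BatchedIterable(int total, int batchSize) {
            this.total = total;
            this.batchSize = batchSize;
        }

        public Iterator<String> iterator() {
            return new Iterator<String>() {
                int consumed = 0;                       // documents handed out so far
                final List<String> buffer = new ArrayList<>();

                public boolean hasNext() { return consumed < total; }

                public String next() {
                    if (buffer.isEmpty()) {             // refill one batch at a time
                        fetches++;
                        int end = Math.min(consumed + batchSize, total);
                        for (int i = consumed; i < end; i++) buffer.add("doc-" + i);
                    }
                    consumed++;
                    return buffer.remove(0);
                }
            };
        }
    }

    public static void main(String[] args) {
        BatchedIterable results = new BatchedIterable(25, 10);
        int seen = 0;
        // The next Processor steps through the results one by one; only one
        // batch of at most 10 documents is buffered at any moment.
        for (String doc : results) seen++;
        System.out.println(seen + " docs in " + results.fetches + " fetches");
    }
}
```

In a real route the batch size is set via the `MongoDbConstants.BATCH_SIZE` header, as in the example the patch keeps in the doc; the point of the sketch is only that iterating the body lazily, rather than collecting it into a List, bounds memory use by the batch size rather than by the result-set size.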