Deadletter and Aggregation again

Deadletter and Aggregation again

Thomas Thiele
Hi,

To be able to save the original message to the dead letter directory even across multiple nested splits, I store it in a property:

.process(exchange -> {
          final Message orgMessage = exchange.getUnitOfWork().getOriginalInMessage();
          exchange.setProperty(Constants.PROPERTY_ORIGNAL_MESSAGE, orgMessage);
})

I then read it back later in the error handling.
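For reference, a minimal sketch of what reading that property back in the error handling could look like (the dead letter endpoint, the route class, and the use of onException are my assumptions; only Constants.PROPERTY_ORIGNAL_MESSAGE comes from the snippet above):

import org.apache.camel.Message;
import org.apache.camel.builder.RouteBuilder;

public class DeadLetterSketch extends RouteBuilder {
    @Override
    public void configure() {
        onException(Exception.class)
            .handled(true)
            .process(exchange -> {
                // restore the original message stored before the nested splits;
                // Constants.PROPERTY_ORIGNAL_MESSAGE is the poster's own constant
                Message org = exchange.getProperty(Constants.PROPERTY_ORIGNAL_MESSAGE, Message.class);
                if (org != null) {
                    exchange.getIn().setBody(org.getBody());
                    exchange.getIn().setHeaders(org.getHeaders());
                }
            })
            .to("file:data/deadletter");   // hypothetical dead letter directory
    }
}
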
This is the code for aggregation:

from(EP_AGGREGATION_SPLIT).routeId(SPLIT_AGGREGATION_ROUTE_ID)
        .throwException(IllegalArgumentException.class, "DEBUG EX1")
        .aggregate(header(Constants.PROPERTY_CASE_ID), new ZipAggregationStrategy())
        .completionTimeout(10 * 1000 * 1)
        .completion(header(ZipAggregationStrategy.AGG_PROPERTY_COMPLETED).isEqualTo(true))
        .throwException(IllegalArgumentException.class, "DEBUG EX2")

When I throw an exception before the aggregate (EX1), it works.
But after it (EX2), it does not (of course, the first throwException statement is commented out then).
The problem is that the original file has already been deleted/moved into recovery at that point.
Why?
How can I prevent this?

Why is it not easily possible to store the original input in a dead letter directory or queue, no matter how many nested splits and aggregations there are, whenever an exception is thrown somewhere?
Just simple and easy.

Regards Thomas

Re: Deadletter and Aggregation again

Claus Ibsen-2
Hi

The aggregator is a two-phased EIP, so what comes out of the aggregator
is not tied to the input. That is by design.
If you want a fork/join kind of pattern (composed message processor is
the EIP name), then you can do that with the splitter only, which has an
aggregation strategy built in.
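For reference, the composed message processor shape looks roughly like this (a sketch only; the endpoints and the naive join strategy are made up, not the poster's route):

import org.apache.camel.builder.RouteBuilder;

public class ForkJoinSketch extends RouteBuilder {
    @Override
    public void configure() {
        // fork/join: the splitter carries its own AggregationStrategy, so the
        // joined result stays tied to the original input exchange and the file
        // consumer's completion still sees any exception
        from("file:data/inbox")                       // hypothetical endpoint
            .split(body().tokenize("\n"), (oldEx, newEx) -> {
                if (oldEx == null) {
                    return newEx;
                }
                // naive join: concatenate the split results
                String merged = oldEx.getIn().getBody(String.class)
                        + "\n" + newEx.getIn().getBody(String.class);
                oldEx.getIn().setBody(merged);
                return oldEx;
            })
                .to("direct:processPart")             // hypothetical per-part processing
            .end()
            .to("file:data/outbox");                  // hypothetical endpoint
    }
}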

--
Claus Ibsen
-----------------
http://davsclaus.com @davsclaus
Camel in Action 2: https://www.manning.com/ibsen2

Re: Deadletter and Aggregation again

Thomas Thiele
>The aggregator is a 2 phased EIP so what comes out of the aggregator is not tied to the input. That is by design.

Is there a way to prevent or control the transaction in which the original input (file) is deleted/moved?



Re: Deadletter and Aggregation again

Claus Ibsen-2
Hi

You can set the file consumer to noop=true, and then delete/move the file
yourself after the aggregator in a custom bean/processor.
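In route form that suggestion could look something like this (a sketch; the endpoints, the move-to-done logic, and the assumption that the CamelFileAbsolutePath header survives the aggregation are mine):

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

import org.apache.camel.builder.RouteBuilder;

public class NoopFileSketch extends RouteBuilder {
    @Override
    public void configure() {
        // noop=true: the file consumer neither deletes nor moves the input,
        // it only remembers it as consumed (idempotent repository)
        from("file:data/inbox?noop=true")              // hypothetical endpoint
            .to("direct:splitAndAggregate");           // the existing split/aggregate pipeline

        // after the aggregator, clean up the original file explicitly
        from("direct:aggregated")                      // hypothetical: where the aggregated exchange ends up
            .process(exchange -> {
                // assumes the aggregation strategy propagates CamelFileAbsolutePath
                String path = exchange.getIn().getHeader("CamelFileAbsolutePath", String.class);
                if (path != null) {
                    Path source = Paths.get(path);
                    Path target = source.resolveSibling("done").resolve(source.getFileName());
                    Files.createDirectories(target.getParent());
                    Files.move(source, target, StandardCopyOption.REPLACE_EXISTING);
                }
            });
    }
}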

--
Claus Ibsen
-----------------
http://davsclaus.com @davsclaus
Camel in Action 2: https://www.manning.com/ibsen2

Re: Deadletter and Aggregation again

Thomas Thiele
That is the hard way. And I am reluctant to fall back on plain programming inside a framework.

Regarding the split:
the problem is that it is not a simple split.

1. The ZIP is split into XML, TIFF0, TIFF1, ...
2. The XML is split into DATA0, DATA1, ...

Then TIFF0 and DATA0, (TIFF1, DATA1), ... are combined. So the aggregation is not (directly) bound to a single splitter.
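Just to illustrate why that rules out the splitter's built-in strategy, a rough sketch (all endpoint names and the pairIndex correlation header are hypothetical):

import org.apache.camel.builder.RouteBuilder;

public class PairingSketch extends RouteBuilder {
    @Override
    public void configure() {
        // TIFF parts come out of the first split, DATA parts out of the second,
        // nested split; both carry a hypothetical "pairIndex" correlation header
        from("direct:tiffParts").to("direct:collectPairs");
        from("direct:dataParts").to("direct:collectPairs");

        // the join is fed from two different splitters, so it has to be a
        // standalone aggregator and cannot be either splitter's own strategy,
        // which is exactly what decouples it from the original input file
        from("direct:collectPairs")
            .aggregate(header("pairIndex"), (oldEx, newEx) -> {
                if (oldEx == null) {
                    return newEx;
                }
                // hypothetical pairing: carry the second part along as a property
                oldEx.setProperty("secondPart", newEx.getIn().getBody());
                return oldEx;
            })
                .completionSize(2)
            .to("direct:handlePair");
    }
}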
