Just a quick note: this combination does not work!

The JVM switch -XX:+UseStringCache doesn't play nicely with log4j MDC logging. According to the JVM docs, it "Enables caching of commonly allocated strings." But in practice it makes the MDC values simply vanish from the logs.
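For context, this is the kind of MDC usage that was affected. It's a minimal sketch, assuming a log4j pattern that prints the MDC value via %X{requestId}; the key name is illustrative, not taken from my original setup:

import org.apache.log4j.Logger;
import org.apache.log4j.MDC;

public class MdcExample {
    private static final Logger LOG = Logger.getLogger(MdcExample.class);

    public void handleRequest(String requestId) {
        // Store a value in the MDC; a layout such as "%d %p [%X{requestId}] %m%n"
        // should print it alongside every log statement from this thread.
        MDC.put("requestId", requestId);
        try {
            LOG.info("Processing request"); // with -XX:+UseStringCache, %X{requestId} came out empty
        } finally {
            MDC.remove("requestId");
        }
    }
}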

It also caused distortions in the log messages themselves. For example, the literal "nse=HTTP/1.1 200" started randomly popping up at the end of my log statements. Very strange. I assume it has to do with memory pointers or the like.

Bear in mind that HotSpot JVM -XX options are experimental by nature, so be careful when you play with them.

One of the very first lessons you learn straight after starting your journey with Apache ActiveMQ is what Producer Flow Control is (abbreviated as PFC hereafter). And you learn it the hard way!

Producer Flow Control is typically described as the "culprit" for seemingly frozen brokers and stuck queues and topics; your producer app is trying to dispatch messages to the broker but something exogenous and exotic is blocking it.

As soon as you learn about the existence of this mechanism, you scream out loud: "#*@!&, why would someone come up with such a silly functionality?". Read on to understand why PFC makes so much sense in a messaging system.

So, what really is Producer Flow Control?

Producer Flow Control is both: (1) a mechanism of defence for the broker and (2) a method to guarantee operational continuity in the face of unplanned messaging volumes. It slows down or temporarily halts fast producers in a non-intrusive manner while slow consumers are happily digesting messages at their own pace.

Without Producer Flow Control, you run risks such as a destination overflowing the JVM heap and blowing up with a java.lang.OutOfMemoryError, or a single destination hijacking broker resources and penalising all other destinations.

Having said that, there is another option instead of silently halting producers: sending an exception back to the producer. That method is far more intrusive and requires the client's source code to cater for this scenario. Mind you, ActiveMQ can be configured to take this approach too (see the sketch below).
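For the record, here's a minimal sketch of how that exception-based behaviour can be switched on in activemq.xml, assuming the sendFailIfNoSpace attribute of the systemUsage element:

<systemUsage>
  <!-- throw an exception back to the producer instead of blocking it
       when the usage limits are reached -->
  <systemUsage sendFailIfNoSpace="true">
    ...
  </systemUsage>
</systemUsage>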

Note that if Producer Flow Control is being triggered too often, it carries a hidden message: your actual messaging volume/rate requirements are underserved by the configuration/infrastructure you've put in place. You will need to optimise your configuration, scale up or scale out (vertical or horizontal scaling).

So now you see why PFC makes a lot of sense in a messaging system. Fair enough, another topic altogether is whether PFC should be enabled out of the box and whether the default 1 MB memory limit per destination is adequate. But that's a different story.

Point #1: Resources are not infinite

See, messaging systems are highly concurrent, volatile platforms whose sweet spot is the real-time exchange of messages. Real-time entails that message consumers are able to keep up with producers. The worst nightmare of a message broker is to have fast producers and slow consumers, but regrettably, this is a very common situation.

When this happens, the broker must use its resources to buffer up messages somewhere in between, right? But resources aren't infinite; they are limited by nature. So the broker only has so much to play with.

Point #2: YOU are in the driver's seat: define global boundaries

Apache ActiveMQ allows you to be in total control of resource usage. Messaging systems quickly become the backbone of an enterprise, and thus they need to be highly predictable platforms. That's why it's good practice to explicitly define the boundaries within which ActiveMQ runs, or else things will unravel in an unpredictable manner.

Apache ActiveMQ understands two levels of limits: global limits (<systemUsage /> config element) and per-destination memory limits. This post will tackle the former, while in the next one we'll talk about the latter.

Global limits. There are three limits that govern the entirety of the AMQ instance's operations. Once you set them, ActiveMQ will watch that they are honoured (a configuration sketch follows the list):

  • Max. memory usage => the amount of RAM/virtual memory that your broker is entitled to use. It ultimately translates into the quota of Java heap your broker can use to buffer up messages. (<memoryUsage /> config element)
  • Max. persistence store usage => as per the JMS spec, PERSISTENT messages must be kept in durable storage until they are fully delivered and acknowledged by all interested consumers, so that they survive broker restarts and possible crashes. This global limit defines the maximum size allocation for that store. (<storeUsage /> config element)
  • Max. temporary storage usage => the maximum amount of disk usage the broker may use to temporarily keep NON-PERSISTENT messages around while consumers are unavailable to process them. This store DOES NOT survive broker restarts or crashes. It is simply used as a buffer for that broker session. (<tempUsage /> config element)
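Here's a minimal sketch of how those three limits are declared in activemq.xml; the limit values are illustrative and should be sized for your own volumes:

<broker xmlns="http://activemq.apache.org/schema/core">
  <systemUsage>
    <systemUsage>
      <!-- JVM heap quota for buffering messages -->
      <memoryUsage>
        <memoryUsage limit="64 mb"/>
      </memoryUsage>
      <!-- maximum size of the persistent message store -->
      <storeUsage>
        <storeUsage limit="10 gb"/>
      </storeUsage>
      <!-- maximum disk space for spooling NON-PERSISTENT messages -->
      <tempUsage>
        <tempUsage limit="5 gb"/>
      </tempUsage>
    </systemUsage>
  </systemUsage>
  <!-- ... transport connectors, persistence adapter, etc. ... -->
</broker>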

Stay tuned for subsequent blog posts

In the next blog posts, I'll tell you more about how to define per-destination limits in ActiveMQ, how ActiveMQ behaves when these limits are being approached and/or breached (how Producer Flow Control kicks in), how the different message cursors affect the behaviour, and what happens when PFC is disabled.

The Java Message Service (JMS) specification defines that all messages exchanged between a client and a broker must be acknowledged sooner or later. The acknowledgement serves as a token to signal transfer of responsibility between two parties; acknowledgements exist in both producer => broker and broker => consumer interactions.

When a producer sends a message to a broker/provider, the provider must ACK the receipt after it has honoured the persistence requirement. Likewise, when the broker dispatches a message to a consumer, the consumer must ACK the receipt at the point it decides to assume full responsibility for the processing. Brokers reserve the right to redeliver messages to consumers if they are not ACK'ed in time.

The moment when a message is ACK'ed, and whether it's done manually or automatically, can greatly affect performance. It's easy for ACKs to become a bottleneck.

In this article, I'll shed light on the different ACK modes that exist in Apache ActiveMQ from the consumer's perspective, and I'll x-ray how the client library behaves internally when using the different ACK modes.

JMS Acknowledgement modes

First things first, some background information. The JMS specification defines four acknowledgement modes that all JMS providers must implement:

  • AUTO_ACKNOWLEDGE => the session automatically acknowledges a client’s receipt of a message when it has either successfully returned from a call to receive or the MessageListener it has called to process the message successfully returns (default ACK mode).
  • DUPS_OK_ACKNOWLEDGE => instructs the session to lazily acknowledge the delivery of messages.
  • CLIENT_ACKNOWLEDGE => client acknowledges a message by calling the message's acknowledge method. Acknowledging a consumed message automatically acknowledges the receipt of all messages that have been delivered by its session.
  • SESSION_TRANSACTED => a fictional value returned from Session.getAcknowledgeMode if the session is transacted. Cannot be set explicitly.

These values are defined as constants in the javax.jms.Session interface.
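As a quick refresher, this is how the standard modes are selected when creating a session. A minimal sketch; the broker URL and the use of ActiveMQConnectionFactory are just assumptions for the example:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;

public class AckModeExamples {
    public static void main(String[] args) throws Exception {
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = cf.createConnection();

        // ACKs automatically when receive() or the MessageListener returns successfully
        Session autoAck = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // allows the session to ack lazily, at the cost of possible duplicates
        Session dupsOk = connection.createSession(false, Session.DUPS_OK_ACKNOWLEDGE);

        // the client calls Message.acknowledge() explicitly
        Session clientAck = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);

        // transacted session: getAcknowledgeMode() reports SESSION_TRANSACTED
        Session transacted = connection.createSession(true, Session.SESSION_TRANSACTED);
    }
}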

ActiveMQ Enhancements

On top of those, Apache ActiveMQ defines two further acknowledgement behaviours:

  • ActiveMQSession.INDIVIDUAL_ACKNOWLEDGE => a special version of the CLIENT_ACKNOWLEDGE mode which allows you to cherry-pick the messages to ACK.
    • Message.acknowledge() only acks that individual message, as opposed to acting as a cumulative ack.
  • Optimized Acknowledgements => batches up the transmission of the ACK to the broker until 65% of the prefetch buffer has been processed or until a timeout elapses, whichever happens first (see the sketch after this list).
    • Enable this feature in the client's Connection Factory URI, optimizeAcknowledge option.
    • Also configure the timeout period in the Connection Factory URI, optimizeAcknowledgeTimeOut option (in milliseconds).
    • Only makes sense to activate alongside an automatic ACK mode, i.e. AUTO_ACKNOWLEDGE or DUPS_OK_ACKNOWLEDGE.
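A minimal sketch of both enhancements in client code; the broker address and the 2-second timeout are illustrative values:

import javax.jms.Connection;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.ActiveMQSession;

public class ActiveMQAckEnhancements {
    public static void main(String[] args) throws Exception {
        // Optimized Acknowledgements enabled via the Connection Factory URI
        ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory(
                "tcp://localhost:61616?jms.optimizeAcknowledge=true"
                + "&jms.optimizeAcknowledgeTimeOut=2000");
        Connection connection = cf.createConnection();

        // INDIVIDUAL_ACKNOWLEDGE: each Message.acknowledge() acks only that message
        Session individualAck = connection.createSession(false,
                ActiveMQSession.INDIVIDUAL_ACKNOWLEDGE);
    }
}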

The rules of the game

Given so many options and descriptions, how does it all work in real life? These are the rules of the game in release 5.6.0 of Apache ActiveMQ:

  • For QUEUES, there is no difference between AUTO_ACKNOWLEDGE and DUPS_OK_ACKNOWLEDGE.
    • Even though the JMS spec allows for lazy acks with DUPS_OK_ACKNOWLEDGE, the AMQ client library doesn't leverage this possibility and acks each message one by one in both modes, unless Optimized Acknowledgements are enabled.
  • For TOPICS:
    • DUPS_OK_ACKNOWLEDGE batches up acknowledgements until 50% worth of the prefetch buffer has been processed, at which point one cumulative ack is sent back.
    • Conversely, AUTO_ACKNOWLEDGE acknowledges messages individually.
  • When you enable Optimized Acknowledgements, the enhancement only really comes into play in one of two cases:
    • if AUTO_ACKNOWLEDGE is enabled, regardless of whether the destination is a queue or a topic.
    • if DUPS_OK_ACKNOWLEDGE is enabled, only if the destination is a queue.
    In all other cases, the flag is ignored.
  • When Optimized Acknowledgements are in effect, the client will send back cumulative acks at whichever event happens first:
    • 65% of the prefetch buffer has been processed
    • the optimized acknowledgements timeout period has elapsed.
  • The "DUPS_OK_ACKNOWLEDGE with topics" scenario sees no alteration with Optimized Acknowledgements, and the watermark continues to be 50%.
  • With INDIVIDUAL_ACKNOWLEDGE, each message you ACK is ack'ed to the broker specifically, in a cherry-picked manner.
  • Conversely, with CLIENT_ACKNOWLEDGE, whenever you ACK a message the client library sends back a cumulative ACK for that message AND all previously delivered but unack'ed messages, inside one single command (see the sketch below).
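To illustrate that last point, here's a minimal sketch of the cumulative behaviour; the queue name is an assumption, and the session is one created with CLIENT_ACKNOWLEDGE:

import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

public class ClientAckSketch {
    void consume(Session clientAckSession) throws Exception {
        Queue queue = clientAckSession.createQueue("example.queue");
        MessageConsumer consumer = clientAckSession.createConsumer(queue);

        Message m1 = consumer.receive();
        Message m2 = consumer.receive();
        Message m3 = consumer.receive();

        // With CLIENT_ACKNOWLEDGE this single call sends one cumulative ACK covering
        // m1, m2 and m3; with INDIVIDUAL_ACKNOWLEDGE it would ack m3 only.
        m3.acknowledge();
    }
}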

I hope this post was useful to dissipate much of the fog around how acknowledgements work. As usual, if you have further questions, feel free to comment below or to contact me on Twitter: @raulvk.

I just posted this note on my social networks. My intention was to raise awareness amongst my non-techie friends about the milestone that we're living today.

I'm amazed by how the big players have come together and agreed on a common date in order to activate a technology poised to become a key enabler of further technical marvels. So here it goes. I hope you enjoy it and share this post with your non-techie friends too.

Today is June 6th 2012.

We are witnessing a MASSIVE day for humanity and technology. You may not realise the relevance of it, but today IPv6 is going live in The World. IPv6 stands for Internet Protocol Version 6.

A range of powerful, authoritative Internet companies are flicking the gigantic, virtual switch from IPv4 to IPv6 today.

For non-technical readers: Don't panic, YOU certainly won't even notice a thing. If YOU do, it means that WE - Computer Scientists - have done something awfully wrong ;)

IPv6 is a synonym for "a new, larger version of the Internet". IPv6 means being more connected than ever. Even more than we are today. Can you just imagine?

This launch will permeate every single device around you, be it your smartphone, your fridge, your TV, your watch. I'm not even considering laptops and desktops. Thanks to this step, they will ultimately be able to interact with anything else out there, as truly unique, independent and autonomous entities.

It means that each single gadget/device will suddenly become addressable and accessible. Indeed, a big limit to continue evolving technical marvels is literally vanishing today. The FUTURE begins today.

Having said this, believe it or not, the real beauty of today lies in what happened in THE PAST. Between 1973 and 1983, the Internet was being architected. It originally sprung off as an "experiment", and no one ever imagined we'd get this far.

In that experiment, only 4.3 billion terminations (addresses) were allowed on the Internet. That number proved insufficient after a few years.

Transitioning from IPv4 to IPv6 allows for 360,000,000,000,000,000,000,000,000,000,000,000,000 devices to connect to the Internet. Finally, a little more elbow room to grow.

That's why, on a day like today, Computer Scientists like me rejoice and revel in reflecting, in hindsight, about how far we've reached with our inventions and creativeness. We imagine the impossible, and then we go and build it.

So, what's the next step? It's just a matter of time... Just wait a minute until WE - Computer Scientists - get around to exceeding that limit again. It won't be long ;)

Here's a video from Vint Cerf, one of the Fathers of the Internet, explaining the entire story. Please take a minute to watch it, it's really worth it.



Thanks for supporting our mission to change the way you live, communicate, get entertained, relax, procrastinate, read the news, share your memories, get emotional, joke.

Yours faithfully,
The Computer Scientists of The World.

Some time ago I submitted an Apache Camel component to interact with MongoDB databases. It was quickly committed to trunk, and I'm glad to announce that it will officially see the light of day with the Camel 2.10 release, which is just around the corner! So I thought now is a good time to advertise to the world the virtues and powers of this component.

Data explosion!

Data explosion. A term that refers to the unstoppable growth of data in the virtual world on a per-millisecond basis.

Whether it's published by humans or by objects (think the Internet of Things), it doesn't matter, it's still data that can be turned into information to gain further intelligence and insight. According to IDC's Digital Universe study, published just one year ago:

In 2011 alone, 1.8 zettabytes (or 1.8 trillion gigabytes) of data will be created, the equivalent to every U.S. citizen writing 3 tweets per minute for 26,976 years. And over the next decade, the number of servers managing the world's data stores will grow by ten times.

Mobile devices, smartphones, tablets, etc. are highly responsible for this data tornado. Before, we had to wait to get home to read the online paper, a blog or our emails. Now we do the exact same thing from literally anywhere. We're immersed in a culture of "I want it, and I want it now". Thousands of new apps are launched every day, each of them producing hordes of data. It's intense.

To support these new orders of magnitude, technology is evolving at a rapid rate under the terms of Big Data, Elastic Cloud, Virtualisation, Platform As A Service, and even Green Computing to make this whole new level of infrastructure sustainable.

At the Apache Camel project, there's a lot of interest and uptake of Big Data and Cloud trends. Folks have committed an array of components to enable the intimate core/heart of your organisation, your Enterprise Service Bus, to interact directly with these technologies.

The MongoDB Camel component is one of them. So let's talk about what it offers YOU.

A MongoDB component for Camel

The technical name of the beast is camel-mongodb, and if you use Apache ServiceMix or Apache Karaf, you can simply install it as a feature, which will drag along the MongoDB Java driver (which is also ASL-licensed). It's designed from the ground up to be simple, lightweight and convenient.

It's capable of acting both as a producer and as a consumer endpoint. As a producer, it can invoke a number of operations on a MongoDB collection or database. As a consumer, it can "inhale" data from a collection into a Camel route, in a time-ordered fashion, with zero creation of beans and custom processors!

Moreover, bundled with the component are several type converters that plug into the Camel routing engine to automatically convert the payload to Mongo's DBObjects where necessary. So this little component does a lot of magic, for the sake of your sanity ;)

Additionally, its quality is guaranteed by more than 25 unit tests which execute with the Maven build iff (if and only if) you point the relevant properties file to a running MongoDB instance, either local or remote.

The official camel-mongodb documentation is already quite clear and detailed, so I won't bore you with technical details in this post. Instead, we'll take an eagle-eye view on the functionalities this component offers, both as a producer and as a consumer.

As a producer

The producer endpoint supports quite a few Mongo operations:

  • Query operations: findById, findOneByQuery, findAll, count
  • Write operations: insert, save, update
  • Delete operations: remove
  • Other operations: getDbStats, getColStats (to automate monitoring via Camel)

In total, 10 operations in its first version! All CRUD operations are covered, and even augmented with several variants when it comes to the Query side of things. All these operations map to MongoDB operations, so refer to their manual for any doubts.

So, how do you specify the operation a producer endpoint executes? You have two approaches:

  • statically, by specifying the operation name as an option on the endpoint URI
  • dynamically, by setting the CamelMongoDbOperation header in the IN message

So in essence, you can have a multi-functional endpoint, or an endpoint that primarily deletes documents, but can also "moonlight" as a document inserter under specific circumstances (e.g. determined by a Content-Based Router EIP, Filter EIP, etc.). Useful, huh?
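To make that concrete, here's a minimal sketch of both styles in the Java DSL; the bean id (myDb), database/collection names and endpoint URIs are illustrative assumptions:

import org.apache.camel.builder.RouteBuilder;

public class MongoDbProducerRoutes extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // static operation: this endpoint always inserts
        from("direct:insertOrder")
            .to("mongodb:myDb?database=shop&collection=orders&operation=insert");

        // dynamic operation: the CamelMongoDbOperation header overrides the
        // operation configured on the URI for this particular Exchange
        from("direct:removeOrder")
            .setHeader("CamelMongoDbOperation", constant("remove"))
            .to("mongodb:myDb?database=shop&collection=orders&operation=insert");
    }
}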

By the way, support for group and mapReduce queries is slated for future versions.

As a consumer: tailable cursor consumer endpoint

This is the feature I enjoyed coding the most ;) It allows you to pump data from a MongoDB collection into a Camel route, in real-time, just as documents are being appended to the collection.

In short, a camel-mongodb consumer is able to bind to a capped collection so that the MongoDB server keeps pushing documents into the Camel route as they are inserted. For more information, refer to Tailable cursors.

Each record in the MongoDB collection gets pushed to the Camel route as an individual Exchange.

Persistent tail tracking is also a great feature of this component. It allows you to ensure that the consumer will pick up exactly where it left off after it comes back to life from a shutdown. To use this feature, you just need to specify an increasing correlation key, which can be a timestamp or any other MongoDB data type that supports comparisons (String, Dates, ObjectId, etc.).

But alas, when working with tailable cursors, MongoDB reserves the right to kill the cursor if data hasn't been available for a while, thus preventing it from wasting server resources. The camel-mongodb component is aware of this behaviour and regenerates the cursor automatically. You can configure a delay via the cursorRegenerationDelay option.
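Here's a minimal sketch of a tailable cursor consumer with persistent tail tracking enabled; the bean id, database/collection names and the increasing field are assumptions, and the option names should be double-checked against the official documentation:

import org.apache.camel.builder.RouteBuilder;

public class MongoDbTailableConsumerRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // consumes from a capped collection as documents are appended to it
        from("mongodb:myDb?database=flights&collection=cancellations"
                + "&tailTrackIncreasingField=departureTime"
                + "&persistentTailTracking=true"
                + "&cursorRegenerationDelay=1000")
            .to("log:tailed");
    }
}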

Other remarkable features

MANY, many other features exist. Here are just a few:

  • Paging support via skip() and limit(). Values specified in message headers.
  • Supports upserts (atomic insert/update) and multiUpdates in update operations.
  • Query operations support field filtering (to only fetch specific fields from matching documents) and sorting.
  • Simple and extensible endpoint configuration, revolving around a com.mongodb.Mongo instance that you create in your Registry.
  • Database and collection to bind to are configurable as endpoint options, but can also be decided dynamically for each Exchange processed (via Message Headers). To maximise throughput, this dynamic resolution is off by default; if you do want it, you need to explicitly set dynamicity=true on the endpoint to advise the component to compute the DB/Collection for each incoming Exchange.
  • Can reuse same Mongo instance for as many endpoints as you wish.
  • WriteConcern can be set at the endpoint level or at the Exchange level, using a standard one (see the constant fields in MongoDB's WriteConcern Javadoc) or creating a custom one in your Registry (see the sketch after this list).
  • Quickly instruct producer endpoints to call getLastError() after each write operation without setting a custom WriteConcern by using option invokeGetLastError=true.
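As a taster of the last two points, here's a minimal sketch of a producer endpoint with an endpoint-level WriteConcern; the bean id, names and values are illustrative:

import org.apache.camel.builder.RouteBuilder;

public class MongoDbWriteConcernRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // save with a standard WriteConcern and check getLastError() after each write
        from("direct:saveAudit")
            .to("mongodb:myDb?database=shop&collection=audit&operation=save"
                + "&writeConcern=SAFE&invokeGetLastError=true");
    }
}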

How to go about using camel-mongodb in my Camel routes?

As I mentioned earlier, the official camel-mongodb documentation is very clear and verbose. Detailed enough to be a great starting point.

Additionally, you can also check out the unit tests. There are more than 25, and they illustrate most usage aspects of the component, both as a producer and as a consumer.

If you'd like me to write a post with concrete examples on how to use this component, please provide feedback in the comments and share this post on your social networks ;)

How many times have you found yourself sitting in a meeting with colleagues, partners, clients, etc., sketching out the details of a new idea, or discussing solutions to tackle some problem, when suddenly you are overcome by the feeling that nothing productive will come out of the encounter?

That very intuition leads you to lose interest, disconnect and drift away. Your mind switches into indifference mode. You park the fact that the meeting could be critical for the continuity of the project, you resign yourself, and you tell yourself that at least "you tried".

Another situation: creative, competent, intelligent people, specialists in their respective fields, etc., from whose cooperation and synergy magic should emerge, and yet, against all logic, the flow of ideas seems to reach a dead end.

What is going on? Why do we sometimes find it so hard to think productively?

It will not surprise you to learn that our brain is designed to memorise patterns and apply them as soon as it recognises certain situations. Throughout your life, your brain has grown accustomed to thinking patterns that in some situations can produce wonderful results, but in others can have no effect at all, above all depending on which other "brains" you are interacting with.

This is the inertia of thought: that box we are sometimes trapped in, and which constrains us when it comes to making a group gel. Your attitude, personality and previous experiences cause your mind to lock itself into a particular thought process.

To break out of this process, sometimes all you need is, quite simply, a method/technique to align all the thinking heads and make them fit together, like a puzzle, into a common train of thought.

Our tendencies – thought fixation

We all tend to frame our feelings according to certain emotional or rational predispositions: some people are more optimistic and see opportunities everywhere (sometimes unrealistic ones), others are more negative, emotional, mathematical, catastrophist, etc. What kind of thinker are you?

Our reasoning pattern is both cause and consequence of our personality. I call it "thought fixation", and it shapes how we perform in meetings, forums, debates, brainstorming sessions, etc. It is precisely because of this fixation that some teams work from the very start, with that wonderful feeling of clicking from minute zero, whereas others need more structure and a clear, orderly, guided process of thinking together.

The key question is: how do we free each person from their ego and inertia, unlocking their potential and leading them to think in productive directions? How do we get each person to contribute their maximum value and experience, instead of getting bogged down in discussions and trains of thought that lead nowhere?

In this series of posts, I will talk about a technique of proven effectiveness that Edward de Bono introduced in 1985 in his book Six Thinking Hats. Edward de Bono is a Maltese physician who has focused on the psychology of thinking, and who has coined terms such as lateral thinking and operacy. He has published more than 40 works, is an acclaimed author, and his techniques enjoy well-established prestige.

Six colours, six perspectives: the six hats

In this technique, the hat represents an object that immediately metamorphoses our way of thinking, moulding the mind to lead us to think with certain attitudes and predispositions, for example: objectively, emotionally, optimistically, pessimistically, etc.

Each hat has a colour that fits perfectly with its philosophy and attitude. The colours are: red, green, yellow, white, black and blue.

Many practitioners decide to buy real hats so as to anchor the act in a physical object and thus achieve greater effectiveness. Others design coloured cards, tokens, pens, etc. In any case, the idea is to have coloured physical objects that can be swapped quickly, since the very colour of the hat helps evoke the corresponding mindset through mental association.

In the next post of this series, we will discover the meaning of each coloured hat, and in subsequent posts we will present different techniques for applying this concept effectively in our day-to-day lives, both at the group level and at the individual level (to spark our creativity).


Most Enterprise Architects will want to apply common cross-cutting concerns to the Services in their corporation. Imagine logging, security, reliability, QoS policies, etc. [By the way, let's drop the word "Web" from Web Services when we speak about Apache CXF, as it's capable of doing so much more than just plain HTTP].

With most frameworks you'd typically end up creating lots of boilerplate code and writing heavy guidelines so that developers know which rules to adhere to. The results are hefty deployment artefacts, a loss in developer productivity, and a bunch of problems whenever you need to change the common logic (because you may have to replicate that change in several spots in the code!).

If you are using Apache CXF (or the enterprise version of Apache CXF) and OSGi, there's very good news for you! You have a powerful combination in your hands to apply magic behind the curtains on-the-fly every time a service is provisioned and de-provisioned in your OSGi environment!

In this post I will show you how to uncover this magic.

The use case


To illustrate the example, I'll take the Logging Feature from Apache CXF and establish a policy in my OSGi container so that the feature is automatically applied to all CXF Buses that get registered. Some background concepts:

  • CXF Feature: A feature in CXF is simply a grouping of CXF Interceptors. The LoggingFeature adds the LoggingInInterceptor and LoggingOutInterceptor to the interceptor chains in the bus.
  • CXF Bus: according to the CXF docs: "The Bus is the backbone of CXF architecture. It manages extensions and acts as an interceptor provider. The interceptors for the bus will be added to the respective inbound and outbound message and fault interceptor chains for all client and server endpoints created on the bus (in its context)."
  • CXF Bus Registration: Every time a CXF bus is registered, by default CXF exports it as an OSGi Service to the OSGi Service Registry.
  • OSGi Service: an object that a bundle makes available to any other bundle in the OSGi container.

The solution


The solution is very simple and requires no advanced knowledge about OSGi nor digging into internals at all. All we need to do is create a new OSGi bundle that "listens" to new service registrations matching the org.apache.cxf.Bus interface.
With this solution, whenever a new CXF Bus comes to life via Apache CXF in the OSGi container, it will immediately get enriched to log all incoming requests and outgoing responses.
The "enricher" will be a standard Java class with a single method that takes a CXF Bus as a parameter. As you can see, this class is completely ignorant of OSGi:

package org.fusesource.examples;

import org.apache.cxf.Bus;
import org.apache.cxf.feature.LoggingFeature;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CXFBusListener {
    private final static Logger LOG = LoggerFactory.getLogger(CXFBusListener.class);

    public void busRegistered(Bus bus) {
        LOG.info("Adding LoggingFeature interceptor on bus: " + bus);
        LoggingFeature lf = new LoggingFeature();
        // initialise the feature on the bus, which will add the interceptors
        lf.initialize(bus);
        LOG.info("Successfully added LoggingFeature interceptor on bus: " + bus);
    }
}

Now we'll ask OSGi to call us whenever a new service with interface org.apache.cxf.Bus appears in the container. I'll use OSGi Blueprint for my IoC needs, so I create a file in OSGI-INF/blueprint with the following definition:

<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0" 
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
 xsi:schemaLocation="http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd">

 <reference-list id="busListener" interface="org.apache.cxf.Bus" availability="optional">
  <reference-listener bind-method="busRegistered">
   <bean class="org.fusesource.examples.CXFBusListener" />
  </reference-listener>
 </reference-list>

</blueprint>

And we're good to go!

Now install the bundle (make sure that the start level is lower than the CXF Services you will register later) and watch the log file for the log statements, as soon as you register new services (or if you have any existing ones already):

        14:53:31,393 | INFO  | ExtenderThread-6 | CXFBusListener | 167 - cxf-transparent-interceptors - 0.0.1.SNAPSHOT | Adding LoggingFeature interceptor on bus: org.apache.cxf.bus.spring.SpringBus@2bf35e0a
        14:53:31,397 | INFO  | ExtenderThread-6 | CXFBusListener | 167 - cxf-transparent-interceptors - 0.0.1.SNAPSHOT | Successfully added LoggingFeature interceptor on bus: org.apache.cxf.bus.spring.SpringBus@2bf35e0a
When you send a request to the Service, you will see the logging interceptors being "magically" applied even though you did not define them explicitly in the service:
--------------------------------------
14:56:59,732 | INFO  | tp1597033892-218 | LoggingInInterceptor             |  -  -  | Inbound Message
----------------------------
ID: 2
Address: http://localhost:8181/cxf/HelloWorld
Encoding: ISO-8859-1
Http-Method: POST
Content-Type: text/xml
Headers: {Accept=[*/*], Content-Length=[236], content-type=[text/xml], Host=[localhost:8181], User-Agent=[curl/7.21.4 (universal-apple-darwin11.0) libcurl/7.21.4 OpenSSL/0.9.8r zlib/1.2.5]}
Payload: <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:cxf="http://cxf.examples.servicemix.apache.org/"><soapenv:Header/><soapenv:Body><cxf:sayHi><arg0>Raul</arg0></cxf:sayHi></soapenv:Body></soapenv:Envelope>
--------------------------------------
14:56:59,733 | INFO  | tp1597033892-218 | LoggingOutInterceptor            |  -  -  | Outbound Message
---------------------------
ID: 2
Encoding: ISO-8859-1
Content-Type: text/xml
Headers: {}
Payload: <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"><soap:Body><ns2:sayHiResponse xmlns:ns2="http://cxf.examples.servicemix.apache.org/"><return>Hello Raul</return></ns2:sayHiResponse></soap:Body></soap:Envelope>
--------------------------------------
        

And voilà!

Source code available here: https://github.com/raulk/cxf-transparent-interceptors

[NOTE - As Dan Kulp indicated: "If using CXF 2.6.x, it can be even easier. By default with CXF 2.6.x, when a Bus is created by OSGi, it looks in the Service registry for any service that implements org.apache.cxf.feature.Feature and will automatically apply those to the bus. Thus, you don't need the CXFBusListener at all. Just register your feature as a service."]
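For completeness, here's a minimal Blueprint sketch of what Dan describes, registering the feature as an OSGi service under the org.apache.cxf.feature.Feature interface (based on his comment, not something I've re-tested myself):

<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
  <service interface="org.apache.cxf.feature.Feature">
    <bean class="org.apache.cxf.feature.LoggingFeature" />
  </service>
</blueprint>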
