Sunday, August 28, 2011

Coherence Random Questions?

What will happen if the eviction policy dictates that we evict an entry from the backing map that is still in the write-behind queue (and therefore has not been flushed to the database)?

In this situation, the read-write backing map synchronously invokes the store operation on any entries about to be evicted. The implication is that the client thread performing the put operation is blocked while the evicted entries are flushed to the database. This is an unfortunate side effect for the client thread, whose operation will experience higher than expected latency, but it acts as a necessary throttle to avoid losing data. This edge case highlights the need to configure a worker thread pool even for caches that only perform write-behind, so that the flush does not occur on the service thread. Keep in mind that the store operation is not always performed by the write-behind thread. Note that the same situation can arise in caches that have expiry configured; it becomes less likely as the gap between the expiry time and the write-behind delay grows.
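As an illustration, a write-behind cache with a worker thread pool might be configured along these lines. This is a sketch assuming the classic Coherence cache-config schema; the scheme names, the sizing values, and the com.example.MyCacheStore class are hypothetical, so adapt them to your own deployment:

```xml
<distributed-scheme>
  <scheme-name>example-write-behind</scheme-name>
  <service-name>DistributedCache</service-name>
  <!-- Worker thread pool: keeps the synchronous flush of evicted
       entries off the service thread -->
  <thread-count>4</thread-count>
  <backing-map-scheme>
    <read-write-backing-map-scheme>
      <internal-cache-scheme>
        <local-scheme>
          <!-- Eviction/expiry settings that can trigger the
               synchronous store described above -->
          <high-units>10000</high-units>
          <expiry-delay>1h</expiry-delay>
        </local-scheme>
      </internal-cache-scheme>
      <cachestore-scheme>
        <class-scheme>
          <class-name>com.example.MyCacheStore</class-name>
        </class-scheme>
      </cachestore-scheme>
      <!-- Write-behind: entries are flushed asynchronously after
           this delay, unless evicted first -->
      <write-delay>10s</write-delay>
    </read-write-backing-map-scheme>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>
```

Keeping the expiry delay well above the write delay, as noted above, reduces the chance that an entry expires while still sitting in the write-behind queue.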

What are service threads and how is the PUT operation managed in the Coherence Grid?

Each clustered service in Coherence is represented by a service thread in every JVM participating in the cluster. This thread is responsible for communicating with other nodes and for providing the functionality exposed via the NamedCache API, along with system-level functionality such as life-cycle management, distribution, and fail-over. As a rule, all communication between service threads is asynchronous (non-blocking), allowing for minimal processing latency at this tier. Client functionality, on the other hand (for example, a NamedCache.put call), is often implemented synchronously (blocking) via the internal poll API. Naturally, the service thread itself is not allowed to use this poll API, since doing so could cause high latency at best and deadlock at worst.

When a listener is added to a local map that is used as primary storage for the partitioned cache service, the events that such a listener receives are delivered synchronously on that same service thread. Performing any potentially blocking operation during such event processing is therefore discouraged. A best practice is to queue the event and process it asynchronously on a different thread.
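The queue-and-process-asynchronously pattern can be sketched in plain java.util.concurrent, without the Coherence API. The event type (a simple String) and the "handled:" processing step are hypothetical stand-ins for a Coherence MapEvent and its real handler; the point is that the callback only enqueues, while a worker thread does the potentially blocking work:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal sketch: the listener callback only enqueues; a separate
// worker thread performs the (potentially blocking) processing.
public class AsyncEventSketch {

    static List<String> processAsync(List<String> events) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        List<String> processed = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(events.size());

        // Worker thread: drains the queue and does the slow work,
        // keeping the (simulated) service thread free.
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    String event = queue.take(); // blocks until an event arrives
                    processed.add("handled:" + event);
                    done.countDown();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.setDaemon(true);
        worker.start();

        // "Service thread": the callback just offers the event and returns,
        // never blocking on the actual processing.
        for (String event : events) {
            queue.offer(event);
        }

        // countDown/await also makes the worker's writes to 'processed'
        // visible to this thread.
        done.await();
        return processed;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(processAsync(List.of("insert", "update", "delete")));
        // prints [handled:insert, handled:update, handled:delete]
    }
}
```

A single worker draining a single queue also preserves event ordering, which is often important when events correspond to cache mutations.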
