Basic Blocking Lookup Using the Service Discovery Manager
Version 1.00, May 14, 2004
The previous example, BasicLookup, can be
made more efficient by using the service discovery manager's blocking
lookup functionality. BasicLookup executes an explicit
delay just before performing the lookup query. In Example 1 this delay
was necessary to give the lookup discovery process time to complete.
The problem with this approach is that the program always waits out the
entire delay, even if a service of interest becomes available early in
the delay period.
Two approaches can help mitigate this waiting period: polling and events. The first is to periodically poll the service discovery manager by wrapping the lookup query in a loop (attempt-based, time-based, or a combination of both). Polling can find a service of interest sooner, but at the expense of additional local processing cycles. The second approach is to use the service discovery manager's blocking semantics, which take advantage of the lookup service's event mechanism to provide notification when services of interest become available. This approach, which is used in this example, avoids polling cycles but introduces some additional client-side responsibilities.
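For comparison, the polling approach might be sketched as follows. This sketch is not part of the example source: the ServiceDiscoveryManager and ServiceTemplate setup mirrors Example 1, the class name, attempt count, and sleep interval are illustrative choices, and a suitable security policy is assumed to be supplied on the command line as in the example run scripts.

    import net.jini.core.lookup.ServiceItem;
    import net.jini.core.lookup.ServiceTemplate;
    import net.jini.core.transaction.server.TransactionManager;
    import net.jini.lookup.ServiceDiscoveryManager;

    // Illustrative polling loop around the non-blocking lookup.
    public class PollingLookupSketch {
        public static void main(String[] args) throws Exception {
            // Default discovery and lease renewal managers, as in Example 1.
            ServiceDiscoveryManager sdm = new ServiceDiscoveryManager(null, null);
            ServiceTemplate template = new ServiceTemplate(
                null, new Class[] { TransactionManager.class }, null);

            // Re-issue the non-blocking lookup until a match appears or the
            // attempt budget is exhausted.
            ServiceItem item = null;
            for (int attempt = 0; attempt < 10 && item == null; attempt++) {
                item = sdm.lookup(template, null);
                if (item == null) {
                    Thread.sleep(3000);   // back off before polling again
                }
            }
            System.out.println(item == null
                ? "No TransactionManager found"
                : "TransactionManager found, id=" + item.serviceID);
            sdm.terminate();
        }
    }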
Example 2 demonstrates the service discovery manager's blocking
lookup functionality. Please review the code in the file
BasicBlockingLookup.java. A detailed explanation of that
code follows.
The source code for this
example is essentially the same as that shown in Example 1. The only
differences occur in Section 3 (see the bold-faced code) where the
overloaded lookup method that takes a
time-out parameter is used.
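A minimal sketch of that call follows. Here sdm and template are assumed to be the ServiceDiscoveryManager and ServiceTemplate created earlier in the example, the 30-second time-out is an illustrative value rather than the one used in the example source, and the enclosing method is assumed to declare throws Exception:

    // Blocking lookup: wait up to 30 seconds for a matching service instead
    // of sleeping for a fixed delay and then querying.
    long waitDur = 30 * 1000;   // milliseconds
    ServiceItem item = sdm.lookup(template, null, waitDur);
    if (item != null) {
        System.out.println("TransactionManager found, id=" + item.serviceID);
    } else {
        System.out.println("No matching service appeared before the time-out");
    }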
The implementation differences for this type of lookup are much
greater than the source code differences would initially imply,
though. First, this blocking version of lookup
effectively calls the non-blocking version of lookup
(shown previously) to see if a matching service is currently
available. If one is found, the method returns the matching
ServiceItem without blocking, just like the previous
example. On the other hand, if a matching service is not found, the
service discovery manager employs the lookup service's event
notification mechanism. That is, the service discovery manager
attempts to register a RemoteEventListener object with
all the lookup services in its managed set and then waits for a
"service-match" event to occur before the time-out period expires. In
addition, any lookup service discovered before the time-out expires is
also queried for a matching service and, if no match is currently
available, the service discovery manager registers its listener with
that lookup service as well. If a service-match event arrives before
the time-out expires, the associated ServiceItem is returned;
otherwise, null is returned.
Note that using lookup service event notifications requires that at
least one RemoteEventListener object be exported to receive them. This
also means it is the client's
responsibility to ensure that a lookup service can access the
necessary code for this exported RemoteEventListener
object. Typically, this is done by setting the
java.rmi.server.codebase property on the lookup service
client application (the service discovery manager in this case) to a
URL that references a JAR file containing the necessary class files.
(There are other approaches, but this is the one taken for this
example.) The JAR file sdm-dl.jar was created for this
purpose. This JAR may be found in the starter kit's lib
directory.
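The example run scripts set this property on the java command line, as the sample output below shows. If it were set programmatically instead, a sketch might look like the following; the host name, port, and path are placeholders for wherever sdm-dl.jar is actually served from:

    // Normally supplied as -Djava.rmi.server.codebase=... by the run scripts;
    // if set in code, it must happen before any remote objects are exported.
    // The URL is a placeholder for an HTTP server that serves sdm-dl.jar.
    System.setProperty("java.rmi.server.codebase",
                       "http://yourhost:8081/sdm-dl.jar");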
Also, note that in this case the terminate method is actually required
for the program to exit. Calling terminate allows the service discovery
manager to release the resources it has obtained (exported objects,
threads, and so on), which could otherwise keep this application
running even after the main method returns.
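A minimal sketch of the shutdown, where sdm is the ServiceDiscoveryManager created earlier:

    // Unexport the event listener and stop the helper threads created by the
    // service discovery manager; without this, its non-daemon threads can
    // keep the JVM alive after main() returns.
    sdm.terminate();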
The example can be run on a UNIX platform by doing the following:
    $ cd bin12          (or bin20 for Jini 2.0 environments)
    $ run_basic_blocking_lookup.sh
The example can be run on a Windows platform by doing the following:
    $ cd bat12          (or bat20 for Jini 2.0 environments)
    $ run_basic_blocking_lookup.bat
Assuming there is at least one publicly available lookup service and one publicly available transaction manager service, you should see output similar to the following:
    pion 102 => run_basic_blocking_lookup.sh
    + hostname
    CODEBASEHOST=pion
    + . JINI_HOME.sh
    EXJINIHOME=/files/jini1_2_1
    + java -Djava.security.policy=../policy/policy.all -Djava.rmi.server.codebase=http://pion:8081/sdm-dl.jar -jar ../lib12/BasicBlockingLookup.jar
    Creating ServiceDiscoveryManager ...
    Creating ServiceTemplate for a net.jini.core.transaction.server.TransactionManager instance
    Attempting service lookup for a net.jini.core.transaction.server.TransactionManager instance
    TransactionManager found, id=8852573c-fb09-4119-a7e9-e63ca90073fc
If not, then please refer to Appendix A for troubleshooting advice. Appendix A also contains another non-blocking lookup example that returns multiple service references.
Examples 1 and 2 demonstrate basic techniques for finding a service of interest that work relatively well when the service reference is not needed for a prolonged period of time. On the other hand, a robust application that needs to maintain highly available service references requires more work--work that the service discovery manager caching mechanism aims to alleviate.
For example, a robust application needs to handle the possibility of
a RemoteException occurring during a remote invocation
due to a service, node, or network crash. A simple approach is to
re-attempt the lookup process for another service instance after
receiving a RemoteException. Example 1 highlights the
fact that the lookup service discovery process is not instantaneous.
Add to that the cost of having to requery each lookup service (each
query resulting in a remote call) for another service of interest,
and this service "rediscovery" process becomes relatively expensive.
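A hedged sketch of that simple retry strategy follows. Here sdm, template, and item are assumed to come from the earlier lookup code, the transaction-creation call is just a representative remote invocation, the lease and time-out values are illustrative, and the enclosing method is assumed to declare throws Exception:

    // Naive rediscovery: if a remote call fails, fall back to the relatively
    // expensive lookup path and retry with a freshly discovered proxy.
    TransactionManager txnMgr = (TransactionManager) item.service;
    try {
        txnMgr.create(10 * 1000);   // representative remote call (10-second lease request)
    } catch (java.rmi.RemoteException e) {
        // The service, its node, or the network may have failed; block up to
        // 30 seconds for another matching service and retry with it.
        ServiceItem replacement = sdm.lookup(template, null, 30 * 1000);
        if (replacement != null) {
            txnMgr = (TransactionManager) replacement.service;
            txnMgr.create(10 * 1000);
        }
    }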
A less expensive and more responsive approach is to cache multiple service references, as they are discovered, for as long as that service is needed. The service discovery costs are then amortized over the application's lifetime and queries for new references become local calls on the cache instead of remote calls to lookup services.
The service discovery manager provides just such a caching scheme
through its LookupCache capability. The LookupCache implements an
optimized caching strategy that, among other things, keeps the set of
matching service references current as services register with and
disappear from the managed lookup services. Subsequent code examples
cover these topics in detail.
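As a brief preview of what those later examples cover, creating and querying a cache might look like the following sketch; sdm and template are assumed to be the ServiceDiscoveryManager and ServiceTemplate from this example, the null arguments mean no additional filter and no change listener, and the enclosing method is assumed to declare throws Exception:

    // Build a local cache of matching services; the service discovery manager
    // keeps it current in the background via lookup service events.
    LookupCache cache = sdm.createLookupCache(template, null, null);

    // Later, queries for a matching service are local calls on the cache
    // rather than remote calls to the lookup services.
    ServiceItem item = cache.lookup(null);

    // Release the cache's resources when it is no longer needed.
    cache.terminate();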