Showing posts with label OSGi.

May 31, 2017

Java 9 delayed... again

As expected, after the expert group's down vote, Jigsaw and Java 9 are delayed once more. This time it's an 8-week delay:
http://mail.openjdk.java.net/pipermail/jdk9-dev/2017-May/005864.html

Will the expert group reach an agreement? I am not very optimistic, but would be happy to be proven wrong.

Apr 17, 2017

Java 9 Jigsaw and OSGi

On Google+, Mark Derricut wrote about a post by RedHat's Scott Stark about Jigsaw, the Java 9 module system, as well as a follow-up by Mike Hearn.

I have been following Jigsaw for some years, and I found both posts very interesting. Scott's post highlights some worrying problems in the current Jigsaw implementation. I was already aware of some of them, as I had read some earlier posts by Neil Bartlett, seen his presentation at EclipseCon Europe 2016, and followed Thomas Watson's OSGi-Jigsaw interoperability experiments.

In his follow-up post Mike says he leans more toward Jigsaw because OSGi is complicated, poorly documented, uses a crazy versioning approach, and is not that well known. He also said that he'd prefer the module declaration to be a source file compiled to byte-code rather than an XML or text file, and highlights jlink (which allows you to create a statically linked distribution of your application that includes the JRE) as an example of the good tools that the JDK will provide on top of Jigsaw. Mike says the RedHat post is not fair, that the authors think "we were here first, so what we did is best practice and you should do it too", and that Oracle dropped many aspects of other module systems such as OSGi or JBoss Modules to avoid their complexity and reduce the risk.

I personally believe that RedHat builds some of the most sophisticated products on the market. We must recognize that their people are knowledgeable and that their software is robust and runs thousands of applications in production. Writing modular software is certainly more complex than writing monolithic applications. By building application servers and middleware solutions, companies like RedHat and IBM have to deal with this problem more than anyone else. They also have a very large client base, so they should know the Java enterprise landscape better than anyone else. Hence, in my opinion, when RedHat engineers express concerns about the ability of Jigsaw to fulfill today's requirements for modularity, we should listen to them very carefully. Besides, RedHat JBoss does not officially support OSGi, so they are not really trying to advocate for it; I believe they are simply trying to highlight the issues in Jigsaw. Scott's post includes contributions by Maven committers, and I trust their opinion, as Maven is not tied to OSGi or JBoss Modules. Like it or not, Maven is the standard for building Java, therefore I am very worried by the long list of issues between Jigsaw and Maven; if I were Oracle I would have paid more attention to Jigsaw-Maven compatibility.

I also believe that it should be clear to everyone that in recent years Oracle has been working a lot on JavaSE without caring much about JavaEE, which is still stuck at version 7. I am afraid we will need to wait ages for JavaEE 9, or it may even happen that JavaEE 9 is skipped and we get JavaEE 10 once Jigsaw has matured in JavaSE 10.
Long story short: the future does not look bright for JavaEE developers, who historically have always had to use older versions of Java for several years, and now it may be even worse.

Is modularity only important for vendors like RedHat or IBM? Of course not. Enterprise applications become more and more complex year after year. For example, it's now very common to have to support native mobile applications as clients, to provide APIs for third parties, to offer features such as social integration (Facebook, Twitter, LinkedIn) and collaboration (e.g. chat and video conference capabilities), and many more.
If you are a vendor with a large functional offering, chances are that not all your clients will want to buy everything; hence you need to modularize your offering so that you can license and deploy nothing more than what each client wants and needs.

I am not affiliated with RedHat or OSGi, but I would like to comment on some of Mike's statements.

First, a few words about myself. I have been coding in Java since 1999, professionally since 2001. I spent the first few years of my career on Java Enterprise, then moved to Eclipse plug-in development for 7 years or so, then back to Java Enterprise. In 2001 I was deploying Java Enterprise applications on IBM WebSphere 3 by copying all the JARs to a folder on the server file system and specifying the application class-path in a 20-column (yet scrollable!) text field of the Swing GUI used for administering WebSphere (EAR files were not supported at that time).

Mike's statement #1: OSGi is complicated
When working with large JavaEE applications, the so-called Java "Jar hell" has always been a major issue for me. It was when I started coding for Eclipse 3 that I met OSGi for the first time, and I can tell you that I cannot imagine a platform like Eclipse succeeding without a strong module system. Eclipse plug-ins are "simplified" OSGi bundles: most of them don't use the OSGi registry (or at least do not require downstream plug-ins to use it), they express dependencies in terms of plug-in names rather than packages, and the Equinox OSGi container runs in a more "relaxed" way than strict OSGi. Programming for the Eclipse platform taught me that OSGi is not that complex: you can code your first Eclipse plug-in in minutes, and modularity really helps in large applications.
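As an illustration of how little OSGi you need to get started, here is a minimal Eclipse-style plug-in manifest (the plug-in name and dependency list are made up for this example, not taken from any real project). Note how dependencies are expressed by plug-in name via Require-Bundle:

```text
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: My Example Plug-in
Bundle-SymbolicName: com.example.myplugin;singleton:=true
Bundle-Version: 1.0.0.qualifier
Require-Bundle: org.eclipse.ui,
 org.eclipse.core.runtime
Bundle-RequiredExecutionEnvironment: JavaSE-1.8
```

Everything else (activators, the service registry, dynamic listeners) is optional, which is exactly why a first plug-in takes minutes.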

When I moved back to JavaEE I tried to run OSGi on the server. The specification addresses some complex concerns, such as dynamicity and the possibility that bundles are installed/uninstalled in a live application. This requirement is covered by APIs such as BundleListener and ServiceTracker, which certainly add complexity. However, most applications will not need that dynamism. Eclipse, for example, does not allow users to replace individual bundles and always requires a restart after installing/updating plug-ins. IBM WebSphere Liberty, a JavaEE and OSGi application server, does not allow you to hot-swap individual bundles of your OSGi application, which can only be started/stopped as a whole.
Hot-swap makes a lot of sense for IoT or embedded systems, but it is not strictly necessary for enterprise applications. So if you don't need that level of dynamism, most of the perceived OSGi complexity disappears and OSGi becomes just a module system, offering APIs on top of Java that you may or may not want to use. For example, if you like Spring you can get Eclipse Virgo for Tomcat Server and develop your truly modular, Spring-based application with very limited knowledge of OSGi itself (disclaimer: I am a Virgo committer). But you will still get an OSGi environment that enforces true modularity (and can support the most complex scenarios in case you need them).

If you compare it with JavaEE, the OSGi spec is smaller and, in my opinion, better written. If you started with JavaEE several years ago and saw it grow, you learned JAXB, JAX-RS and JPA gradually. But if you start today as a new developer, JavaEE has a learning curve as steep as OSGi's. That's why frameworks like Spring are so popular.

Mike's statement #2: OSGi is poorly documented
This may be the case. For plain Java you go to the Oracle website and start from there. For OSGi, information is scattered over the web and it's difficult to pick the right starting point. The OSGi Alliance website is not that good, although they are trying to fix that with project enRoute.

Mike's statement #3: OSGi is not that successful
A lot of products out there are built on OSGi. Let me give some examples:

  • Pretty much all IBM Java products (Eclipse, IBM Notes, WebSphere, the Rational suite). I read somewhere that IBM has 100+ products built on OSGi. Again, my opinion is that if IBM selected it, there must be some good value in OSGi.
  • If I am not mistaken, Sonatype Nexus, the Maven repository, is an OSGi application.
  • Liferay is an OSGi application.
  • At least some WSO2 products are OSGi based (Carbon)
  • Glassfish/Payara is OSGi based
  • If I remember correctly, Oracle WebLogic is built on OSGi
  • Most of the popular Java libraries are also OSGi bundles, for example: Google Guava, Google Guice, Google Gson, most of the Apache libraries, Jackson, Slf4j, EclipseLink and Jersey by Oracle, and many many more. I doubt all these projects would have shipped OSGi metadata out of the box if no one had been asking for it.


Mike's statement #4: OSGi’s approach to versioning is really crazy
Mike complains that OSGi requires you to give a version not only to the module itself but also to the exported Java packages. Having worked on Eclipse plug-ins first and later on pure OSGi, I think the OSGi approach makes a lot of sense. Eclipse plug-ins depend on module names. This is problematic when the code base is refactored and a large plug-in is split in two: your dependency is impacted and you need to change the name of the plug-in you depend upon or add another one. That's why Eclipse contains "compatibility" bundles that re-aggregate and re-export as a whole modules that have been split. Depending on packages makes much more sense: your dependency declaration is not affected even if the maintainers split or rename their bundles. Would you depend on javax.xml.parsers or would you depend on Apache Xerces? Would you depend on the Servlet spec or would you want your code to depend on the Tomcat implementation?
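To make the contrast concrete (the version ranges below are made up for illustration), here are the two styles of dependency expressed as MANIFEST.MF headers. The first ties you to the Xerces bundle name; the second only asks for the javax.xml.parsers API packages, whoever exports them:

```text
Require-Bundle: org.apache.xerces;bundle-version="[2.9.0,3.0.0)"
Import-Package: javax.xml.parsers;version="[1.4.0,2.0.0)"
```

If the maintainers split or rename the Xerces bundle, the Require-Bundle line breaks, while the Import-Package line keeps resolving against whichever bundle now exports the package.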

Mike's statement #5: module declaration should be a source file
I agree with Mike that the module declaration should be natively supported by the compiler. I believe this is also what the OSGi Alliance would have wanted if they had had the ability to change the compiler. Module declarations as source files allow the compiler to check for errors at compile-time. At the same time I also agree with Scott that the module declaration should be readable, to diagnose problems at run-time. Therefore I agree it should not be compiled to byte-code, but rather kept "as is" to be readable, or maybe compiled at run-time, or maybe compiled into a readable format, such as a MANIFEST.MF file or an XML file.
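For reference, a Jigsaw module declaration is a plain source file (the module and package names below are made up for illustration) that javac compiles into module-info.class, which is exactly the byte-code form being debated here:

```java
// module-info.java: a hypothetical module declaration
module com.example.orders {
    requires java.sql;              // dependency on another module
    exports com.example.orders.api; // only this package is visible to consumers
}
```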

Mike's statement #6: Oracle will build better tools on top of Jigsaw
It's true that today, to code for OSGi or JBoss Modules, you have to rely on IDEs and third-party tools, as those module systems are not supported natively by the JDK. So it's clear that the situation will improve when the Java language provides modularity out of the box. However, this point is a bit stretched in my opinion, because it would also have been the case if Oracle had decided to build a different Jigsaw, or to adopt OSGi or JBoss Modules. Plus, I would not bring up jlink as an example of a great improvement, because that tool will most likely be of little interest to Java enterprise developers, and in my opinion it's in Java Enterprise that modularity matters most. The Eclipse platform of course is not JavaEE, but despite being a desktop application, its nature makes it more similar to a container than to a JavaSE application.

Conclusions
To conclude: in my opinion, modularity in software is inherently complex. Module systems need to support that complexity and will therefore be complex as well. OSGi addresses not only modularity but also dynamism, which adds additional levels of complexity. You can however decide to ignore the dynamic part of OSGi, as the Eclipse platform does, and simply take advantage of its support for modularity. Jigsaw could do the same: be a subset of OSGi and provide support for modularity. I understand that Oracle is having trouble making the JDK modular, and that Jigsaw is conservative to reduce risk. However, it's probably a bit too limited as a module system. Let's hope this is just the first iteration, that it will be adopted by developers, and that Oracle will quickly release a 2.0 version that is feature-wise closer to OSGi or JBoss Modules. Meanwhile, Maven will probably play a big role in whether Jigsaw succeeds or not.

Mar 10, 2017

Virgo 3.7.0 Released

Today Florian announced the availability of Virgo 3.7.0.

Virgo 3.7.0 brings several improvements (most notably Tomcat 8.5 and Spring Framework 4.2) and is paired with release 1.5.0 of the Virgo Tools, the Eclipse IDE integration.

This is the first official Virgo release since I joined the project as a committer, and it's also my first open source release in general. I am very happy and very proud of what we achieved.

Download Virgo and get Virgo Tools from the update site!

Jan 5, 2017

Virgo Tools 1.5

I have been a committer on the open source Eclipse Virgo project since September 2015.

I have written several posts in the past about the Eclipse Virgo OSGi application server, and after helping the project team with patches and bug fixes I was invited by Florian, the project lead, to join the team as a committer.

I am an individual committer, contributing to the project in my spare time. I have spent almost all of my effort on the Virgo Tools, the Eclipse plug-ins that integrate Virgo as a test environment in Eclipse. The most notable improvements in the upcoming version 1.5 of the Virgo Tools will be:
  1. Support for PDE Plug-in projects, i.e. the possibility to develop for the Virgo runtime using the Eclipse PDE (Plug-in Development Environment) tools. This was one of the most requested and longest-standing missing features. See here and here.
  2. Improved support for plan projects. Virgo supports deployment of a number of artifacts, including Plans. Plans are XML files listing Bundles and other Plans to be activated. The Virgo Tools now provide fair support for having Plan files in Eclipse and deploying them to the test environment.
  3. Bug fixes and documentation
  4. Compatibility fixes for Eclipse Neon
We are currently planning to release the Virgo Tools 1.5 in January 2017 to be shortly followed by Eclipse Virgo 3.7.

Stay tuned!


Oct 22, 2014

Profiling Virgo with JProfiler

The information below has been tested with JProfiler 5 on Linux 64-bit, and assumes that the JProfiler agent is installed in the folder /opt/jprofiler5.
While this post refers to a rather old JProfiler version, I am sure the principles also apply to more recent versions.
 
The following environment variables must be set before starting Virgo:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/jprofiler5/bin/linux-x64
export JAVA_OPTS="$JAVA_OPTS -Xbootclasspath/a:/opt/jprofiler5/bin/agent.jar -agentpath:/opt/jprofiler5/bin/linux-x64/libjprofilerti.so=port=8849" 
If your JProfiler installation is not located in /opt/jprofiler5 change the above paths accordingly.

A convenient method for setting the environment variables is to create a setenv.sh file in the Virgo bin folder. If such a file exists, it will be invoked by the startup.sh script.
 
Additionally, it is necessary to modify the java6-server.profile file in the <VIRGO_HOME>/configuration folder to ensure that the property org.osgi.framework.bootdelegation includes the com.jprofiler package, as in the example below.
Note: do not replace the file content, just add the com.jprofiler.*,\ line.
org.osgi.framework.bootdelegation = \
 org.eclipse.virgo.osgi.extensions.*,\
 org.eclipse.virgo.osgi.launcher.*,\
 org.eclipse.virgo.kernel.authentication,\
 com.sun.*,\
 com.jprofiler.*,\
 javax.xml.*,\
 org.apache.xerces.jaxp.*,\
 org.w3c.*,\
 org.xml.*,\
 sun.*
After applying the above changes, start Virgo and connect using the JProfiler GUI.

Aug 30, 2014

How to use Hyperic to monitor Eclipse Virgo 3.6

A few days ago I started looking for a monitoring tool to track the health status of our Virgo Server for Apache Tomcat instances.

As I could not find anything Virgo specific, I looked for an open source tool that supported Apache Tomcat 7.0.

My first try was Moskito, but it does not seem to support the OSGi-aware Tomcat that is embedded in Virgo. In any case I did not like the deployment approach, which requires adding Moskito to every monitored Web App.

So I searched for monitoring tools that use Tomcat's JMX support, as I believed such tools should be unaffected by the OSGi runtime, and I found Hyperic.

Hyperic 5.8 

Hyperic is developed by SpringSource, and there are both an enterprise distribution (vCenter Hyperic) and a free-to-use distribution.

Hyperic consists of the Hyperic Server and the Hyperic Agent.

The server is a Tomcat6-based Web application that uses PostgreSQL to store historical data and provides the monitoring console.

The agent is a stand-alone Java application that integrates with the OS via the well-known Sigar native library (used by JMeter, Elasticsearch, etc.), and features a number of plug-ins for different servers and database systems (WebSphere, WebLogic, JBoss, Tomcat, Apache, Oracle, DB2, PostgreSQL, MySQL and many more).

A screenshot taken from the SourceForge download page

Installation and usage

You must first install the Hyperic Server, typically on a dedicated machine that you will be using for monitoring. Then you must install the Hyperic Agent on every server machine that you want to monitor.

The Hyperic Agent will automatically discover running database systems and application servers on the machine where it is installed, and it will offer them as observable resources to the Hyperic Server.

From the Hyperic Server console the administrator can select what to monitor (e.g. Apache Tomcat or MySQL, but also operating system CPU, swap, file system, I/O, network, etc.).

The agent will then start sampling the requested items and push the information to the console, which can display historical data, paint graphs, and be configured to raise alerts when certain conditions are met (e.g. low disk space, high CPU usage, Tomcat process crashed, etc.).

Tweaking the Virgo installation to pretend it's Tomcat 7

Even though Virgo was created by SpringSource, Hyperic does not seem to support Virgo OOTB, or at least the open source edition does not.

It however supports Tomcat 7.0 via an agent plug-in called tomcat-plugin.jar.
A quick look at the plug-in sources reveals the Tomcat 7.0 discovery process:
  1. The plug-in uses a Sigar query to identify running processes that are potential Tomcat servers. The query just looks for running Java processes that include the parameter -Dcatalina.base=<some_path> in the command line.
  2. The version of Tomcat is determined by checking the existence of some files that are specific to each Tomcat version. For Tomcat 7, the file is <catalina.base>/lib/tomcat-api.jar (see here).
  3. Once Tomcat is identified, the agent will connect to the process to collect samples, provided that JMX support is enabled. Luckily enough JMX is enabled per default in Virgo.
To have Virgo recognized as Tomcat 7.0:
  1. Set JAVA_OPTS to -Dcatalina.base=<VIRGO_INSTALLATION_FOLDER> before starting Virgo:
    export JAVA_OPTS=-Dcatalina.base=/home/giamma/virgo
    ./startup.sh
  2. Create an empty file named tomcat-api.jar in the Virgo lib folder:
    cd /home/giamma/virgo
    touch lib/tomcat-api.jar
Now start Virgo, and after a few seconds it will show up as a Tomcat 7 server in the Hyperic console.
Note that the Hyperic template for Tomcat 7 will use wrong defaults for log files etc, but you can change them after enrolling the Virgo server in the Hyperic console.

In case you wonder, the catalina.base system property has no side effect on Virgo, see here.

Patching the Agent to monitor global data sources

The current Hyperic Agent plugin for Tomcat is capable of monitoring only data sources that are defined at the Web App level.

If you are using Tomcat global data sources as we are doing (see here for details), they will not be monitored by Hyperic.

However, you can easily patch the tomcat-plugin.jar as explained here. Just remember that after patching the plug-in you must replace it both in the Hyperic Agent and in the Hyperic Server.

./agent-5.8.2/bundles/agent-5.8.2/pdk/plugins/tomcat-plugin.jar

./server-5.8.2/hq-engine/hq-server/webapps/ROOT/WEB-INF/hq-plugins/tomcat-plugin.jar

May 30, 2013

Tinybundles, a little gem for OSGi testing

In my OSGi applications I often make use of the OSGi extender pattern. An excellent example is available on Kai Toedter's blog.

As Peter Kriens explained:
In the Synchronous Bundle Listener event call back, the subsystem can look at the resources of the bundle and check if there is anything of its liking. If it is, it can use this resource to perform the necessary initialization for the new bundles.

Writing unit tests for a Synchronous Bundle Listener is often a tricky task. To test the behaviour of your extender you need to install and start (and later stop and uninstall) bundles via the OSGi API.

The API allows you to install a bundle by passing in an InputStream opened from a bundle file. So the simplest solution usually consists in packaging a small test bundle as a binary resource in the test case package, and using the OSGi API from the test case class to install the bundle in the OSGi framework. This approach certainly works but is quite annoying, because you need to repackage the test bundle every time you make a change.

Some time ago I found a better solution: the Tinybundles library. The project documentation is here, while the sources are available on GitHub.

Tinybundles provides a convenient API to create an OSGi bundle from resources and classes available in your test case classpath:

TinyBundle bundle = TinyBundles.bundle()
            .add( TestActivator.class )
            .add( HelloWorld.class )
            .set( Constants.BUNDLE_SYMBOLICNAME, "test.bundle" )
            .set( Constants.EXPORT_PACKAGE, 
                HelloWorld.class.getPackage().getName() )
            .set( Constants.IMPORT_PACKAGE, 
                "org.apache.commons.collections" )
            .set( Constants.BUNDLE_ACTIVATOR, 
                TestActivator.class.getName() );

InputStream is = bundle.build(TinyBundles.withBnd());        
Bundle installed = getBundleContext().installBundle("dummyLocation", is);
installed.start();

In this way you can generate bundles on the fly within your test case for the purpose of testing your bundle listeners.

Simple, isn't it?

Oct 31, 2012

Global JNDI support in Virgo Server 3.5 for Tomcat


This rather long post analyses four solutions to the problem of using data sources, and more generally JNDI, in Virgo. The fourth, which I recommend and decided to use, consists in leveraging Tomcat's built-in JNDI provider in Eclipse Virgo Server for Apache Tomcat.

If you are not interested in the reasons why I dropped the first three, jump directly to the fourth.

Even though the post mostly focuses on JDBC data sources, once the Tomcat JNDI provider is exposed to the application it can be used for any type of resource, not only data sources.

Most of the credits for this solution go to my colleague Stefano Malimpensa.

1. JDBC data sources in OSGi

The most correct approach to obtain a JDBC data source in a pure OSGi enterprise application consists in using the OSGi JDBC Service (see the official OSGi JDBC specification).  In Virgo, that means using Gemini DBAccess.

However, in my humble opinion Gemini DBAccess is not an optimal solution, for a number of reasons:

  • Gemini DBAccess is currently available only for Derby, and to use a different database you need to write your own implementation. Not a big issue but some extra effort anyway.
  • To integrate DBAccess with EclipseLink you should probably use Gemini JPA, which is affected by a bug that prevents connection pooling from working and is therefore not usable in production: https://bugs.eclipse.org/bugs/show_bug.cgi?id=379397
  • When using DBAccess with Gemini JPA you need to configure connection parameters in each bundle's persistence.xml. I find this inconvenient because it is necessary to repackage the bundles every time the connection parameters change, and due to the modular nature of OSGi one complex application may include several bundles with persistence units.
  • If DBAccess is used without Gemini JPA, DBAccess will provide only a data source factory, and it will be the responsibility of your code to instantiate and configure the data source (e.g. pass in user name, password, etc.). In that case you would need to support a configuration file to let system administrators easily change connection parameters, which is again extra effort.
  • DBAccess requires the OSGi registry, which means it would not work with legacy code or third party libraries written for J2EE. 
As the name implies, Gemini DBAccess is tailored to database resources. If you want a single, unified approach for looking up any type of resource, then it's not a good fit for you.

2. Full JNDI in OSGi

At this point you may want to try Gemini Naming, which implements the OSGi JNDI Service specification. Even Gemini Naming is, in my humble opinion, not an optimal solution:
  • There is no configuration console, nor a configuration file: the only way to bind resources in the JNDI namespace is programmatically. This implies a lot of boring initialisation code, and you probably need to support a configuration file to let system administrators easily change configuration parameters, which is again extra effort.
  • Gemini Naming requires the OSGi registry, which means it would not work with legacy code or third party libraries written for J2EE.

3. Local JNDI declared inside a Web App


Virgo supports JNDI lookups for data sources inside a Web App. To achieve this you have to:
  • Include in the Web application the JDBC driver(s) and the pool implementation (e.g. Apache DBCP or Tomcat JDBC) as jars in your WEB-INF/lib folder
  • Configure web.xml to list the usual JNDI resource-refs
  • Include a Tomcat context.xml file in the Web App and configure it as explained here and here
The above will work for JDBC data sources but has the following drawbacks:
  • You must include the JDBC driver and pool in every Web App of yours. This means that each Web App will have its own pool, even if they connect to the same database, and that you must repackage the WAR if you need to update the JDBC driver
  • JNDI lookup will work only in a thread originated by an HTTP request. This means that application bundles that are not WARs will not be able to obtain the data source via a JNDI lookup, unless their code is executed by a thread started by the Web container. In fact, the JNDI lookup will fail from threads created by Equinox: this is for example the case of code that observes OSGi framework lifecycle events (BundleListener) and needs to access the database when a bundle is installed or uninstalled.
In my case the main show-stopper for this solution is the thread issue described above, because my application must access the database (and therefore the data source) from threads that are not always started by the Web container. If you are interested in the historical roots of this apparently strange limitation, I recommend reading this post by Neil Bartlett.

4. Tomcat global JNDI registry


Another option is available which is not affected by any of the above issues and limitations: using Tomcat's global JNDI support. You gain a general-purpose, well-tested and well-documented JNDI registry capable of deploying any type of resource, not only data sources (it can even be extended to support custom resource types: http://tomcat.apache.org/tomcat-6.0-doc/jndi-resources-howto.html#Adding_Custom_Resource_Factories).

Please note that this solution makes the global JNDI namespace available by disabling the Web App java:comp/env context. In other words, you cannot use the java:comp/env prefix in your JNDI lookups. This should be an acceptable limitation, given that the java:comp/env prefix would in any case work only within a Web App and not from a plain OSGi bundle.

In order to use the Tomcat JNDI registry for data sources in Virgo, the following steps are required:

  1. Create a bundle fragment for Catalina to extend Virgo's Tomcat with connection pooling support. The Virgo distribution in fact contains a stripped-down version of Tomcat that omits the libraries required for JDBC pooling. Luckily enough, you can create a fragment to contribute the libraries back to Catalina. Just make sure that your fragments are placed in the bundle repository folder, not in pickup.
  2. Create a bundle fragment for Catalina to make the required JDBC driver(s) available to the server and the application
  3. Create a fragment that gets the JNDI context from Tomcat and that registers in the JVM a global InitialContextFactoryBuilder
  4. Edit tomcat-server.xml and define your global resources

All the above fragments are a convenient method for extending Tomcat/Catalina with third-party libraries whose Java packages were not originally imported by the Virgo bundles. For further details refer to this post by Glyn Normington, the Virgo project lead. If you are working with Virgo, his personal blog is a must-read!



1. The pool bundle fragment

Here is a sample for Apache DBCP. The MANIFEST.MF below is added to the DBCP JAR, which is why there is no Bundle-Classpath. Mind the Fragment-Host header.

Manifest-Version: 1.0
Fragment-Host: com.springsource.org.apache.catalina
Bundle-ManifestVersion: 2
Bundle-Name: Tomcat DBCP
Bundle-SymbolicName: org.apache.tomcat.dbcp
Bundle-Version: 7.0.27
Bundle-Vendor: apache
Bundle-RequiredExecutionEnvironment: JavaSE-1.6
Import-Package: javax.management,
 javax.management.openmbean,
 javax.naming,
 javax.sql,
 org.apache.juli.logging


2. The JDBC driver bundle fragment

Here is a sample for PostgreSQL. The MANIFEST.MF below is added to the PostgreSQL driver JAR, which is why there is no Bundle-Classpath. Mind the Fragment-Host header.

Manifest-Version: 1.0
Fragment-Host: com.springsource.org.apache.catalina
Bundle-ManifestVersion: 2
Bundle-Name: PostgreSQL JDBC Driver
Bundle-SymbolicName: org.postgresql.jdbc.catalina
Bundle-Version: 9.2.1000
Bundle-Vendor: postgresql
Bundle-RequiredExecutionEnvironment: JavaSE-1.6


3. JNDI bridge fragment

Create a fragment with the following MANIFEST.MF that includes the two classes below and place it in the repository folder.

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: JNDITomcatBridge
Bundle-SymbolicName: jndi.tomcat.bridge
Bundle-Version: 1.0.0.qualifier
Bundle-Vendor: org.example
Fragment-Host: com.springsource.org.apache.catalina;bundle-version="7.0.26"
Bundle-RequiredExecutionEnvironment: JavaSE-1.6
Import-Package: org.apache.catalina.mbeans;version="7.0.26"

The code below consists of two classes.

The first, GlobalJNDILifecycleListener, is a GlobalResourcesLifecycleListener subclass that downcasts the server instance to obtain the global JNDI context, and registers in the JVM NamingManager a custom InitialContextFactoryBuilder that wraps the JNDI context obtained from Tomcat.

Other options are of course possible, including doing everything in the custom InitialContextFactoryBuilder implementation without subclassing the listener. Whatever approach you adopt, it is important to make sure that the invocation of NamingManager.setInitialContextFactoryBuilder occurs only once in the life of a Virgo server instance, because the method will fail with a runtime exception if it is called more than once.
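One way to enforce that single invocation is to check NamingManager.hasInitialContextFactoryBuilder() before installing the builder. The class and method names below are hypothetical, not part of Virgo or Tomcat; this is just a sketch of the guard:

```java
import javax.naming.NamingException;
import javax.naming.spi.InitialContextFactoryBuilder;
import javax.naming.spi.NamingManager;

public class BuilderGuard {

    /**
     * Installs the given builder only if none is registered yet.
     * Returns true if this call installed it, false if a builder was
     * already present (a second setInitialContextFactoryBuilder would throw).
     */
    public static synchronized boolean installOnce(InitialContextFactoryBuilder builder) {
        if (NamingManager.hasInitialContextFactoryBuilder()) {
            return false;
        }
        try {
            NamingManager.setInitialContextFactoryBuilder(builder);
            return true;
        } catch (NamingException e) {
            throw new IllegalStateException("Could not install builder", e);
        }
    }

    public static void main(String[] args) {
        // A trivial builder, for demonstration only.
        InitialContextFactoryBuilder demo = env -> e -> null;
        System.out.println(installOnce(demo)); // true: first installation wins
        System.out.println(installOnce(demo)); // false: a builder is already set
    }
}
```

In the listener above one would call something like BuilderGuard.installOnce(factory) instead of invoking setInitialContextFactoryBuilder directly, so that a restarted Tomcat lifecycle does not trigger the "builder already set" exception.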


import javax.naming.Binding;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.NamingException;
import javax.naming.spi.NamingManager;

import org.apache.catalina.Lifecycle;
import org.apache.catalina.LifecycleEvent;
import org.apache.catalina.Server;
import org.apache.catalina.mbeans.GlobalResourcesLifecycleListener;
import org.apache.juli.logging.Log;
import org.apache.juli.logging.LogFactory;

public class GlobalJNDILifecycleListener extends GlobalResourcesLifecycleListener {
    private static final Log log = LogFactory.getLog(GlobalJNDILifecycleListener.class);

    @Override
    public void lifecycleEvent(LifecycleEvent event) {
        super.lifecycleEvent(event);
        if (Lifecycle.START_EVENT.equals(event.getType())) {
            Server server = (Server) event.getLifecycle();
            Context ctx = server.getGlobalNamingContext();
            ContextFactory factory = new ContextFactory(ctx);
            try {                
                NamingManager.setInitialContextFactoryBuilder(factory);
                log.info("Published Global Naming as default InitialContext");
                logJNDIEntries(ctx, null);
            } catch (NamingException e) {
                log.error("Naming Exception:", e);
            }
        }

    }

    private void logJNDIEntries(Context context, String prefix) throws NamingException {

        NamingEnumeration<Binding> namingEnumeration = context.listBindings("");

        while (namingEnumeration.hasMoreElements()) {
            Binding binding = namingEnumeration.next();
            String nameEntry = binding.getName();
            String fullName = (prefix == null || prefix.equals("") ? nameEntry : prefix + "/" + nameEntry);
            String entryClassName = binding.getClassName();
            if (Context.class.getName().equals(entryClassName)) {
                Context ctx = (Context) binding.getObject();
                logJNDIEntries(ctx, fullName);
            } else {
                log.info("Found: " + fullName);
            }
        }
    }

}
import java.util.Hashtable;

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.naming.spi.InitialContextFactory;
import javax.naming.spi.InitialContextFactoryBuilder;

/**
 * Simple implementation of {@link InitialContextFactory} that returns the global {@link InitialContext}
 * obtained from Tomcat
 *
 * @author giamma, stefano
 *
 */
public class ContextFactory implements InitialContextFactoryBuilder, InitialContextFactory {

    private Context context;

    public ContextFactory(Context context) {
        this.context = context;
    }
   
    @Override
    public InitialContextFactory createInitialContextFactory(Hashtable environment) throws NamingException {
        return this;
    }

    @Override
    public Context getInitialContext(Hashtable environment) throws NamingException {
        return context;
    }

}

4. tomcat-server.xml

 

Replace the listener with yours and declare your JNDI resources:
 
 <!-- <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" /> -->
 <Listener className="com.example.catalina.mbeans.GlobalJNDILifecycleListener"/>
 
<GlobalNamingResources>
 <Resource name="jdbc/db"
           type="javax.sql.DataSource" username="user" 
           password="password" 
           factory="org.apache.tomcat.dbcp.dbcp.BasicDataSourceFactory" 
           driverClassName="org.postgresql.Driver" 
           url="jdbc:postgresql://127.0.0.1/db" 
           maxActive="20" maxIdle="10"/>
</GlobalNamingResources>


You can now happily perform a plain old new InitialContext().lookup() from every method of every class of your OSGi application deployed in Virgo, regardless of the origin of the thread.