Friday, December 17, 2010

Java EE 6 - Who's In?

Been a while since I've written anything, so I'll ease into the waters with this one - it's been over a year since Java EE 6 was released with some very cool updates that I've discussed here and here and here and here and here and here and here and here and here and here and here (dang, I was busy!). So I'm interested in hearing what kind of adoption it's gotten so far. Anybody?

Now, I know that there still aren't a lot of servers that support it -- let's see, there's Glassfish, and then there's... hmmm... well, I think Resin 4 has been released... JBoss 6 isn't quite there yet, nor are any of the more expensive products, at least not to my knowledge (I'll be perfectly honest - I don't pay much attention to them!)

One that interests me is SIwpas - it's a Web Profile implementation based on Tomcat, and apparently several other open source products, although I fear it suffers from AAS (Awful Acronym Syndrome!). But the question is, is anyone using it, or the other products? I'd love to know!

M

BTW - the last time I blogged about JBoss not having released a server after an extended period of time, they released it the very next day - if I were a bettin' man, I'd put money on JBoss 6 going final tomorrow, but since I'm not, and since no one releases software on a Saturday, I'll have to go with a firm guess that it'll be out soon!

Wednesday, January 6, 2010

Organize Your Logs With a Cool Java EE 6 Trick

Picture this -- it's 9:00 Friday night, and you've just gotten a phone call asking why the hell a key part of your system is down... after verifying that something's definitely busted, you open up the only resource you have -- your system logs... it doesn't take you long to find some exceptions, but they don't tell you much of the story... pretty soon, you realize there are 5 or 6 different errors being thrown, plus messages from areas of the system that appear to be working fine... to boot, it's the middle of your busiest time of the year, which means that you may have a few thousand users on the system at this very moment... yikes -- how the heck do you make heads or tails of this mess?

Logging -- no longer an afterthought

Ok, so four or five days later, when you finally sort out your issue, it's time to make things better before that happens again... it's time to actually put some thought behind your logging practices... first stop -- learn how to log, and put some standards in place! I'm not going to elaborate on the details of that article, because I think the author does a fine job... frankly, I was hooked when he defined the logs as a 'secondary interface' of your system -- your support staff (i.e. -- you) can't see what your customers are looking at in their browsers, so you need to make damn sure that you're providing enough information in your logs for you to understand what's going on!

Let's be real, though -- the traffic on your system hasn't gone down any since that fateful Friday (luckily), and you don't have the time to rework all of the logging in your system... there has to be a way to put some incremental improvements in here that will make your life easier the next time things catch fire, even if that's tomorrow...

Adding the Context without the Pain -- or a single change to your core code

Ultimately, you were able to make some sense out of that catastrophe by realizing your logging framework was providing you with a subtle piece of context -- the thread name... seems innocuous, but in most Servlet containers, it's enough to identify that each line in the log belonged to a particular thread -- or request... It's not perfect though -- you didn't have any messages in your logs that stated "Starting GET request for /shoppingcart/buySomething.html", so you couldn't tell exactly where each request started and ended... luckily, with Java EE 6 and a good logging framework, it's not hard to get there...

Before I dig in, though, let's get acquainted with the Mapped Diagnostics Context, or MDC -- hopefully, your logging system supports it (log4j does, so most folks will be covered)... MDC provides the ability to attach pieces of context to the thread of execution you're in, and allows for you to add this info on all log messages...

The following example shows a piece of code that uses the MDC in SLF4J -- a logging facade, much like the Apache Commons Logging framework, that provides a single interface over multiple logging runtimes -- excellent for building libraries when you don't want to impose a logging system on your users... Anyway, on to the show:

import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

import org.slf4j.MDC;

public class RequestLoggingContext implements Filter {
    private static final String SESSION_CONTEXT = "session-context";

    ...

    @Override
    public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain) throws IOException, ServletException {
        HttpSession session = null;

        if (req instanceof HttpServletRequest) {
            HttpServletRequest httpRequest = (HttpServletRequest) req;
            session = httpRequest.getSession(false);

            // Attach the session id to the logging context for the duration of this request
            if (session != null)
                MDC.put(SESSION_CONTEXT, session.getId());
        }

        chain.doFilter(req, resp);

        // Clean up so the pooled thread doesn't carry stale context into the next request
        if (session != null) {
            MDC.remove(SESSION_CONTEXT);
        }
    }
}
Pretty simple -- two static methods on the 'MDC' class -- 'put' and 'remove'... while I'm not a particular fan of the static API, this is about as simple as it gets (incidentally, this is the only 'unfortunate' use of static methods that I have seen in SLF4J -- elsewhere they stick to the standard static factory pattern, which at least makes sense, and has precedent)... so what the heck did this do? Well, we now have the ability to refer to that "session-context" as part of our logging 'Pattern', using the "%X{session-context}" flag -- like so:
<configuration>

    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <layout class="ch.qos.logback.classic.PatternLayout">
            <pattern>%d{HH:mm:ss.SSS} [session-context=%X{session-context}][%thread] %-5level %logger{36} - %msg%n</pattern>
        </layout>
    </appender>

    <root level="debug">
        <appender-ref ref="STDOUT" />
    </root>
</configuration>
BTW, that is not a log4j config file -- it's a Logback config... Logback is the 'native' implementation of the SLF4J API, written by the same folks who brought you Log4J -- kind of a 'take two', if you will... anyway, it should be obvious that its configuration borrows heavily from Log4J's :)

So we have now added context to our logging system -- and all without disturbing a single line of code in our existing system... but wait, there's more!

The Trick

One of the interesting additions to Java EE 6 is the combination of Servlet annotations and web fragments -- together they allow library authors to make their libraries self-configuring, where previously the end user would need to make additions to the web.xml... a great use of Convention Over Configuration, and very powerful, indeed!

So let's take the above code sample and expand it to include a randomly generated context id for each HttpRequest, and some basic log messages to delineate the start and end of every request:
@WebFilter("/*")
@WebListener
public class RequestLoggingContext implements Filter, HttpSessionListener {
private static final String REQUEST_CONTEXT = "request-context";
private static final String SESSION_CONTEXT = "session-context";

private Logger log = LoggerFactory.getLogger(RequestLoggingContext.class);

@Inject
private ContextGenerator contextGenerator;

@Override
public void init(FilterConfig fc) throws ServletException {
}

@Override
public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain) throws IOException, ServletException {
MDC.put(REQUEST_CONTEXT, contextGenerator.generateContextId());

StringBuilder msg = new StringBuilder();
if(req instanceof HttpServletRequest) {
HttpServletRequest httpRequest = (HttpServletRequest)req;
HttpSession session = httpRequest.getSession(false);

if(session != null)
MDC.put(SESSION_CONTEXT, session.getId());

//Build Detailed Message
msg.append("Starting ");
msg.append(httpRequest.getMethod());
msg.append(" request for URL '");
msg.append(httpRequest.getRequestURL());
if(httpRequest.getMethod().equalsIgnoreCase("get") && httpRequest.getQueryString() != null) {
msg.append('?');
msg.append(httpRequest.getQueryString());
}
msg.append("'.");
}

if(msg.length() == 0) {
msg.append("Starting new request for Server '");
msg.append(req.getScheme());
msg.append(":\\");
msg.append(req.getServerName());
msg.append(':');
msg.append(req.getServerPort());
msg.append('/');
}

log.info(msg.toString());
long startTime = System.currentTimeMillis();

chain.doFilter(req, resp);

msg.setLength(0);
msg.append("Request processing complete. Time Elapsed -- ");
msg.append(System.currentTimeMillis() - startTime);
msg.append(" ms.");
log.info(msg.toString());

if(((HttpServletRequest)req).getSession(false) != null) {
MDC.remove(SESSION_CONTEXT);
}
MDC.remove(REQUEST_CONTEXT);
}

@Override
public void destroy() {
}

@Override
public void sessionCreated(HttpSessionEvent hse) {
MDC.put(SESSION_CONTEXT, hse.getSession().getId());
}

@Override
public void sessionDestroyed(HttpSessionEvent hse) {
}
}

All that's left is to literally throw that in its own .jar file, put it in your WEB-INF/lib folder, and add either or both of the 'context' keys to your logging config, and presto -- you have logging context! (I have omitted the definition of the ContextGenerator class for brevity -- it just generates a random string; a quick sketch follows the log output below.) Now your logs will look something like this:
INFO: 00:02:11.140 [request-context=sonqc52zbqia][http-thread-pool-8080-(1)] INFO  c.m.l.support.RequestLoggingContext - Starting GET request for URL 'http://localhost:8080/Test/'.
INFO: 00:02:12.156 [request-context=sonqc52zbqia][http-thread-pool-8080-(1)] INFO c.m.l.i.TimingLogInterceptor - Executing com.test.facade.LoadHomeFacade.loadData
INFO: 00:02:12.156 [request-context=sonqc52zbqia][http-thread-pool-8080-(1)] INFO c.m.l.i.TimingLogInterceptor - Doing something interesting.
INFO: 00:10:36.250 [request-context=sonqc52zbqia][http-thread-pool-8080-(1)] INFO c.m.l.support.RequestLoggingContext - Request processing complete. Time Elapsed -- 719 ms.
So, without touching a single line of existing code or modifying a single class, we can clearly associate any logging message in our system with the other messages generated on that request, and we have a clear delineation of where each request begins and ends, and how long it took to execute... pretty damn sweet! So when your system blows up next Friday night, you'll be a bit more prepared to sort things out before the weekend is over! (just don't throw out those scripts that sort based on 'request-context'!)
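
As promised, here's a minimal sketch of what that ContextGenerator might look like -- the class and method names come from the filter above, but the id format (a short random string from SecureRandom) is just one reasonable choice, not the only one:

import java.math.BigInteger;
import java.security.SecureRandom;

import javax.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class ContextGenerator {
    // SecureRandom is thread-safe, so a single application-scoped instance is fine
    private final SecureRandom random = new SecureRandom();

    /** Returns a short, random, log-friendly id. */
    public String generateContextId() {
        return new BigInteger(60, random).toString(32);
    }
}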

Final Word

Final word? I guess that means there's more -- three things, actually... first -- there is absolutely nothing preventing you from putting the above in place if you're on an earlier version of the Java EE spec (and let's face it -- that's pretty much all of us!)... The only thing you lose is the self configuration, so you'll need to add the appropriate <filter> and <filter-mapping> elements to your web.xml
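
If you go that route, it's the same old pre-Servlet-3.0 boilerplate -- something like the following (the package name here is just a placeholder for wherever you put the filter):

<filter>
    <!-- adjust the filter-class to match your actual package -->
    <filter-name>RequestLoggingContext</filter-name>
    <filter-class>com.example.logging.RequestLoggingContext</filter-class>
</filter>
<filter-mapping>
    <filter-name>RequestLoggingContext</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>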

Second, if you're on Java EE 6 (wow, that was fast!), and your application already makes use of Servlet Filters, whether they're 'self configured' or not, you may need to do some configuration in your web.xml to provide an explicit ordering -- note that this is not strictly required, although it is probably a good idea :)...
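
If you do need to pin it down, the Servlet 3.0 tool for this (if I remember the spec correctly) is the <absolute-ordering> element, placed directly under the root <web-app> element of your web.xml -- it orders the web fragments themselves, which in turn should drive the order their filters land in the chain... the fragment name below is made up, and would have to match the <name> declared in that jar's web-fragment.xml:

<absolute-ordering>
    <!-- hypothetical fragment name -- must match the <name> in the library's web-fragment.xml -->
    <name>RequestLoggingFragment</name>
    <others/>
</absolute-ordering>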

And finally, I mentioned above that Log4J users were in luck when it came to supporting MDC... unfortunately, the JDK Logging API doesn't support MDC (come on! Why not! Am I the only one who seems to think they haven't advanced this API in the last five years!?) -- those users aren't entirely out of luck, though... there is a way to 'subclass' the JDK Logger and add logging info to the front or end of any logging message, although it's tricky -- unfortunately, I don't have this code handy anymore, but perhaps I'll sit down and figure it out again if I'm so inclined one day (of course, if I get feedback to do this, it might make me more inclined :) )
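
In the meantime, here's a different, much cruder way to fake an MDC on top of plain JDK logging -- a ThreadLocal holder plus a custom java.util.logging.Formatter, rather than a Logger subclass... just a rough sketch, not the code I mentioned above:

import java.util.logging.Formatter;
import java.util.logging.LogRecord;
import java.util.logging.SimpleFormatter;

// A poor-man's MDC for JDK logging: a per-thread context string that a custom
// Formatter prepends to every record. Register it on your Handler (e.g. via logging.properties).
public class ContextFormatter extends Formatter {
    private static final ThreadLocal<String> CONTEXT = new ThreadLocal<String>();
    private final Formatter delegate = new SimpleFormatter();

    public static void put(String value) { CONTEXT.set(value); }
    public static void remove() { CONTEXT.remove(); }

    @Override
    public String format(LogRecord record) {
        String ctx = CONTEXT.get();
        return "[request-context=" + (ctx == null ? "" : ctx) + "] " + delegate.format(record);
    }
}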

Now don't forget to get back and add better logging messages to your code!

M




Tuesday, December 29, 2009

@DataSourceDefinition -- A Hidden Gem from Java EE 6

In the old days, DataSources were configured -- well, they were configured in lots of different ways... That's because there was no 'one way' to do it -- in JBoss, you created an XML file that ended in '-ds.xml' and dumped it in the deploy folder... in Glassfish, you either used the admin console or mucked with the domain.xml file... in WebLogic you used the web console... and this was all well and good -- until I worked with an IT guy who told me just how much of a pain in the ass it was...

Up until then, it wasn't such a big deal to me -- I set it up once, and that was that... then I ran into this guy a few jobs ago who liked to bitch and complain about how much harder it was to deploy our application than the .NET or Ruby apps he was used to... he had to deploy our data source, then he had to deploy our JMS configurations -- only then would our application work... in the other platforms, that was all built into the app (I'll have to take his word for it, since I haven't actually deployed anything in either platform)... I was a bit surprised at first, and then I realized that maybe he had a point... nah, it couldn't be, he must just be having a bad day (lots of us were having bad days back then :) )...

Then I ran into Grails, which is dead simple -- you have a Groovy configuration file that has your db info in it... you even have the ability to specify different 'environments', which can change depending on how you create your archives or run your app... pretty slick...

The Gem

Well, lo and behold, we now have something that's nearly equivalent in Java EE 6 -- the @DataSourceDefinition annotation... it's a new annotation that you can put on a class, and it provides a standard mechanism to configure a JDBC DataSource into JNDI... as expected, it can work with local JNDI scopes or the new global scope, meaning you can have an Environment Configuration that uses this annotation, making it shareable across your server... it works like this:


import javax.annotation.sql.DataSourceDefinition;

@DataSourceDefinition(
    className = "org.apache.derby.jdbc.ClientDataSource",
    name = "java:global/jdbc/AppDB",
    serverName = "localhost",
    portNumber = 1527,
    user = "user",
    password = "password",
    databaseName = "dev-db"
)
public class Config {
    ...
}


As you would expect, that annotation will create a DataSource that will point to a local Derby db, and stick it into JNDI at the global address 'java:global/jdbc/AppDB', which your application, or other applications can refer to as needed... no separate deployment and no custom server-based implementation -- this code should be portable across any Java EE 6 server (including the Web Profile!)...

It's almost perfect!

In typical Java EE style, there's one thing that just doesn't appear to be working the way I'd like it -- it doesn't appear to honor JCDI Alternatives (at least not in Glassfish)... Here's what I'm thinking -- we should be able to have a different Config class for each of our different environments... in other words, we'd have a QAConfig that pointed to a different Derby db, a StagingConfig that pointed to a MySQL db somewhere on another server, and a ProductionConfig that pointed to a kick-ass, clustered MySQL db... we could then use Alternatives to turn on the ones that we want in certain environments with a simple XML change, and not have to muck with code... unfortunately, it doesn't appear to work -- it appears that Glassfish processes them in a nondeterministic order, with (presumably) the class that is processed last overwriting the others that came before it...

There is a solution, though, and it is on the lookup side of the equation -- using JCDI Alternatives, we can selectively lookup the DataSource that we're interested in, and then enable that Managed Bean in the beans.xml file... it's definitely not ideal, since we need to actually inject all of our DataSources into JNDI in all scenarios, but it works, it's something I can live with, and is probably easily fixed in a later Java EE release... Update: Looks like it's in the plan, according to this link -- thanks, Gavin :)

Here's how it works -- first the 'common' case, probably for a Development environment:


import javax.annotation.Resource;
import javax.enterprise.context.RequestScoped;
import javax.sql.DataSource;

@RequestScoped
public class DSProvider {
    @Resource(lookup = "java:global/jdbc/AppDB")
    private DataSource normal;

    public DataSource getDataSource() {
        return normal;
    }
}


Simple enough -- has a field that looks up 'jdbc/AppDB' from JNDI, and provides a getter... now for QA:


import javax.annotation.Resource;
import javax.enterprise.context.RequestScoped;
import javax.enterprise.inject.Alternative;
import javax.sql.DataSource;

@RequestScoped @Alternative
public class QADSProvider extends DSProvider {
    @Resource(lookup = "java:global/jdbc/AppQADB")
    private DataSource normal;

    @Override
    public DataSource getDataSource() {
        return normal;
    }
}


Pretty much the same, except this does the lookup from 'jdbc/AppQADB', and it is annotated with @Alternative... so how do these things work together? Take a look:


import javax.inject.Inject;
import javax.inject.Named;

@Named
public class Test {
    @Inject
    private DSProvider dsProvider;

    ...
}


Again, simple -- we're injecting a DSProvider instance here, and presumably running a few fancy queries... Nothing Dev-ish or QA-ish here at all, which is the beauty of Alternatives... finally, when building the .war file for QA, we turn on our Alternative in the beans.xml, like so:


<beans>
    <alternatives>
        <class>com.mcorey.alternativedatasource.QADSProvider</class>
    </alternatives>
</beans>


You'll notice that this solution requires us to rebuild our .war file for QA, which I obviously don't like -- not to worry, there will be support for this in the Seam 3 Environment Configuration Module, which will effectively create a binding by mapping from one JNDI key to another... I have no idea what the syntax will look like at this point, but it should be pretty straightforward, and will allow us to -- you guessed it -- build our .war once and copy it from place to place without modification...

M



Saturday, December 26, 2009

Say hello to the Seam 3 Environment Configuration module

A funny thing happened after my last post -- I got an email from Dan Allen, from RedHat, with some interest in making my last JCDI Portable Extension -- EnvironmentBindingExtension -- into a Seam 3 Module... pretty cool for a fairly modest effort at finding a new way to solve a problem I've faced in the past... it will be my first official foray into open source (not counting that one line NetBeans patch I submitted in, like, 2000), so it will be interesting to see how this will actually work from the authoring side, as opposed to the user side, especially in a relatively well organized project like Seam...

What it's about

The idea behind the Environment Configuration module is to inject fairly static configuration information into any JEE 6 environment... it's typically done outside of your application, in a separate deployment that isn't regularly redeployed or updated, so you can configure each of your environments separately, including Development, Testing, QA, Staging and Production -- once this is done, you can build your application once (or better yet -- have a Continuous Integration server build it!), and copy the same binary from server to server without having to reconfigure it, ensuring that the archive that you deploy to production is the same exact archive that you tested in QA... this allows you to streamline your deployment processes, removing any possible human error involved in building your code over, and over, and over again (and in some cases, it'll save a lot of time if you have a particularly slow build!)

How's it work? It takes advantage of JNDI -- one of the resources that all JEE servers provide... say, for example, that you have a system that needs to access a database and a filesystem, and has a batch process that runs at a specific frequency -- in development, you'll want to point to a personal Derby database, use a local folder on your Windows box for your filesystem, and run the batch process very frequently for testing... QA is similar, although it has a different database, but say Staging and Production run on a cluster of Linux boxes that access a MySQL database, use a mounted shared drive for their filesystem, and have their batch processes run once an hour...

With the Seam 3 Environment Configuration module, you can create a simple .ear file for each of these environments that contains all of this data -- create them once, deploy them once, and you're good to go... take a look at the following example of a configuration that you could use in development:


import javax.annotation.sql.DataSourceDefinition;

import org.jboss.seam.envconfig.Bind;
import org.jboss.seam.envconfig.EnvironmentBinding;

/**
 * An Environment Configuration for Development
 * @author Matt
 */
@EnvironmentBinding
@DataSourceDefinition(
    className = "org.apache.derby.jdbc.ClientDataSource",
    name = "java:global/jdbc/AppDB",
    serverName = "localhost",
    portNumber = 1527,
    user = "user",
    password = "password",
    properties = {"create=true"},
    databaseName = "dev-db"
)
public class Config {
    @Bind("myApp/fs-root")
    String rootFolder = "C:\\fs-root";

    @Bind("myApp/batch-frequency")
    long batchFrequencyInMs = 60 * 1000;
}


Pretty simple -- toss this class into its own .war file, and it will define three global JNDI entries, one for each of the items mentioned above... your other applications are now free to read these resources in whatever way they need to, even using the standard @Resource(lookup="java:global/myApp/fs-root") notation... a similar configuration class would be created for QA, but perhaps the @DataSourceDefinition annotation will use a MySQL datasource, and likewise for Staging and Production...
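
Just to make that concrete, here's a rough sketch of how a consuming application might read a couple of those entries -- the class itself is made up, but the lookups match the bindings defined above:

import javax.annotation.Resource;
import javax.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class MyAppSettings {
    // Matches the @Bind("myApp/fs-root") entry from the Config class above
    @Resource(lookup = "java:global/myApp/fs-root")
    private String rootFolder;

    // Matches the @Bind("myApp/batch-frequency") entry
    @Resource(lookup = "java:global/myApp/batch-frequency")
    private Long batchFrequencyInMs;

    public String getRootFolder() {
        return rootFolder;
    }

    public long getBatchFrequencyInMs() {
        return batchFrequencyInMs;
    }
}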

What next?

Well, there are a few things on my list of features here, including, but not limited to:
  • Test, Test, Test!
  • Using the @Bind attribute on methods, including @Produces methods
  • Support 'unbinding', if needed
  • Create a Maven Archetype that could be used to quickly and easily setup an Environment Configuration deployment
  • Create an interface of some kind to be able to review the available findings -- either web app or simply JAX-RS based
I am, of course, interested in any ideas or feedback anyone would have, but one goal I would have here is to keep it simple and portable -- what this module is intended to do isn't exactly brain surgery, so I don't think it's necessary to throw in too many 'extras'...

M


Tuesday, December 15, 2009

External CDI Configuration with Portable Extensions

A common requirement for web and enterprise applications is that they have the capability to configure themselves for each environment without modifying the archive itself -- most commonly this is used only for environment-specific attributes such as a test vs. production data store, or for Strings describing a file or directory on the file system which will be different on a developer's box vs. a clustered production server, or perhaps it is a URL that points to your test payment gateway vs. your production gateway... This is the sort of thing that might easily be done with 'Alternatives' in CDI, but many shops put a premium on the ability to package the application once (on a Continuous Integration server, for example) and copy that file from development to integration to QA to staging to production, all of which are on very (very!) different platforms, using different databases, different file systems, and must integrate with different third party environments -- configuring this stuff externally means you don't have to deal with the error prone and possibly time consuming process of building for each environment... unfortunately, this doesn't appear to be a scenario that Alternatives can help us with...

One resource that works really well for this sort of configuration is JNDI... configure these items on your servers' JNDI registry independently from your application, and then have your application read the environment configuration settings from here -- and CDI makes it very easy to manage both sides of this scenario!

Reading from JNDI

The easier side of this is reading the data from JNDI, so let's start there... actually, you don't need CDI at all to start doing this -- the easiest way is to use the '@Resource' annotation provided in Java EE 5 (the 'lookup' attribute used below is a Java EE 6 addition), like so:


import javax.annotation.Resource;
import javax.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class FolderConfig {
    @Resource(lookup = "java:global/folderToPoll")
    private String folderToPoll;

    public String folderToPoll() {
        return folderToPoll;
    }
}


Not much to this -- we have an ApplicationScoped Managed Bean which does a lookup from JNDI, and provides a getter for the result... in this case we're pulling from the new "java:global" context that is provided with Java EE 6 -- there's no reason we couldn't map this to a local context, but frankly, I wanted to fiddle with the global context :)

Ok, now on to something more interesting...

Writing to JNDI

Writing to JNDI is pretty easy -- get an InitialContext and call 'bind'... it's basically an overblown HashMap... for some reason, though, configuring JNDI outside of an application always seems to be more difficult than it should be -- several years ago, I actually had to write a JBoss plugin to do it, even though they had quite an advanced configuration mechanism for the time... all I wanted to do was put String 'A' at Key 'B', but no -- not supported out of the box!

That solution was configured by an XML file, which left me dealing with Strings... this solution is better on two accounts: 1) It can bind any Object into JNDI, and 2) it's a Portable Extension, and should therefore work on any platform... whew!

So here's how it works -- this extension would likely be packaged into a .jar library, and deployed with a simple webapp or ear archive that is packaged separately from the main application... the piece that provides the configuration is actually a class or a set of classes that are annotated to bind certain fields and/or methods into JNDI, like this:


@EnvironmentBinding
public class Env {
    @Inject @Bind(jndiAddress = "adminUser")
    private User admin;

    @Bind(jndiAddress = "test")
    private String test = "This is a test";
}


Pretty straight forward -- what's going on here? Well, first you'll notice that the class is annotated with @EnvironmentBinding -- this is a Stereotype annotation that declares @ApplicationScoped as its default scope, and acts as a marker for the class to be processed later on... further down, we have two fields that are annotated with @Bind and provided with a jndiAddress... this pretty much works as you would expect -- the value of each field is injected into JNDI, with the 'java:global/' prefix added to the front...
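
For reference, the two annotations themselves are tiny -- the declarations below are roughly what I'm using (each in its own file, of course), though the exact details may differ a bit from what ends up in the repository:

import static java.lang.annotation.ElementType.FIELD;
import static java.lang.annotation.ElementType.TYPE;
import static java.lang.annotation.RetentionPolicy.RUNTIME;

import java.lang.annotation.Retention;
import java.lang.annotation.Target;

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.inject.Stereotype;

// Marker stereotype -- classes carrying it pick up @ApplicationScoped as their default scope
@Stereotype
@ApplicationScoped
@Retention(RUNTIME)
@Target(TYPE)
public @interface EnvironmentBinding {
}

// Marks a field whose value should be bound into JNDI at the given address
@Retention(RUNTIME)
@Target(FIELD)
public @interface Bind {
    String jndiAddress();
}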

You'll also notice that one of the elements has its value injected into the field -- this means that the Objects that are bound into JNDI can be derived from a more complex application if need be, so the support that we have here goes well above and beyond the simple XML file configuration that I dealt with way back when...

So how does this thing work? Well, one implementation that I put together has a two part infrastructure to do the job... remember, the end user should never be exposed to the following two items -- the extent of their exposure into this library will be the two annotations shown above...

First, our Portable Extension class:


import java.lang.annotation.Annotation;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

import javax.enterprise.event.Observes;
import javax.enterprise.inject.spi.Bean;
import javax.enterprise.inject.spi.BeanManager;
import javax.enterprise.inject.spi.Extension;
import javax.enterprise.inject.spi.ProcessBean;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class EnvironmentBindingExtension implements Extension {
    private Logger log = LoggerFactory.getLogger(EnvironmentBindingExtension.class);

    private Set<Bean<?>> envBeans = new HashSet<Bean<?>>();
    private BeanManager beanManager;

    public void discoverEnvironmentBindingClasses(@Observes ProcessBean<?> pb, BeanManager bm) throws Exception {
        this.beanManager = bm;

        Bean<?> bean = pb.getBean();
        Class<?> beanClass = bean.getBeanClass();

        Set<Class<? extends Annotation>> sts = bean.getStereotypes();

        for (Class<? extends Annotation> st : sts) {
            if (st.equals(EnvironmentBinding.class)) {
                log.info("Found class annotated with EnvironmentBinding: " + beanClass.getName());

                envBeans.add(bean);
            }
        }
    }

    public Set<Bean<?>> getEnvBeans() {
        return Collections.unmodifiableSet(envBeans);
    }

    public BeanManager getBeanManager() {
        return beanManager;
    }
}


This Extension class is pretty straight forward -- as with all Portable Extensions, it starts by implementing the 'Extension' interface... in this case, we're also creating an Observer method for the 'ProcessBean' event... this event is fired during the application startup lifecycle for every 'Bean' that is discovered in a Bean archive... this will fire for Managed Beans, EJB's, Interceptors, etc, but here, we're specifically looking for beans that have the EnvironmentBinding Stereotype on them -- that is the trigger to further process this class... in this case, our process simply consists of adding the Bean to our 'envBeans' Set for later use... in addition, we provide accessor methods for the BeanManager (which is injected into our Observer method), and the envBeans Set... Now let's have a look at what we do with these Beans...

The next class is the one that does most of the heavy lifting -- it is a Singleton EJB which is marked as a Startup bean, meaning it will be instantiated upon application startup, after the CDI discovery phases are complete... in this case, we have created a PostConstruct method to do our work for us:


import java.lang.reflect.Field;
import java.util.Set;

import javax.annotation.PostConstruct;
import javax.ejb.Singleton;
import javax.ejb.Startup;
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.context.spi.Context;
import javax.enterprise.inject.spi.Bean;
import javax.inject.Inject;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

@Singleton
@Startup
@ApplicationScoped
public class BindingsProcessor {
    private Logger log = LoggerFactory.getLogger(BindingsProcessor.class);

    @Inject
    private EnvironmentBindingExtension bindingExtension;

    @PostConstruct
    public void processBindings() throws Exception {
        Set<Bean<?>> envBeans = bindingExtension.getEnvBeans();

        log.info("Processing EnvironmentBinding Classes: " + envBeans);

        Context appContext = bindingExtension.getBeanManager().getContext(ApplicationScoped.class);
        for (Bean<?> bean : envBeans) {
            Class<?> beanClass = bean.getBeanClass();

            Object beanInstance = resolveInstance(appContext, bean);

            Field[] fields = beanClass.getDeclaredFields();
            for (Field field : fields) {
                if (field.isAnnotationPresent(Bind.class)) {
                    field.setAccessible(true);

                    String jndi = field.getAnnotation(Bind.class).jndiAddress();
                    Object val = field.get(beanInstance);

                    bindValue(jndi, val);
                }
            }
        }
    }

    // Small helper so the Bean<?> wildcard capture plays nicely with Context.get()
    private <T> Object resolveInstance(Context appContext, Bean<T> bean) {
        return appContext.get(bean, bindingExtension.getBeanManager().createCreationalContext(bean));
    }

    // bindValue(jndiAddress, value) does the actual JNDI work -- omitted here (a rough sketch follows below)
}


Hey, wait a minute -- this is pretty simple, too! Iterate over the set of Beans that we've collected, use reflection to find all of the fields that are annotated with @Bind, and bind the value into the appropriate JNDI location... I've even removed the JNDI api work here, because it's not interesting at all...
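
For the curious, though, here's roughly what that bindValue call boils down to, pulled out into a stand-alone helper so it reads on its own -- note that, depending on your server, you may need to create intermediate subcontexts before binding nested names:

import javax.naming.InitialContext;
import javax.naming.NamingException;

public class JndiBinder {
    // Prefix with 'java:global/' (as described above) and rebind -- plain bind() works too,
    // but fails if the name is already taken, which makes redeploying the config archive awkward
    public void bindValue(String jndiAddress, Object value) throws NamingException {
        InitialContext ctx = new InitialContext();
        ctx.rebind("java:global/" + jndiAddress, value);
    }
}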

This could be expanded in a couple of ways, most obviously to allow methods to act as Binders as well... I do want to discuss my choice here of using the Singleton EJB as well, since I've had a few posts recently which talk about doing away with EJB's altogether -- well, initially I was attempting to use the 'AfterBeanDiscovery' or 'AfterDeploymentValidation' events to trigger this loading, but I was having trouble getting an instance of 'Env' that was capable of having its injection points... er... injected...

The Singleton EJB is somewhat of a last-ditch effort to preserve my sanity, but after considering it for a few days, I'm actually alright with it... the Startup Singleton EJB's are something that has interested me for a while, and it proves its usefulness here, but what's more, I'm still able to keep the EJB interface out of the end-user's experience -- they simply need to make use of the EnvironmentBinding annotation, and be on their merry way, as long as they are deployed in a container which supports Singletons (which all Java EE 6 containers do)... that being said, I'm hoping that Gavin will show me what the heck I was doing wrong :)

One other thing -- using an @Inject method on an ApplicationScoped bean doesn't appear to do the trick... reading the spec, it appears to be caused by the fact that ApplicationScoped beans are 'active' during Servlet calls, EJB calls, etc -- meaning the scope doesn't have its own 'startup' lifecycle, but depends on the lifecycle of other Java EE component models... interesting, to be sure -- adding a more generic Startup capability would be a cinch if done similar to how I've done this...

Wow, that was a lot of words

So what does this all mean? Basically, it just shows another way of skinning that old, damn cat that is environment configuration -- but it also shows that it's pretty darn easy to put together some CDI extensions, and when working with the surrounding Java EE specs and resources, that it can be done in a minimal amount of code... in this case, I was looking at a requirement that I often have to support external configuration -- one that CDI doesn't accommodate out of the box... with a few lines of code, it turned out to be possible to break that box open and stuff some more toys inside :)

Finally, the more complete code samples can be found here -- the EnvironmentBinding project has the core code, the TestEnvironmentConfig project shows a test web application that could be used to create the binding configuration, and the EnvTest project is an application which makes use of the JNDI entries... have fun!

M



Saturday, December 12, 2009

JSR-299 Tx Interceptor code + JAX RS Sample

Quick update -- I've thrown the source code to the Transactional Interceptor from my previous post into a source repository, along with a sample webapp that shows how it can be used... they're both maven projects, and should be easy enough to download and fiddle around with -- I've tested it in Glassfish v3, but it should work in JBoss just as well...

I'm going to use this repository for a number of random tests, experiments, and whatever it is I'm fiddling with at the time -- hopefully there'll be a few more CDI projects up there, or whatever else it is that strikes my fancy at the time :)

Update: I've just thrown together a sample that shows how CDI can be combined with a JAX RS interface as well... cool, and simple!
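
If you just want the flavor of it without grabbing the sample, the combination boils down to something like this -- the GreetingService here is a made-up CDI bean, but any injectable bean will do:

import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;

@Path("/hello")
@RequestScoped
public class HelloResource {
    // GreetingService is hypothetical -- the point is simply that @Inject works in a JAX-RS resource
    @Inject
    private GreetingService greetingService;

    @GET
    @Produces("text/plain")
    public String sayHello() {
        return greetingService.greet();
    }
}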

M


Wednesday, December 9, 2009

Thoughts on JSR 330

I'm not sure what to think about JSR-330 -- what do you think?

A quick caveat before I begin -- I'm playing devil's advocate here, so I have no idea if I believe any of the gibberish that I'm about to write :)

On one hand, I think it's Mostly Harmless... CDI/JSR-299 is obviously the default implementation of the spec, and perhaps having a very small portion of the DI capability abstracted so that we have the choice to use another framework if we want isn't necessarily a bad thing (even if standardizing injection targets isn't exactly ground-breaking value) -- after all, if it weren't for those 'outside' framework writers, we might not have gotten EJB 3, and all the improvements that came along with Java EE 5... perhaps JSR 330 is a way to bring those folks inside of the Java EE fold, so it's not another three years before the standards catch up with private innovation...

On the other hand, is it at all likely that an application would be able to just replace the DI capability of CDI with, say, Spring or Guice, but still use the contexts, interceptors, extensions, etc? After all, Dependency Injection is actually a pretty small part of what the CDI spec provides... Perhaps, if one were to write a CDI Portable Extension that basically vetoes all bean discovery, and instead uses one of the other libraries to do the injection, but that seems like a stretch to me...
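
For what it's worth, the hook for that thought experiment does exist -- ProcessAnnotatedType lets an extension veto types as they're discovered... a bare-bones (and entirely hypothetical) 'veto everything' extension would look something like this, though wiring Spring or Guice in behind it is a whole other story:

import javax.enterprise.event.Observes;
import javax.enterprise.inject.spi.Extension;
import javax.enterprise.inject.spi.ProcessAnnotatedType;

// Sketch only: vetoes every type CDI discovers, leaving injection to some other framework
public class VetoEverythingExtension implements Extension {
    <T> void vetoAll(@Observes ProcessAnnotatedType<T> pat) {
        pat.veto();
    }
}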

What seems more likely is that another implementation would simply not use CDI, which can easily be done by not providing a beans.xml file, and using some other mechanism, like the Spring WebApplicationContext -- in that case, is there value in what the JSR-330 spec provides? Has the addition of this spec, which provides little value in and of itself, introduced too much confusion?

My original instinct was that it's a good addition to the Java EE fold, but I've heard a few voices of opposition (including from those who don't have a big stake in the outcome)... now I think, perhaps, that I could take it or leave it -- I haven't been convinced that it's destructive, but I'm also not convinced that it's worth it :)

What do y'all think?

M