2011 has been a great year for the Tomcat Expert community. After almost two years of operation, Tomcat Expert has hit its stride, delivering an array of new information and keeping you up to date with the newest releases of Apache Tomcat 6 and Apache Tomcat 7. With the addition of two new Tomcat Expert Contributors (Channing Benson and Daniel Mikusa), the Tomcat Expert community continues to build on its reputation as the leading source for fresh perspectives and new information on how best to leverage Apache Tomcat in the enterprise.
As a VMware engineer dedicated to building Apache Tomcat and vFabric tc Server, I get the opportunity to see a lot of issues across the official Apache Tomcat public mailing lists, as well as VMware’s private professional support queue for both Apache Tomcat and tc Server. As with any software issue tracker, many of the issues logged could be avoided with a slightly better understanding of how Tomcat applications work. Here are a few tips that may be useful to keep in mind:
There are two different types of context.xml files: one is global, and the other is specific to each web application. The problem with editing the global context.xml file is, as its name implies, that it affects every web application running on that Tomcat instance. For instance, if you have 10 web applications and create a new JNDI datasource with 50 connections to the database in the global context.xml file, you have essentially created 10 JNDI datasources with a total of 500 connections to your database, and have likely completely overwhelmed it. If you want to add a datasource to a single application, remember to create the datasource in the application-level context.xml file; doing so avoids serious performance problems.
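As a minimal sketch, an application-level datasource lives in the WAR's own META-INF/context.xml, so only that one application gets the pool (the JNDI name, driver, URL, and credentials below are illustrative placeholders, and maxActive is the pool-size attribute used by the DBCP pool in Tomcat 6/7):

```xml
<!-- META-INF/context.xml inside the WAR: this datasource exists only for this app -->
<Context>
    <!-- Names, credentials, and the URL are placeholders for illustration -->
    <Resource name="jdbc/AppDB"
              auth="Container"
              type="javax.sql.DataSource"
              driverClassName="com.mysql.jdbc.Driver"
              url="jdbc:mysql://dbhost:3306/appdb"
              username="appuser"
              password="secret"
              maxActive="50"/>
</Context>
```

The application then looks the pool up under java:comp/env/jdbc/AppDB. Because the definition lives in the application's own context.xml, the 50-connection pool is created once for this application, rather than once for every application deployed on the instance.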
Occasionally companies will deploy 3 or 4 related applications on a Tomcat server that are designed to share a single datasource. As described above, placing the datasource definition either once in the global context.xml file or in 3 or 4 application-specific context.xml files will always create multiple instances of that datasource. To truly share a single datasource, it is necessary to put the definition of the datasource into the server.xml file, and then place a single resource link into the global context.xml file. This link ensures only one instance of the datasource is ever created, and when any application goes to use it, it always uses the same single instance.
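The shared setup can be sketched as two fragments: one datasource definition in server.xml's GlobalNamingResources, and one ResourceLink in the global context.xml that every application resolves to that same instance (the names and connection details are illustrative assumptions):

```xml
<!-- server.xml: the one and only datasource instance -->
<GlobalNamingResources>
    <Resource name="jdbc/SharedDB"
              auth="Container"
              type="javax.sql.DataSource"
              driverClassName="com.mysql.jdbc.Driver"
              url="jdbc:mysql://dbhost:3306/shareddb"
              username="appuser"
              password="secret"
              maxActive="50"/>
</GlobalNamingResources>
```

```xml
<!-- conf/context.xml: every web application gets a link to that same instance -->
<Context>
    <ResourceLink name="jdbc/SharedDB"
                  global="jdbc/SharedDB"
                  type="javax.sql.DataSource"/>
</Context>
```

Each application still looks the pool up under java:comp/env/jdbc/SharedDB, but the link resolves to the single server-wide instance instead of creating a new pool per application.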
Upgrading web applications can be very expensive if your storefront is the web. Weekend maintenance windows, or downtime in general, can give an entire company heartburn. Survey data shows that web application downtime can cost some companies up to $72,000 per minute. Yet the cost of not constantly rolling out new features and bug fixes can equally penalize a company in today's competitive online markets.
Previously, to upgrade an application on Tomcat and avoid downtime, system administrators would have to set up multiple instances of Tomcat and do some very clever stuff with load balancers. This means extra hardware costs as a permanent part of the company’s infrastructure.
Now with the advent of parallel deployment in Tomcat 7, you can have multiple versions of the same application installed at the same time on a single server. Users with active sessions can continue to use the old application, and new users will be routed to the new version. This way, no user sessions are interrupted, and the old application can be gracefully phased out.
Parallel deployment is a function of the Context Container. The Context element represents a web application, which in turn specifies the context path to a particular Web Application Archive (WAR) file that is the application logic. Parallel deployment allows you to deploy multiple versions of a web application with the same context path concurrently. When choosing which version of the application to use for any given request, Tomcat will route requests that belong to an existing session to the version against which that session was created, and route requests with no session to the most recent version.
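As a sketch, deploying two versions side by side comes down to the WAR file name: Tomcat 7 treats everything after the ## in the file name as the version string for the same context path. The directory and file names below are stand-ins for a real $CATALINA_BASE/webapps and real builds:

```shell
# Stand-in directory for $CATALINA_BASE/webapps; all names are illustrative
WEBAPPS=/tmp/demo-webapps
mkdir -p "$WEBAPPS"

# Placeholder WARs standing in for two builds of the same application
touch myapp-build1.war myapp-build2.war

# Both deploy at context path /myapp; "##0002" sorts as the newer version
cp myapp-build1.war "$WEBAPPS/myapp##0001.war"
cp myapp-build2.war "$WEBAPPS/myapp##0002.war"

ls "$WEBAPPS"
```

With both WARs in place, existing sessions stay on myapp##0001 while new sessions land on myapp##0002; once the old version has no active sessions left, it can be undeployed.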
For organizations with large publicly searchable websites, such as e-commerce companies with large product catalogues or companies with active online communities, web crawlers or bots can trigger the creation of many thousands of sessions as they crawl these large sites. Because these bots normally crawl sites without relying on cookies or session IDs, they can create a session for each page crawled, which, depending on the size of the site, may result in significant memory consumption. New in Apache Tomcat 7, a Crawler Session Manager Valve ensures that crawlers are associated with a single session - just like normal users - regardless of whether or not they provide a session token with their requests.
One of the roles I play in the Apache Tomcat project is managing the issues.apache.org servers, which run the Apache issue trackers: two instances of Bugzilla and one instance of JIRA. Not surprisingly, JIRA runs on Tomcat. A few months ago, while looking at the JIRA management interface, I noticed that we were seeing around 100,000 concurrent sessions. Given that there are only 60,000 registered users and fewer than 5,000 active users in any given month, this number appeared extremely inflated.
After a bit of investigation, the access logs revealed that when many of the web crawlers (e.g., googlebot, bingbot) were crawling the JIRA site, they were creating a new session for every request. For our JIRA instance, this meant that about 95% of the open sessions were left over from a bot making a single request. For instance, a bot requesting 100 pages would open 100 sessions. Each of these sessions would hang around in memory for about 4 hours, chewing up tremendous memory resources on the server.
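Enabling the valve is a small configuration change. A sketch of what it looks like in conf/context.xml follows; the crawlerUserAgents pattern shown here only illustrates the idea (the valve ships with a sensible default regex), and the inactivity interval of 60 seconds is an assumption, not a recommendation:

```xml
<!-- conf/context.xml: share one session across all requests from known crawlers -->
<Context>
    <Valve className="org.apache.catalina.valves.CrawlerSessionManagerValve"
           crawlerUserAgents=".*[bB]ot.*|.*Yahoo! Slurp.*"
           sessionInactiveInterval="60"/>
</Context>
```

Any request whose User-Agent matches the pattern is mapped onto a single shared session, so a bot requesting 100 pages now accounts for one session instead of 100.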
2010 has been an exciting year for the Tomcat Expert community site. Created by the Apache Tomcat Experts at SpringSource, Tomcat Expert was launched in March to improve the adoption, performance and value of Apache Tomcat for enterprise users. After almost ten months of operation, we’ve been able to provide you with content from Tomcat Expert Contributors weighing in on top Apache Tomcat news and topics, including several relating to June's release of Tomcat 7.0.0 Beta, the first Tomcat 7 release. As the year winds down, we've put together a list of the most popular blog posts of the year. Additionally, we're asking you to tell us what topics you'd like to see covered more in 2011 with a content request form below.
I’ve been sharing some thoughts about what’s become a significant trend in many IT organizations, and in particular with my clients: converting Java applications from JEE Application Servers to Tomcat, and more typically Tomcat plus add-ons.
Many IT organizations are re-thinking their commitment to commercial JEE Application Servers, due to both challenging business environments that drive the need for more cost-effective application architectures and the need to transition to more responsive, agile applications development. When IT organizations talk about “migrating” their applications, I’ve noted that they generally are focusing on one or more of three distinct situations. These are:
I’ve been focusing on the migration of existing JEE applications to the most popular of the lightweight containers, Tomcat. There are many excellent reasons to consider moving applications off the commercial JEE servers sold by Oracle/BEA, IBM, etc. While we are focusing on the migration process, many of the business and technical decision factors apply equally well to the second and third situations.
This time, I will be discussing the technologies involved in migrating JEE application code from commercial JEE servers to Tomcat. I’d like to thank the kind (and very expert) folks at SpringSource, as well as a number of other friends around the industry, for their valuable insight regarding the technologies involved. Any errors (and opinions) are mine alone. Additionally, some of the material draws on information published by SpringSource and other open source materials found on the internet.
In my prior blog on migrating JEE to Tomcat, I discussed the fact that organizations are increasingly migrating from JEE Application Servers to other lighter weight, simpler, faster, more scalable, and definitely less costly Java deployment environments. Today, I’ll take a more detailed look at the reasons for such a change and the associated costs.
Organizations that choose to migrate existing applications to a new application server are typically motivated by one or more of the following goals:
File locking in Windows may prevent the directory from being deleted.
File locking on Windows is different from file locking on Unix in that, on Windows, a file can be locked on read. If you are redeploying a second version of an already deployed web application, you may see something like the following:
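One common mitigation, sketched below, is to set the anti-locking attributes on the application's Context so that Tomcat copies JARs and resources to a temporary location before serving them, keeping the originals unlocked for redeployment (these attributes apply to Tomcat 6 and 7 and trade away some startup and redeployment speed):

```xml
<!-- META-INF/context.xml: work around Windows read locks during redeploy -->
<Context antiResourceLocking="true" antiJarLocking="true"/>
```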
Setting up Apache Web Server and Tomcat Application Server for load balancing using mod_proxy_balancer.
This example uses mod_proxy_balancer as the load balancer. This configuration is useful for applications that are stateless and therefore do not require clustering or sticky-sessions. For more information, you can also check the page:
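A minimal httpd configuration along those lines might look like the sketch below; the balancer name, host names, and ports are assumptions, module file paths vary by platform, and since the applications are stateless, no sticky-session or route configuration is needed:

```apache
# Load the proxy modules (paths vary by distribution)
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so

# Two stateless Tomcat instances behind a single balancer
<Proxy balancer://mycluster>
    BalancerMember http://tomcat1.example.com:8080
    BalancerMember http://tomcat2.example.com:8080
</Proxy>

ProxyPass        /myapp balancer://mycluster/myapp
ProxyPassReverse /myapp balancer://mycluster/myapp
```

Note that on httpd 2.4 and later the load-balancing algorithm lives in its own module (for example mod_lbmethod_byrequests), which must also be loaded.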
I’ve been researching one of the most interesting trends in IT development and deployment architectures: the migration of development/deployment architectures from JEE Application Servers to lightweight Java containers. Many IT organizations have been re-thinking their commitment to commercial JEE Application Servers, due to both challenging business environments that drive the need for more cost-effective application architectures and, more importantly, the need to transition to more responsive, agile applications development. When we hear IT organizations talk about “migrating” their applications, they generally are focusing on one or more of three distinct situations. These are:
In the next few blogs, I'll be focusing on the migration of existing JEE applications to the most popular of the lightweight containers, Apache Tomcat. There are many excellent reasons to consider moving applications off commercial JEE servers sold by Oracle/BEA, IBM, etc. While we are concentrating on the JEE to Tomcat migration process, many of the business and technical decision factors apply equally well to the second and third situations, and many IT organizations are tackling some or all of them in parallel.