The idea was to provide multiple web applications running inside a Tomcat instance with configuration properties from config files located outside the applications' WAR files. I'm currently trying to provide these configuration properties with a global naming resource (defined inside server.xml), which is consumed by the applications, to keep the configuration container-independent and to avoid direct file access from a web application. So far almost everything works perfectly! Except that Tomcat apparently caches resources which are looked up via JNDI.
There is a way to make this happen. It is not a cache setting as such; the behavior is controlled by whether the resource is defined as a singleton or not. Take the default example that ships with Tomcat:
<Resource name="UserDatabase" auth="Container"
          type="org.apache.catalina.UserDatabase"
          description="User database that can be updated and saved"
          factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
          pathname="conf/tomcat-users.xml"/>

and add singleton="false", which tells Tomcat to consult the factory on every JNDI lookup instead of caching a single instance:

<Resource name="UserDatabase" auth="Container"
          singleton="false"
          type="org.apache.catalina.UserDatabase"
          description="User database that can be updated and saved"
          factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
          pathname="conf/tomcat-users.xml"/>
Then, in your own code, you are able to decide whether getObjectInstance returns a new object or a cached one.
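As a sketch of that idea, here is a minimal custom javax.naming.spi.ObjectFactory. The class name and the Properties payload are hypothetical placeholders; a real factory would (re)read your external config file instead:

```java
import java.util.Hashtable;
import java.util.Properties;
import javax.naming.Context;
import javax.naming.Name;
import javax.naming.spi.ObjectFactory;

// Hypothetical factory: with singleton="false" on the Resource element,
// Tomcat calls getObjectInstance on every JNDI lookup, so this method is
// where you choose between handing back a cached object or a fresh one.
public class ConfigPropertiesFactory implements ObjectFactory {
    @Override
    public Object getObjectInstance(Object obj, Name name, Context nameCtx,
                                    Hashtable<?, ?> environment) throws Exception {
        Properties props = new Properties();
        // Placeholder payload; a real factory would load the external file here.
        props.setProperty("loaded.at", Long.toString(System.currentTimeMillis()));
        return props;
    }
}
```

The factory is referenced from the Resource element's factory attribute; returning a freshly built object each time is what defeats the apparent caching.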
I have observed that the Tomcat 7 process memory (private bytes) was initially 1.2 GB during startup and grew to 3.5 GB after running a 100-user test for 4 hours; my server has only 4 GB of RAM. The private bytes were not released even after stopping the test. Could you please let us know of any configuration that might help, or kindly analyze the situation and provide your suggestions/solutions?
Lacking more specific information about the behavior you are seeing and about the environment in which you run your tests, and since this is a testing environment, my suggestion would be to run your tests with a profiler attached to Tomcat (YourKit is an excellent profiler). The profiler will allow you to look for memory problems in your application.
That's not to say there is definitely a problem here. It is entirely possible that the heap could grow from 1.2 GB to 3.5 GB legitimately; it just depends on your JVM options and the memory demands of your application.
We recently migrated to the Apache Tomcat JDBC connection pool.
We are using v18.104.22.168 of the connection pooling mechanism with Tomcat 6.0.33 on Oracle 10g.
Our connection pools are configured as follows:
testOnBorrow="true" validationQuery="select 1 from dual"
in order to validate our connections.
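For context, a complete tomcat-jdbc pool definition with these validation settings might look like the following sketch. The resource name, URL, credentials, and sizing values are placeholders; the factory attribute is what selects the Tomcat JDBC pool rather than the older commons-dbcp pool:

```xml
<Resource name="jdbc/MyDS" auth="Container"
          type="javax.sql.DataSource"
          factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
          driverClassName="oracle.jdbc.OracleDriver"
          url="jdbc:oracle:thin:@dbhost:1521:ORCL"
          username="scott" password="tiger"
          initialSize="5" maxActive="20" maxIdle="10"
          testOnBorrow="true"
          validationQuery="select 1 from dual"
          validationInterval="30000"/>
```

validationInterval (in milliseconds) keeps testOnBorrow from running the validation query on every single borrow.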
We are trying to deploy 2 versions of the same application on Tomcat 7 to test its parallel deployment features for our application.
Not meeting with success, I was looking for a definitive guide for parallel deployment on Tomcat 7.
Our application is JSP-based, but it changes often.
The definitive reference for parallel deployment comes directly from the Apache Software Foundation: http://tomcat.apache.org/tomcat-7.0-doc/config/context.html#Parallel_deployment.
Additionally, Mark Thomas wrote a summary article explaining how this feature works on the Context Container and across clusters: http://www.tomcatexpert.com/blog/2011/05/31/parallel-deployment-tomcat-7.
Yes, this feature has been available in Tomcat 7 for a while. Rather than creating multiple <Host> elements, you deploy several versions of the same application side by side by appending a ##<version> suffix to each WAR file name; all versions share the same context path, and new sessions are routed to the latest version.
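The mechanism the linked documentation describes is driven purely by WAR file naming. The following sketch (placeholder file names in a throwaway directory) shows the layout Tomcat 7 expects:

```shell
# Tomcat 7 parallel deployment: the text before '##' is the context path,
# the text after is the version string. Both files serve the path /myapp;
# existing sessions stay on their version, new sessions get the latest one.
webapps=$(mktemp -d)                 # stand-in for $CATALINA_BASE/webapps
touch "$webapps/myapp##001.war"      # currently deployed version
touch "$webapps/myapp##002.war"      # newly deployed version
ls "$webapps"
```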
2011 has been a great year for the Tomcat Expert community. After almost two years of operation, Tomcat Expert has hit its stride, delivering an array of new information as well as keeping you up to date with the newest releases of Apache Tomcat 6 and Apache Tomcat 7. With the addition of two new Tomcat Expert contributors (Channing Benson and Daniel Mikusa), the Tomcat Expert community continues to build on its reputation as the leading source for fresh perspectives and new information on how best to leverage Apache Tomcat in the enterprise.
My last article for Tomcat Expert described various aspects of the Valve construct of Apache Tomcat: some basics about how to implement and configure a valve, and an example of where things can go wrong if you are unaware of the operational details. For those of you who don’t remember (or didn’t read the article in the first place), the key takeaway was that because Tomcat valves are maintained as a chain, the order in which the valves are added to the configuration (typically in conf/server.xml) is significant, and the code that implements a valve must conclude with a call that invokes the next valve in the chain.
This time we’re going to lighten things up a bit with a general survey of what valves are available and how one might put them to use. Given the imminent arrival of the winter holiday season, one might think of it as the Apache Tomcat Valve Gift Catalog. Peruse it and find just the right gift for your favorite Tomcat administrator.
For each valve, I’ll describe its functionality and its most important configuration parameters, and point out any configuration subtleties that might not be apparent from the stock documentation, which can be found at http://tomcat.apache.org/tomcat-7.0-doc/config/valve.html. If there are any less well-known attributes or “secret” parameters associated with a valve, I’ll describe them.
The AccessLogValve can be configured at the context, host, or engine level and will log requests made to that container to a file. Attributes of AccessLogValve control the directory, the filename, and the format of the data to be written, including the ability to write information about headers (incoming and outgoing), cookies, and session or request attributes.
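A typical host-level configuration, essentially the example that ships commented in the stock server.xml (the directory and pattern values are common defaults, not requirements):

```xml
<Valve className="org.apache.catalina.valves.AccessLogValve"
       directory="logs" prefix="localhost_access_log." suffix=".txt"
       pattern="%h %l %u %t &quot;%r&quot; %s %b"/>
```

The pattern shown is the standard "common" log format; %{xxx}i, %{xxx}o, %{xxx}c, and %{xxx}s add incoming headers, outgoing headers, cookies, and session attributes respectively.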
This article is the second in a series discussing how to performance-tune the JVM to better run Apache Tomcat. In the first article, we discussed the basic goals and how to monitor the performance of your JVM.
If you have not read the first article, I would strongly suggest doing so before continuing with this one. It is important to understand and follow the processes outlined there when performance tuning; they will both save you time and keep you out of trouble. With that, let's continue.
At this point we've covered the basics and are ready to begin examining the JVM options that are available to us. Please note that while these options can be used for any application running on the JVM, this article will focus solely on how they can be applied to Tomcat. The use of these options for other applications may or may not be appropriate.
Note: For simplicity, it is assumed that you are running an Oracle Hotspot JVM.
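As a concrete anchor for what follows, Tomcat usually receives its JVM options through the CATALINA_OPTS environment variable, most cleanly in a bin/setenv.sh file. The values below are illustrative placeholders to be tuned, not recommendations:

```shell
# bin/setenv.sh -- sourced by catalina.sh at startup if it exists.
# Equal initial/maximum heap avoids resize pauses; MaxPermSize applies to
# the pre-Java-8 HotSpot permanent generation assumed by this article.
CATALINA_OPTS="-Xms1024m -Xmx1024m -XX:MaxPermSize=256m"
export CATALINA_OPTS
```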
Have you ever seen this scenario before? A user has deployed an application to a Tomcat server. The application works great during testing and QA; however, when the user moves the application into production, the load increases and Tomcat stops handling requests. At first this happens occasionally and for only 5 or 10 seconds per occurrence. It's such a small issue that the user might not even notice or, if noticed, may choose to just ignore the problem. After all, it's only 5 or 10 seconds and it's not happening very often. Unfortunately for the user, as the application continues to run, the problem continues to occur and with greater frequency, possibly until the Tomcat server just stops responding to requests altogether.
There is a good chance that at some point in your career, you or someone you know has faced this issue. While there are multiple possible causes of this problem, such as blocked threads, too much load on the server, or even application-specific problems, the one cause that I see over and over is excessive garbage collection.
As an application runs it creates objects. As it continues to run, many of these objects are no longer needed. In Java, the unused objects remain in memory until a garbage collection occurs and frees up the memory used by the objects. In most cases, these garbage collections run very quickly, but occasionally the garbage collector will need to run a “full” collection. When a full collection is run, not only does it take a considerable amount of time, but the entire JVM has to be paused while the collector runs. It is this “stop-the-world” behavior that causes Tomcat to fail to respond to a request.
Fortunately, there are some strategies which can be employed to mitigate the effects of garbage collection; but first, a quick discussion about performance tuning.
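Before reaching for any tuning flag, it helps to make those pauses visible. On a pre-Java-9 HotSpot JVM (the assumption stated above), GC logging can be enabled with a few standard options; the log path here is a placeholder:

```shell
# Append GC logging to whatever options are already set. Stop-the-world
# pauses show up as "Full GC" entries with their duration in the log file.
CATALINA_OPTS="$CATALINA_OPTS -verbose:gc -XX:+PrintGCDetails \
  -XX:+PrintGCTimeStamps -Xloggc:/var/log/tomcat/gc.log"
export CATALINA_OPTS
```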
Valves have been an integral feature of Apache Tomcat since version 4 was introduced over seven years ago. As their name suggests, valves provide a way of inserting functionality within a pipeline, in this case, the Tomcat request / response stream. One simply writes a subclass of
org.apache.catalina.valves.ValveBase or a class that implements the
org.apache.catalina.Valve interface and then adds an XML element with the valve’s class name to the appropriate configuration file (in most cases as a sub-element of Engine in server.xml). Then, at some point (we’ll come back to that) in the processing of a request, your valve’s invoke method will be called. The invoke method is passed both the Request and the Response objects and is free to do whatever it likes with them (though having the power doesn’t mean it’s a good idea to use it).
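For instance, wiring a hypothetical custom valve (the class name com.example.TimingValve is a placeholder) into conf/server.xml at the engine level looks like this:

```xml
<Engine name="Catalina" defaultHost="localhost">
  <!-- Runs for every request handled by every host on this engine -->
  <Valve className="com.example.TimingValve"/>
  <Host name="localhost" appBase="webapps"
        unpackWARs="true" autoDeploy="true"/>
</Engine>
```

Moving the Valve element inside a Host or Context narrows its scope accordingly.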
You may be familiar with this paradigm through servlet filters used by web applications to do application-specific processing of the request / response pipeline. The key distinction between servlet filters and Tomcat Valves is that Valves are applied and controlled through the configuration of the application server. Depending on the container definition where the Valve element appears in the Tomcat configuration, the valve could be configured for all applications on the application server, a subset of applications, or even a single application (by locating the Valve element within a given Context).
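The chain-of-responsibility shape shared by valves and filters can be sketched in a few lines. Note that these are simplified stand-in types written for illustration, not Tomcat's real org.apache.catalina classes:

```java
// Each valve does work, hands the request down the chain, then optionally
// does more work on the way back out -- the invoke/getNext pattern that
// Tomcat valves follow.
interface Valve {
    void setNext(Valve next);
    void invoke(StringBuilder trace);   // trace stands in for request/response
}

abstract class ValveBase implements Valve {
    private Valve next;
    public void setNext(Valve next) { this.next = next; }
    protected void invokeNext(StringBuilder trace) {
        if (next != null) next.invoke(trace);
    }
}

class LoggingValve extends ValveBase {
    public void invoke(StringBuilder trace) {
        trace.append("log>");   // before the rest of the chain runs
        invokeNext(trace);      // omitting this call would stall the request
        trace.append("<log");   // after the chain unwinds
    }
}

class TerminalValve extends ValveBase {
    public void invoke(StringBuilder trace) {
        trace.append("app");    // stands in for the deployed application
    }
}

public class PipelineDemo {
    public static void main(String[] args) {
        Valve first = new LoggingValve();
        first.setNext(new TerminalValve());
        StringBuilder trace = new StringBuilder();
        first.invoke(trace);
        System.out.println(trace);   // prints: log>app<log
    }
}
```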
This is a simple, powerful model that has been written about extensively. A Google search on “tomcat valve” turns up a multitude of descriptions, examples, and how-tos. The reference page on “The Valve Component” that ships with Apache Tomcat 7 documents the mechanism thoroughly, along with descriptions of the valve implementations that ship by default. So why yet another article on the subject? What do I hope to add to the canon?
The effort started simply enough: The plan was to demonstrate the configuration and use of the
ThreadDiagnosticsValve that ships with VMware’s vFabric tc Server, a commercial application server based on Apache Tomcat. Additionally, I would write and configure a custom valve, in this case one to exercise tc Server’s ThreadDiagnosticsValve, so that I could demonstrate its use and effects without actually needing a misbehaving application to trigger it. Not exactly Nobel material, but I thought it would be a useful adjunct to the existing documentation and an interesting exercise.
However, it didn’t go exactly as planned, and looking at why it didn’t is probably as helpful for understanding Tomcat valves as the original exercise would have been.