Background: The client's UI application is a dashboard consisting of a banner (with navigation breadcrumbs and other controls) that calls into a Pentaho dashboard to render content below the banner. The application is then displayed as a widget within the Ozone Widget Framework (OWF).
For their development environment and POC, OWF/CAS needed to be installed. Following the OWF installation guides (shipped with the OWF distribution), we had to create and use a self-signed certificate because the client did not have a certificate from a Certificate Authority. The Tomcat instance for OWF/CAS specifies its keystore as $OWF_HOME/certs/keystore.jks; the self-signed certificate gets imported into that keystore.
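For reference, a self-signed certificate can be generated directly into that keystore with keytool; the alias and validity below are illustrative, so follow the values in the OWF installation guide:
keytool -genkey -alias owf -keyalg RSA -validity 365 -keystore $OWF_HOME/certs/keystore.jks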
To configure Pentaho, first ensure Pentaho is fully running and operational. OWF/CAS also uses HSQLDB, so there may be a port conflict between the Pentaho and OWF HSQLDB instances. The easiest fix, if possible, is to remove the sample data. Follow the instructions on InfoCenter, but also delete the data connection definition within the datasource table. If the datasource is NOT deleted, Tomcat hangs on startup when attempting to connect to HSQLDB: no error message is displayed or written to the log files, and Tomcat never completes startup.
The second step is to configure Pentaho to use SSL. Once again, for this client, we had to use a self-signed certificate. These instructions are also on InfoCenter. After creating and importing the certificate, remember to modify tomcat/conf/server.xml to enable the SSL connector (8443). Once complete, test Pentaho running on 8443.
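For reference, the enabled connector in tomcat/conf/server.xml looks something like the following sketch; the keystore path and password are placeholders for your environment:
<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
           maxThreads="150" scheme="https" secure="true"
           clientAuth="false" sslProtocol="TLS"
           keystoreFile="conf/keystore.jks" keystorePass="changeit"/>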
The third step is to run the Ant script that modifies the Pentaho configuration files to perform SSO via CAS. Before proceeding, make a backup of the Pentaho directory or snapshot the VM. Once again, the steps to switch Pentaho to CAS are documented on InfoCenter. When specifying the cas.authn.provider property, I used 'memory'. I later modified Pentaho to use JDBC to retrieve user details (authorities).
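For reference, the setting as I used it (the name of the properties file consumed by the Ant script varies by Pentaho version):
cas.authn.provider=memory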
After starting up, navigating to the Pentaho User Console (PUC) should result in a redirect to the CAS login page. Enter your credentials as defined within the OWF help guides (testUser1, testAdmin1). If using CA certificates, everything 'should' work.
But...if you see the casFailed JSP page in the browser, you may also find the following exception in the log files:
23:19:26,894 ERROR [Cas20ServiceTicketValidator] javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
Searching the net, you'll find many blogs and notes on this exception. The gist of them is that communication between the two servers is not trusted. If CA certificates were used, the certificates would be trusted; because we used self-signed certs, we have to perform a few more steps. The certificates within the OWF/CAS keystore need to be imported into the Pentaho keystore. List all of the certs in the OWF/CAS keystore using the following command, executed from the $OWF_HOME/certs directory:
keytool -list -keystore keystore.jks
Then export the certificates listed using their aliases. For example:
keytool -exportcert -keystore keystore.jks -alias owf -file owf.cer
Now import those certificate files into Pentaho's keystore, located at $PENTAHO_HOME/java/lib/security/cacerts. Using the following command, import the OWF/CAS certificates into Pentaho's keystore, repeating as necessary for each certificate:
keytool -import -keystore cacerts -storepass changeit -noprompt -alias owf -file ${PATH_TO_OWF_CERT_FILES}/owf.cer
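You can verify each import with:
keytool -list -keystore cacerts -storepass changeit -alias owf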
Restart Pentaho, and the integration between OWF/CAS and Pentaho using self-signed certs is complete. Users can now create OWF widgets pointing to Pentaho content (the Pentaho User Console, a dashboard, a report, etc.), and the widget will display seamlessly without requiring the user to log into Pentaho.
Friday, June 14, 2013
Wednesday, December 23, 2009
Monitoring in JBoss
While at a client site or within a testing environment, have you ever started to wonder how many users are on the application? How is your application doing with regard to memory (heap size)? Are you close to using all of the database connections in the connection pool? For some of these questions, your application container may provide a status page (Tomcat) or monitoring screens (WebLogic).
To facilitate recording these statistics when using JBoss, JBoss includes the ability to log and monitor JMX MBean values, and it's not difficult to install. Once values are being logged, you no longer have to keep refreshing the JMX console to see updated values.
To install and monitor your web application(s), perform the following steps:
- Copy $JBOSS/docs/examples/jmx/logging-monitor/lib/logging-monitor.jar into $JBOSS/server/server_name/lib
- Create monitor XML files for the JMX MBeans you want to watch (samples below), for example:
  - DB connections: in use, available, max connections in use
  - JVM activity: heap size, threads
- Copy the monitor XML files into $JBOSS/server/server_name/deploy
DB Connection Monitoring Sample
Here's the XML necessary to monitor a JDBC connection pool (XML comments omitted)
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE server PUBLIC
    "-//JBoss//DTD MBean Service 4.0//EN"
    "http://www.jboss.org/j2ee/dtd/jboss-service_4_0.dtd">
<server>
  <mbean code="org.jboss.services.loggingmonitor.LoggingMonitor"
         name="jboss.monitor:type=LoggingMonitor,name=MY-DSMonitor">
    <attribute name="Filename">${jboss.server.home.dir}/log/my-ds.log</attribute>
    <attribute name="AppendToFile">false</attribute>
    <attribute name="RolloverPeriod">DAY</attribute>
    <attribute name="MonitorPeriod">10000</attribute>
    <attribute name="MonitoredObjects">
      <configuration>
        <monitoredmbean name="jboss.jca:name=MY-DS,service=ManagedConnectionPool"
                        logger="jca.my-ds">
          <attribute>InUseConnectionCount</attribute>
          <attribute>AvailableConnectionCount</attribute>
          <attribute>ConnectionCreatedCount</attribute>
          <attribute>ConnectionDestroyedCount</attribute>
          <attribute>MaxConnectionsInUseCount</attribute>
        </monitoredmbean>
      </configuration>
    </attribute>
    <depends>jboss.jca:name=MY-DS,service=ManagedConnectionPool</depends>
  </mbean>
</server>
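For the JVM activity mentioned above (heap size, threads), a similar monitor might look like the following sketch, assuming the standard jboss.system:type=ServerInfo MBean; the monitor name and log file name are illustrative:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE server PUBLIC
    "-//JBoss//DTD MBean Service 4.0//EN"
    "http://www.jboss.org/j2ee/dtd/jboss-service_4_0.dtd">
<server>
  <mbean code="org.jboss.services.loggingmonitor.LoggingMonitor"
         name="jboss.monitor:type=LoggingMonitor,name=JVMMonitor">
    <attribute name="Filename">${jboss.server.home.dir}/log/jvm.log</attribute>
    <attribute name="AppendToFile">true</attribute>
    <attribute name="RolloverPeriod">DAY</attribute>
    <attribute name="MonitorPeriod">10000</attribute>
    <attribute name="MonitoredObjects">
      <configuration>
        <!-- Heap and thread counts exposed by the standard ServerInfo MBean -->
        <monitoredmbean name="jboss.system:type=ServerInfo" logger="jvm">
          <attribute>FreeMemory</attribute>
          <attribute>TotalMemory</attribute>
          <attribute>MaxMemory</attribute>
          <attribute>ActiveThreadCount</attribute>
        </monitoredmbean>
      </configuration>
    </attribute>
  </mbean>
</server>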
Friday, December 18, 2009
Utilizing OpenSymphony Caching (OSCache) with iBatis
Background
My current project is ramping up to be deployed into a centrally hosted data center to be accessed by a large volume of users. In past deployments, we could expect, at most, 200-300 users to be logged into our web application. When translating that number to active users, we can expect somewhere in the range of 25 to 100 active users at any given point in time.
With deployment into a centrally hosted environment, our anticipated user base significantly increases to be approximately 1000 concurrent, active users. With this large number of users, we wanted to investigate caching frequently accessed objects.
The major area in our web application that currently utilizes caching is the DAO level. Our DAOs leverage iBatis and already use its in-memory cache implementation. Moving to a centrally hosted environment, the application will be clustered, thus supporting fail-over and high availability. However, using the iBatis cache, we risk users seeing different objects depending upon their assigned cluster instance and when that instance's cache was last refreshed. To synchronize cache flushing across the cluster, we decided to investigate incorporating a distributed caching mechanism. Since iBatis supports OSCache, we decided to start there.
Implementation
As mentioned previously, the iBatis documentation refers to using OSCache for distributed caching within a clustered environment. Assuming you are using Maven to build your project, configuring iBatis to use OSCache is extremely easy and well documented.
- Modify the project's pom.xml to include oscache.jar (2.4) as a dependency
- Identify the sqlMaps / DAOs where you wish to use OSCache and change the cacheModel type to "OSCACHE" (see the snippets after this list)
- Optionally, include an oscache.properties file on the classpath. In development or continuous integration (CI) environments, this file does not need to be included, as default properties will be applied. Deployments in production or other test areas can include the file.
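As a minimal sketch (the statement and cache names are hypothetical, not from our project), the Maven dependency and a sqlMap cacheModel switched to OSCache might look like:
<dependency>
  <groupId>opensymphony</groupId>
  <artifactId>oscache</artifactId>
  <version>2.4</version>
</dependency>
<!-- In the sqlMap: a cache flushed daily or whenever the update statement runs -->
<cacheModel id="somethingCache" type="OSCACHE" readOnly="true">
  <flushInterval hours="24"/>
  <flushOnExecute statement="updateSomething"/>
</cacheModel>
<select id="getSomethingData" cacheModel="somethingCache" resultClass="java.util.HashMap">
  SELECT * FROM SOMETHING
</select>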
http://www.opensymphony.com/oscache/wiki/Configuration.html
http://www.opensymphony.com/oscache/wiki/Clustering.html
Tuesday, April 7, 2009
Retrieving a list of changes for a release in SVN
Recently, for our project, we needed to review the list of SVN commits on a branch. I'm sure there are several ways to do this, possibly even filtering by dates. For our purposes, we wanted to review the entire list, and we used the following command:
svn log --stop-on-copy -v https://host/svn/project/branches/project-0.8.0902.1
And sha-bam, a nice, lengthy report showing the status of the changes committed to this branch.
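If you do want to narrow the list by dates, svn log also accepts date-based revision specifiers; a variant along the same lines (the dates here are illustrative):
svn log --stop-on-copy -v -r {2009-01-01}:{2009-04-07} https://host/svn/project/branches/project-0.8.0902.1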
Thanks goes to one of our many, in-house, resident SVN experts!
Tuesday, August 5, 2008
Having to scale a web application?
In a recent sprint planning meeting, solution owners unveiled how a new customer would be using our web application. Of course, it's a web application and, as such, needs to support many concurrent users. Up until now, we were looking for our application to support roughly 500 users; probably not all concurrent, but potentially.
Learning how the new client will use the application, we may blow those numbers out of the water, in both overall and concurrent users. So, how do you program an application to support large, very large, volumes of users? Will the application scale if we cluster the web application servers? Will performance degrade with more users? Can we just throw a bigger machine with more CPU and memory at the load?
I'm a big believer that scalability needs to be designed in from the get-go and then monitored. While some items may not be implemented immediately due to project constraints, scalability needs to be considered from day 1, and the code watched and reviewed to ensure new designs, code, etc. will not adversely affect scalability and, potentially, performance.
Two articles published on TSS about scaling JEE applications "hit the nail on the head". If you're faced with having to support large volumes of users, these articles are a must-read, as the writer had (has?) the opportunity to pound on applications to test their ability to scale under heavy load AND to analyze why they failed or succeeded.
If you're not having to program for scalability now, the articles are still an excellent read & resource!
Scaling Your Java EE Applications - Part 1
Scaling Your Java EE Applications Part 2
Tuesday, July 1, 2008
SVN Merge (Trunk to Branch)
Ever have code changes that need to be pushed into a branch? Or merged back into HEAD or the trunk?
Recently (today), I had the need to merge code changes from HEAD into a newly created branch. Given that my changes spanned a couple of weeks (no lectures, please, as I was on vacation :D), I could not remember all of the lines that had changed across 9 files. I didn't want to blindly copy the files into the branch, as I might (though shouldn't, really) overwrite another developer's changes.
After a quick Google search and a read of a short blog posting, I found a quick path forward. For the same reasons that caused Jake to write his post, I'm also writing this down so that I can easily find it.
I checked my files into HEAD and noted the revision number (7200). Then I changed into the branch's working-copy directory and ran the following command:
svn merge -r 7199:7200 https://phlcvs01/svn/netcds/trunk .
If you want to preview the changes first, specify '--dry-run', which causes SVN to list the changes that would occur. Using '-r 7199:7200' causes Subversion to grab only the differences between those two revisions. After executing the command, 'svn stat' shows the modified files that you then need to commit to the branch.
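For example, the preview form of the same merge:
svn merge --dry-run -r 7199:7200 https://phlcvs01/svn/netcds/trunk .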
Simple and easy.
Monday, March 17, 2008
Supporting Multiple DBs using iBatis
I previously wrote about configuring multiple data sources within JBoss. When writing SQL, you will undoubtedly encounter cases where your SQL is not ANSI-92 compliant; in other words, you have DB-specific SQL statements. Maybe it's for performance reasons. Maybe it's because of differences in handling sequence columns. How can you write, configure, and deploy the application without DB-specific SQL statements being handled in your Java code?
We encountered this issue on our current project and created a simple but elegant solution.
Using iBatis as our Java-to-DB mapping framework, all of our SQL statements are contained within XML files. With the potential requirement to support multiple databases (SQL Server and at least Oracle), we wanted a way to reuse the statements that were compliant across both databases.
Step 1:
Within the MappedStatement XML files, we appended the DB name to DB-specific statement ids. For example, for Oracle-specific SQL, the name would be 'getSomethingData-Oracle'. If the SQL was DB-neutral, we omitted the DB designation, as in the sketch below. But how do we modify the app to determine the appropriate SQL for the database at runtime?
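A sketch of the naming convention (the statement names and SQL are hypothetical):
<!-- DB-neutral statement, used when no DB-specific variant exists -->
<select id="getSomethingData" resultClass="java.util.HashMap">
  SELECT * FROM SOMETHING
</select>
<!-- Oracle-specific variant, selected at runtime by BaseDao -->
<select id="getSomethingData-Oracle" resultClass="java.util.HashMap">
  SELECT /*+ FIRST_ROWS */ * FROM SOMETHING
</select>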
Step 2:
Like good little programmers, we created a BaseDao class from which all concrete DAOs extend. On first use, our BaseDao class retrieves the connection metadata and determines the connected DB. Now we know the runtime DB. But how do we choose the appropriate SQL?
Step 3:
We modified BaseDao to extend org.springframework.orm.ibatis.support.SqlMapClientDaoSupport, giving our code the ability to retrieve a MappedStatement from the XML files and, with it, to check for the existence of a given MappedStatement - be it DB-specific or not.
Step 4:
Finally, within the concrete DAO classes, we called 'checkMappedStatement' when retrieving any MappedStatement. The BaseDao class handles selecting the appropriate SQL for the runtime DB - be it specific or generic.
The concrete DAOs retrieve the SQL from iBatis using the following construct. If DB-specific SQL exists within the MappedStatement XML files for the runtime DB, that SQL is returned:
getSqlMapClientTemplate().queryForList(checkMappedStatement("getSomethingData"));
Below is our BaseDao class. Quite simple.
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.SQLException;

import org.apache.log4j.Logger;
import org.springframework.orm.ibatis.support.SqlMapClientDaoSupport;

import com.ibatis.sqlmap.client.SqlMapException;
import com.ibatis.sqlmap.engine.impl.SqlMapClientImpl;
import com.ibatis.sqlmap.engine.mapping.statement.MappedStatement;

public class BaseDaoiBatis extends SqlMapClientDaoSupport {

    private static final Logger log = Logger.getLogger(BaseDaoiBatis.class);

    private String dbProduct = null;

    protected String checkMappedStatement(String id) {
        MappedStatement ms;
        String statementId = id;
        try {
            // Look for DB-specific SQL. If found, return the DB-specific mapping id.
            ms = ((SqlMapClientImpl) getSqlMapClient()).getMappedStatement(id + "-" + getDbProduct());
            if (ms != null) {
                statementId = id + "-" + getDbProduct();
            }
        }
        catch (SqlMapException sme) {
            // Not found: fall back to the default (DB-neutral) SQL mapping.
            log.debug("DB-specific SQL not found, using default SQL mapping");
        }
        return statementId;
    }

    // Lazily determine the DB product the first time it is needed.
    protected String getDbProduct() {
        if (dbProduct == null) {
            initDbProduct();
        }
        return dbProduct;
    }

    private void initDbProduct() {
        DatabaseMetaData dbMetaData;
        Connection conn = null;
        try {
            conn = getSqlMapClientTemplate().getDataSource().getConnection();
            dbMetaData = conn.getMetaData();
            log.info("Database product name is '" + dbMetaData.getDatabaseProductName() + "'");
            // Use >= 0: the product name may start with the search string (e.g. "Oracle").
            if (dbMetaData.getDatabaseProductName().indexOf("SQL Server") >= 0) {
                dbProduct = "MSSQLServer";
            } else if (dbMetaData.getDatabaseProductName().indexOf("Oracle") >= 0) {
                dbProduct = "Oracle";
            } else {
                dbProduct = dbMetaData.getDatabaseProductName();
            }
            log.info("Using " + dbProduct + " XML statements...");
        }
        catch (Exception e) {
            log.error("Exception occurred obtaining database information", e);
        }
        finally {
            if (conn != null) {
                try {
                    conn.close();
                } catch (SQLException e) {
                    log.error("SQLException thrown when closing connection");
                }
            }
        }
    }
}