Various components of Hyperion (source):
Workspace is a common window to view the contents of all Hyperion components.
Hyperion Reporting and Analysis:
One zero-footprint Web-based thin client provides users with access to content:
● Financial Reporting: scheduled or on-demand, highly formatted financial and operational reporting from most data sources, including Hyperion Planning – System 9 and Hyperion Financial Management – System 9.
● Interactive Reporting: ad hoc relational queries, self-service reporting and dashboards against ODBC data sources.
● SQR Production Reporting: high-volume, enterprise-wide production reporting.
● Web Analysis: interactive ad hoc analysis, presentation, and reporting of multidimensional data.
Hyperion Reporting and Analysis Architecture
Client: The client tools consist of:
Workspace: a zero-footprint DHTML web client that provides the user interface for viewing and interacting with reports created in the authoring studios.
Authoring Studios: the client interfaces used to create reports. They include:
(a) Hyperion Interactive Reporting Studio: a Windows client that connects to many data sources, including flat files, and builds highly interactive presentation reports: simple tabular reports, pivot reports, graphs and charts with a drill-anywhere feature (you don't have to define a hierarchy or drill path), slicing and dicing, and dashboards with features such as hyperlinks to detail reports and an embedded browser that can open other web applications inside the dashboard.
(b) Hyperion Financial Reporting Studio: a Windows client that connects to multidimensional data sources and creates highly formatted financial reports by simply dragging and dropping rows and columns and defining page breaks.
(c) Hyperion SQR Reporting Studio: a Windows client that connects to a wide range of data sources and produces high-volume, pixel-perfect operational reports that can be scheduled.
(d) Hyperion Web Analysis: a Java applet that connects to different data sources using JDBC and builds interactive reports and dashboards.
Smart View for Office: a tight integration with the Microsoft Office tools. You can perform analysis such as drill-downs, Keep Only and Remove Only, POV management and data refresh, and copy data cells into MS Word and PowerPoint, where they are automatically refreshed if the data changes in the source. Smart View also includes Hyperion Visual Explorer (HVE), which presents data in interactive, presentable graphs and charts.
Application Layer: It consists of two parts:
1. Web Tier: It consists of two parts: (a) the web server, which sends and receives content from the web clients, and (b) the application server, a J2EE application server. The web server and application server are connected by an HTTP connector.
The Web Tier hosts the web applications: Workspace, Web Analysis, Interactive Reporting, SQR Production Reporting and Financial Reporting.
2. Services Tier: It contains the services and servers that control the functionality of the web applications and clients. The core services handle repository information, authorization, session information and document publication.
More to read:
http://www.youtube.com/watch?v=j5REDY8cpiM&NR=1
http://download.oracle.com/docs/cd/E12032_01/doc/nav/portal_1.htm
http://businessintelligence.ittoolbox.com/groups/technical-functional/hyperion-admin-l/ir-92-hyperion-biservice-is-not-accessible-1543368
http://businessintelligence.ittoolbox.com/groups/technical-functional/brio-l/hyperion-system-93-adding-additional-bi-service-memory-exhausted-4039255
http://businessintelligence.ittoolbox.com/groups/technical-functional/brio-l/active-x-client-hyperion-931-out-of-memory-on-db2-2288146
http://businessintelligence.ittoolbox.com/groups/technical-functional/brio-l/hyperion-out-of-memory-error-2099199
http://businessintelligence.ittoolbox.com/groups/technical-functional/hyperion-bi-l/hyperion-designer-85-out-of-memory-for-large-queries-1332854
http://essbase.ru/archives/wiki/obiee-11gr1-architecture-and-use-of-weblogic-server
https://www.packtpub.com/toc/business-analysts-guide-oracle-hyperion-interactive-reporting-11-table-contents
Wednesday, May 25, 2011
Hyperion Reporting and Analysis Architecture
Posted by My Tech Blog 2 comments
Monday, May 23, 2011
Understanding JSON: the 3 minute lesson
Source
What does it stand for?
JavaScript Object Notation.
And what does that mean?
JSON is a syntax for passing around objects that contain name/value pairs, arrays and other objects.
Here's a tiny scrap of JSON:
{"skillz": {
    "web": [
        {"name": "html", "years": "5"},
        {"name": "css", "years": "3"}
    ],
    "database": [
        {"name": "sql", "years": "7"}
    ]
}}
You got that? So you'd recognise some JSON if you saw it now? Basically:
Squiggles, Squares, Colons and Commas
Squiggly brackets act as 'containers'
Square brackets holds arrays
Names and values are separated by a colon.
Array elements are separated by commas
JSON is like XML because:
They are both 'self-describing' meaning that values are named, and thus 'human readable'
Both are hierarchical. (i.e. You can have values within values.)
Both can be parsed and used by lots of programming languages
Both can be passed around using AJAX (i.e., via XMLHttpRequest)
JSON is UNlike XML because:
XML uses angle brackets, with a tag name at the start and end of an element: JSON uses squiggly brackets with the name only at the beginning of the element.
JSON is less verbose so it's definitely quicker for humans to write, and probably quicker for us to read.
JSON can be parsed trivially using the eval() function in JavaScript (modern engines also provide the safer JSON.parse())
JSON includes arrays (where each element doesn't have a name of its own)
In XML you can use any name you want for an element; in JSON you can't use reserved words from JavaScript
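To make that concrete, here is the "skillz" snippet from above being parsed. This is a sketch: `text` and `data` are just local names, and JSON.parse is used instead of eval because it accepts only JSON, not arbitrary code.

```javascript
// Parse the "skillz" snippet shown earlier. JSON.parse accepts only
// JSON (not arbitrary code), so it is the safer alternative to eval().
const text = '{"skillz": {' +
  '"web": [{"name": "html", "years": "5"}, {"name": "css", "years": "3"}],' +
  '"database": [{"name": "sql", "years": "7"}]' +
  '}}';

const data = JSON.parse(text);

// Names become object properties; arrays become real JavaScript arrays.
console.log(data.skillz.web[0].name);     // the string "html"
console.log(data.skillz.database.length); // 1
```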
But Why? What's good about it?
When you're writing AJAX code, using JSON means you avoid hand-writing XML. This is quicker.
Again, when you're writing AJAX code, which looks easier: the XML approach or the JSON approach?
The XML approach:
bring back an XML document
loop through it, extracting values from it
do something with those values, etc,
versus
The JSON approach:
bring back a JSON string.
'eval' the JSON
So this is Object-Oriented huh?
Nah, not strictly.
JSON provides a nice encapsulation technique that you can use for separating values and functions out, but it doesn't provide inheritance, polymorphism, interfaces, or OO goodness like that.
And it's just for the client-side right?
Yes and no. On the server side you can easily serialize/deserialize your objects to/from JSON. .NET programmers can use libraries like Json.NET to do this automatically (using reflection, I assume), or generate custom code to perform it even faster on a case-by-case basis.
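In JavaScript itself the round trip is built in; a minimal sketch (the `employee` object is made up for illustration):

```javascript
// Round-trip an object through JSON, as a server might before
// replying to an AJAX request.
const employee = { name: "Jane", skills: ["sql", "css"], years: 7 };

const wire = JSON.stringify(employee); // serialize: what goes over the network
const copy = JSON.parse(wire);         // deserialize: what the receiver rebuilds

console.log(wire);           // {"name":"Jane","skills":["sql","css"],"years":7}
console.log(copy.skills[1]); // css
```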
REST Approach
Source
1. What is REST?
REST stands for Representational State Transfer. (It is sometimes spelled "ReST".) It relies on a stateless, client-server, cacheable communications protocol -- and in virtually all cases, the HTTP protocol is used.
REST is an architecture style for designing networked applications. The idea is that, rather than using complex mechanisms such as CORBA, RPC or SOAP to connect between machines, simple HTTP is used to make calls between machines.
•In many ways, the World Wide Web itself, based on HTTP, can be viewed as a REST-based architecture.
RESTful applications use HTTP requests to post data (create and/or update), read data (e.g., make queries), and delete data. Thus, REST uses HTTP for all four CRUD (Create/Read/Update/Delete) operations.
REST is a lightweight alternative to mechanisms like RPC (Remote Procedure Calls) and Web Services (SOAP, WSDL, et al.). Later, we will see how much simpler REST is.
•Despite being simple, REST is fully-featured; there's basically nothing you can do in Web Services that can't be done with a RESTful architecture.
REST is not a "standard". There will never be a W3C recommendation for REST, for example. And while there are REST programming frameworks, working with REST is so simple that you can often "roll your own" with standard library features in languages like Perl, Java, or C#.
As a programming approach, REST is a lightweight alternative to Web Services and RPC.
Much like Web Services, a REST service is:
•Platform-independent (you don't care if the server is Unix, the client is a Mac, or anything else),
•Language-independent (C# can talk to Java, etc.),
•Standards-based (runs on top of HTTP), and
•Can easily be used in the presence of firewalls.
Like Web Services, REST offers no built-in security features, encryption, session management, QoS guarantees, etc. But also as with Web Services, these can be added by building on top of HTTP:
•For security, username/password tokens are often used.
•For encryption, REST can be used on top of HTTPS (secure sockets).
•... etc.
One thing that is not part of a good REST design is cookies: The "ST" in "REST" stands for "State Transfer", and indeed, in a good REST design operations are self-contained, and each request carries with it (transfers) all the information (state) that the server needs in order to complete it.
3. How Simple is REST?
Let's take a simple web service as an example: querying a phonebook application for the details of a given user. All we have is the user's ID.
Using Web Services and SOAP, the request would be a full XML document wrapped in a SOAP envelope.
And with REST? The query will probably look like this:
http://www.acme.com/phonebook/UserDetails/12345
Note that this isn't the request body -- it's just a URL. This URL is sent to the server using a simple GET request, and the HTTP reply is the raw result data -- not embedded inside anything, just the data you need in a way you can directly use.
•It's easy to see why Web Services are often used with libraries that create the SOAP/HTTP request and send it over, and then parse the SOAP response.
•With REST, a simple network connection is all you need. You can even test the API directly, using your browser.
•Still, REST libraries (for simplifying things) do exist, and we will discuss some of these later.
Note how the URL's "method" part is not called "GetUserDetails", but simply "UserDetails". It is a common convention in REST design to use nouns rather than verbs to denote simple resources.
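That noun convention is easy to follow by hand. As a sketch, a client could assemble resource URLs like this (`resourceUrl` is an illustrative helper, and the acme.com phonebook is just the article's example, not a real API):

```javascript
// Build a noun-style REST resource URL like the one above.
function resourceUrl(base, resource, id) {
  return base + "/" + resource + "/" + encodeURIComponent(id);
}

console.log(resourceUrl("http://www.acme.com/phonebook", "UserDetails", 12345));
// http://www.acme.com/phonebook/UserDetails/12345
```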
The letter analogy
A nice analogy for REST vs. SOAP is mailing a letter: with SOAP, you're using an envelope; with REST, it's a postcard. Postcards are easier to handle (by the receiver), waste less paper (i.e., consume less bandwidth), and have a short content. (Of course, REST requests aren't really limited in length, esp. if they use POST rather than GET.)
But don't carry the analogy too far: unlike letters-vs.-postcards, REST is every bit as secure as SOAP. In particular, REST can be carried over secure sockets (using the HTTPS protocol), and content can be encrypted using any mechanism you see fit. Without encryption, REST and SOAP are both insecure; with proper encryption in place, both are equally secure.
4. More Complex REST Requests
The previous section included a simple example for a REST request -- with a single parameter.
REST can easily handle more complex requests, including multiple parameters. In most cases, you'll just use HTTP GET parameters in the URL.
For example:
http://www.acme.com/phonebook/UserDetails?firstName=John&lastName=Doe
If you need to pass long parameters, or binary ones, you'd normally use HTTP POST requests, and include the parameters in the POST body.
As a rule, GET requests should be for read-only queries; they should not change the state of the server and its data. For creation, updating, and deleting data, use POST requests. (POST can also be used for read-only queries, as noted above, when complex parameters are required.)
•In a way, this web page (like most others) can be viewed as offering services via a REST API; you use a GET request to read data, and a POST request to post a comment -- where more and longer parameters are required.
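The rule of thumb above can be sketched as a small dispatcher. `requestFor` is a hypothetical helper, and it follows the article's convention of POST for all state-changing operations (later practice also uses PUT and DELETE):

```javascript
// Reads use GET with parameters in the URL; anything that changes
// state uses POST with parameters in the body.
function requestFor(operation, resource, params) {
  const query = Object.entries(params)
    .map(([k, v]) => encodeURIComponent(k) + "=" + encodeURIComponent(v))
    .join("&");
  if (operation === "read") {
    return { method: "GET", url: resource + "?" + query, body: null };
  }
  // create / update / delete: parameters travel in the POST body
  return { method: "POST", url: resource, body: query };
}

const req = requestFor("read", "http://www.acme.com/phonebook/UserDetails",
                       { firstName: "John", lastName: "Doe" });
console.log(req.method); // GET
console.log(req.url);
// http://www.acme.com/phonebook/UserDetails?firstName=John&lastName=Doe
```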
While REST services might use XML in their responses (as one way of organizing structured data), REST requests rarely use XML. As shown above, in most cases, request parameters are simple, and there is no need for the overhead of XML.
•One advantage of using XML is type safety. However, in a stateless system like REST, you should always verify the validity of your input, XML or otherwise!
Portlets & Portal
Introduction
JSR 168 Vs JSR 286
Portlets Vs Servlets
Portlet Lifecycle
Thursday, May 19, 2011
Penetration Test and Scanner Benchmark
Introduction (source from Corsaire)
Penetration testing is an often confused term. Through this guide Corsaire, a world leader in information security, provides a broad overview of what it means, why you would want it, and how to get the most out of the process.
What is a penetration test?
Why conduct penetration testing?
What can be tested?
What should be tested?
What do you get for the money?
What to do to ensure the project is a success
What is a penetration test?
Much of the confusion surrounding penetration testing stems from the fact that it is a relatively recent and rapidly evolving field. Additionally, many organisations will have their own internal terminology (one man's penetration test is another's vulnerability audit or technical risk assessment).
At its simplest, a penetration-test (actually, we prefer the term security assessment) is the process of actively evaluating your information security measures. Note the emphasis on 'active' assessment; the information systems will be tested to find any security issues, as opposed to a solely theoretical or paper-based audit.
The results of the assessment will then be documented in a report, which should be presented at a debriefing session, where questions can be answered and corrective strategies can be freely discussed.
Why conduct a penetration test?
From a business perspective, penetration testing helps safeguard your organisation against failure, through:
Preventing financial loss through fraud (hackers, extortionists and disgruntled employees) or through lost revenue due to unreliable business systems and processes.
Proving due diligence and compliance to your industry regulators, customers and shareholders. Non-compliance can result in your organisation losing business, receiving heavy fines, gathering bad PR or ultimately failing. At a personal level it can also mean the loss of your job, prosecution and sometimes even imprisonment.
Protecting your brand by avoiding loss of consumer confidence and business reputation.
From an operational perspective, penetration testing helps shape information security strategy through:
Identifying vulnerabilities and quantifying their impact and likelihood so that they can be managed proactively; budget can be allocated and corrective measures implemented.
What can be tested?
All parts of the way that your organisation captures, stores and processes information can be assessed; the systems that the information is stored in, the transmission channels that transport it, and the processes and personnel that manage it. Examples of areas that are commonly tested are:
Off-the-shelf products (operating systems, applications, databases, networking equipment etc.)
Bespoke development (dynamic web sites, in-house applications etc.)
Telephony (war-dialling, remote access etc.)
Wireless (WIFI, Bluetooth, IR, GSM, RFID etc.)
Personnel (screening process, social engineering etc.)
Physical (access controls, dumpster diving etc.)
What should be tested?
Ideally, your organisation should have already conducted a risk assessment, so will be aware of the main threats (such as communications failure, e-commerce failure, loss of confidential information etc.), and can now use a security assessment to identify any vulnerabilities that are related to these threats. If you haven't conducted a risk assessment, then it is common to start with the areas of greatest exposure, such as the public facing systems; web sites, email gateways, remote access platforms etc.
Sometimes the 'what' of the process may be dictated by the standards that your organisation is required to comply with. For example, a credit-card handling standard (like PCI) may require that all the components that store or process card-holder data are assessed.
What do you get for the money?
While a great deal of technical effort is applied during the testing and analysis, the real value of a penetration test is in the report and debriefing that you receive at the end. If they are not clear and easy to understand, then the whole exercise is of little worth.
Ideally the report and debriefing should be broken into sections that are specifically targeted at their intended audience. Executives need the business risks and possible solutions clearly described in layman's terms, managers need a broad overview of the situation without getting lost in detail, and technical personnel need a list of vulnerabilities to address, with recommended solutions.
What to do to ensure the project is a success
Defining the scope
The scope should be clearly defined, not only in the context of the components to be (or not to be) assessed and the constraints under which testing should be conducted, but also the business and technical objectives. For example penetration testing may be focussed purely on a single application on a single server, or may be more far reaching; including all hosts attached to a particular network.
Choosing a security partner
Another critical step to ensure that your project is a success is in choosing which supplier to use.
As an absolute fundamental when choosing a security partner, first eliminate the supplier who provided the systems that will be tested. Using them would create a conflict of interest (will they really tell you that they deployed the systems insecurely, or will they quietly ignore some issues?).
Detailed below are some questions that you might want to ask your potential security partner:
Is security assessment their core business?
How long have they been providing security assessment services?
Do they offer a range of services that can be tailored to your specific needs?
Are they vendor independent (do they have NDAs with vendors that prevent them passing information to you)?
Do they perform their own research, or are they dependent on out-of-date information that is placed in the public domain by others?
What are their consultant's credentials?
How experienced is the proposed testing team (how long have they been testing, and what is their background and age)?
Do they hold professional certifications, such as PCI, CISSP, CISA, and CHECK?
Are they recognised contributors within the security industry (white papers, advisories, public speakers etc)?
Are the CVs available for the team that will be working on your project?
How would the supplier approach the project?
Do they have a standardised methodology that meets and exceeds the common ones, such as OSSTMM, CHECK and OWASP?
Can you get access to a sample report to assess the output (is it something you could give to your executives; do they communicate the business issues in a non-technical manner)?
What is their policy on confidentiality?
Do they outsource or use contractors?
Are references available from satisfied customers in the same industry sector?
Is there a legal agreement that will protect you from negligence on behalf of the supplier?
Does the supplier maintain sufficient insurance cover to protect your organisation?
Standards compliance
There are a number of good standards and guidelines in relation to information security in general, for penetration tests in particular, and for the storage of certain types of data. Any provider chosen should at least have a working knowledge of these standards and would ideally be exceeding their recommendations.
Notable organisations and standards include:
PCI
The Payment Card Industry (PCI) Data Security Requirements were established in December 2004, and apply to all Members, merchants, and service providers that store, process or transmit cardholder data. As well as a requirement to comply with this standard, there is a requirement to independently prove verification.
ISACA
ISACA was established in 1967 and has become a pace-setting global organization for information governance, control, security and audit professionals. Its IS Auditing and IS Control standards are followed by practitioners worldwide and its research pinpoints professional issues challenging its constituents. CISA, the Certified Information Systems Auditor is ISACA's cornerstone certification. Since 1978, the CISA exam has measured excellence in the area of IS auditing, control and security and has grown to be globally recognized and adopted worldwide as a symbol of achievement.
CHECK
The CESG IT Health Check scheme was instigated to ensure that sensitive government networks, and those constituting the GSI (Government Secure Intranet) and CNI (Critical National Infrastructure), were secured and tested to a consistently high level. The methodology aims to identify known vulnerabilities in IT systems and networks which may compromise the confidentiality, integrity or availability of information held on that IT system. In the absence of other standards, CHECK has become the de facto standard for penetration testing in the UK, mainly on account of its rigorous certification process. While good, it concentrates only on infrastructure testing, not applications. However, open source methodologies such as the following provide viable and comprehensive alternatives, without UK Government association. Note also that CHECK consultants are only required when the assessment is for HMG or related parties and meets the requirements above, and that if you want a CHECK test you will need to surrender your penetration testing results to CESG.
OSSTMM
The aim of The Open Source Security Testing Methodology Manual (OSSTMM) is to set forth a standard for Internet security testing. It is intended to form a comprehensive baseline for testing that, if followed, ensures a thorough and comprehensive penetration test has been undertaken. This should enable a client to be certain of the level of technical assessment independently of other organisation concerns, such as the corporate profile of the penetration-testing provider.
OWASP
The Open Web Application Security Project (OWASP) is an Open Source community project developing software tools and knowledge based documentation that helps people secure web applications and web services. It is an open source reference point for system architects, developers, vendors, consumers and security professionals involved in designing, developing, deploying and testing the security of web applications and Web Services.
The key areas of relevance are the forthcoming Guide to Testing Security of Web Applications and Web Services and the testing tools under the development projects. The Guide to Building Secure Web Applications not only covers design principals, but also is a useful document for setting out criteria by which to assess vendors and test systems.
Scanner Benchmark
Check it out
Some useful tools:
OWASP Live CD
WebScarab
Tuesday, May 17, 2011
Caching and Frameworks
Intro to caching and caching frameworks.
What Are the Trade-Offs Involved in Caching?
Caching is always based on compromise: a trade-off between performance, scalability and accuracy using the various resources available. Ease of configuration is an important secondary consideration. We should consider how to balance these factors to achieve the best performance possible.
Application performance depends on efficient data distribution. It's crucial to ensure fast data access for maximum application performance. It pays to build a cache and avoid unnecessary round-trips to the datastore. By reducing traffic between the different layers of an application, you can substantially diminish the size and cost of the installation and greatly enhance the system's responsiveness.
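As a minimal sketch of that idea, here is a read-through cache in front of a pretend datastore. `fetchUser` stands in for the real round-trip; real caching frameworks like those below add eviction, TTLs, distribution and synchronization, which is where the accuracy trade-off comes in:

```javascript
// A read-through cache: check the Map first, hit the (expensive)
// datastore only on a miss.
const cache = new Map();
let datastoreHits = 0;

function fetchUser(id) {
  datastoreHits++; // pretend this is a slow database query
  return { id: id, name: "user-" + id };
}

function getUser(id) {
  if (!cache.has(id)) {
    cache.set(id, fetchUser(id)); // the round-trip happens once per key
  }
  return cache.get(id);
}

getUser(42);
getUser(42);                // second call is served from the cache
console.log(datastoreHits); // 1
```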
Hibernate Caching
Distributed and Replicated Caching
Using JCache
Open Source Cache Solutions in Java
Thursday, May 12, 2011
SecureCI
A VM containing a turn-key solution for continuous integration with source code control, build management, automated testing, security analysis, defect tracking, and project management, all using open source tools.
Check it out
Open Source Application Server Comparison
Things to consider when choosing open source application servers
•Load balancing: random; minimum load; round-robin; weighted round-robin; performance-based; load-based; dynamic algorithm based; dynamic registration.
•Clustering & HA. Additionally: distributed transaction management; in-memory replication of session state information; no single point of failure.
•Connection pooling.
•Caching. JNDI caching. Distributed caching with synchronization.
•Thread pooling.
•Configurable user Quality of Service.
•Analysis tools.
•Low system/memory requirements.
•Optimized subsystems (RMI, JMS, JDBC drivers, JSP tags & cacheable page fragments).
•Optimistic transaction support.
Wikipedia Comparison
Openlogic (Summary | Full)
Web Application and App Server Performance Tuning Tips
Source content is from Java Performance Tuning
The following pages have their detailed tips extracted below
•J2EE Application server performance
•Tuning IBM's WebSphere product
•WebSphere V3 Performance Tuning Guide
•Weblogic tuning (generally applicable Java tips extracted)
•Overview of common application servers
•Web application scalability.
•J2EE Application servers
•Load Balancing Web Applications
•J2EE clustering
•"EJB2 clustering with application servers"
•Choosing an application server
•Choosing a J2EE application server, emphasizing the importance of performance issues
•Implementing clustering on a J2EE web server (JBoss+Jetty)
•Tuning tips intended for Sun's "Web Server" product, but actually generally applicable.
•Various tips
•iPlanet Web Server guide to servlets, with a section at the end on "Maximizing Servlet Performance".
•Sun community chat on iPlanet
•Article on high availability architecture
--------------------------------------------------------------------------------
The following detailed tips have been extracted from the raw tips page
J2EE Application server performance (Page last updated April 2001, Added 2001-04-20, Author Misha Davidson, Publisher Java Developers Journal). Tips:
•Good performance has sub-second latency (response time) and hundreds of (e-commerce) transactions per second.
•Avoid n-way database joins: every join has a multiplicative effect on the amount of work the database has to do. The performance degradation may not be noticeable until large datasets are involved.
•Avoid bringing back thousands of rows of data: this can use a disproportionate amount of resources.
•Cache data when reuse is likely.
•Avoid unnecessary object creation.
•Minimize the use of synchronization.
•Avoid using the SingleThreadModel interface for servlets: write thread-safe code instead.
•ServletRequest.getRemoteHost() is very inefficient, and can take seconds to complete the reverse DNS lookup it performs.
•OutputStream can be faster than PrintWriter. JSPs are only generally slower than servlets when returning binary data, since JSPs always use a PrintWriter, whereas servlets can take advantage of a faster OutputStream.
•Excessive use of custom tags may create unnecessary processing overhead.
•Using multiple levels of BodyTags combined with iteration will likely slow down the processing of the page significantly.
•Use optimistic transactions: write to the database while checking, using WHERE clauses containing the old data, that the data has not been overwritten. Note, however, that optimistic transactions can lead to worse performance if many transactions fail.
•Use lazy-loading of dependent objects.
•For read-only queries involving large amounts of data, avoid EJB objects and use JavaBeans as an intermediary to access, manipulate and store the data for JSP access.
•Use stateless session EJBs to cache and manage infrequently changed data. Update the EJB occasionally.
•Use a dedicated session bean to perform and cache all JNDI lookups in a minimum number of requests.
•Minimize interprocess communication.
•Use clustering (multiple servers) to increase scalability.
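The optimistic-transaction tip above can be illustrated without a database: the update succeeds only if the row still holds the value the caller originally read. In SQL that check is the WHERE clause carrying the old data; `optimisticUpdate` here is a toy in-memory stand-in:

```javascript
// Toy in-memory optimistic update: the write succeeds only if the
// field still holds the value the caller originally read.
const row = { id: 1, balance: 100 };

function optimisticUpdate(row, field, oldValue, newValue) {
  if (row[field] !== oldValue) {
    return false; // someone else changed it first: the transaction fails
  }
  row[field] = newValue;
  return true;
}

console.log(optimisticUpdate(row, "balance", 100, 80)); // true
console.log(optimisticUpdate(row, "balance", 100, 60)); // false (stale read)
console.log(row.balance);                               // 80
```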
Tuning IBM's WebSphere product. White paper: "Methodology for Production Performance Tuning". Only non-product specific Java tips have been extracted here. (Page last updated September 2000, Added 2001-01-19, Author Gennaro (Jerry) Cuomo, Publisher IBM). Tips:
•A size restricted queue (closed queue) allows system resources to be more tightly managed than an open queue.
•The network provides a front-end queue. A server should be configured to use the network queue as its bottleneck, i.e. only accept a request from the network when there are sufficient resources to process the request. This reduces the load on an app server. However, sufficient requests should be accepted to ensure that the app server is working at maximum capacity, i.e. try not to let a component sit idle while there are still requests that can be accepted even if other components are fully worked.
•Try to balance the workload of the various components.
•[Paper shows a nice throughput curve giving recommended scaling behavior for a server]
•The desirable target bottleneck is the CPU, i.e. a server should be tuned until the CPU is the remaining bottleneck. Adding CPUs is a simple remedy to this.
•Use connection pools and cached prepared statements for database access.
•Object memory management is particularly important for server applications. Typically garbage collection could take between 5% and 20% of the server execution time. Garbage collection statistics provide a useful monitor to determine the server's "health". Use the verbosegc flag to collect basic GC statistics.
•GC statistics to monitor are: total time spent in GC (target less than 15% of execution time); average time per GC; average memory collected per GC; average objects collected per GC.
•For long lived server processes it is particularly important to eliminate memory leaks (references retained to objects and never released).
•Use -ms and -mx to tune the JVM heap. Bigger means more space but GC takes longer. Use the GC statistics to determine the optimal setting, i.e. the setting which provides the minimum average overhead from GC.
•The ability to reload classes is typically achieved by testing a filesystem timestamp. This check should be done at set intermediate periods, and not on every request as the filesystem check is an expensive operation.
WebSphere V3 Performance Tuning Guide (Page last updated March 2000, Added 2001-01-19, Authors Ken Ueno, Tom Alcott, Jeff Carlson, Andrew Dunshea, Hajo Kitzhöfer, Yuko Hayakawa, Frank Mogus, Colin D. Wordsworth, Publisher IBM). Tips:
•[The Red book lists and discusses tuning parameters available to Websphere]
•Run an application server and any database servers on separate server machines.
•JVM heap size: -mx, -ms [-Xmx, -Xms]. As a starting point for a server based on a single JVM, consider setting the maximum heap size to 1/4 the total physical memory on the server and setting the minimum to 1/2 of the maximum heap. Sun recommends that ms be set to somewhere between 1/10 and 1/4 of the mx setting. They do not recommend setting ms and mx to be the same. Bigger is not always better for heap size. In general increasing the size of the Java heap improves throughput to the point where the heap no longer resides in physical memory. Once the heap begins swapping to disk, Java performance drastically suffers. Therefore, the mx heap setting should be set small enough to contain the heap within physical memory. Also, large heaps can take several seconds to fill up, so garbage collection occurs less frequently which means that pause times due to GC will increase. Use verbosegc to help determine the optimum size that minimizes overall GC.
•In some cases turning off asynchronous garbage collection ("-noasyncgc", not always available to all JVMs) can improve performance.
•Setting the JVM stack and native thread stack size (-oss and -ss) too large (e.g. greater than 2MB) can significantly degrade performance.
•When security is enabled (e.g. SSL, password authentication, security contexts and access lists, encryption, etc) performance is degraded by significant amounts.
•One of the most time-consuming procedures of a database application is establishing a connection to the database. Use connection pooling to minimize this overhead.
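The connection-pooling tip above can be sketched generically: expensive-to-create resources are created once up front and borrowed/returned instead of opened per request. This is my own minimal illustration (a real server would use the container's DataSource pool, with validation and timeouts):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Supplier;

// Minimal pooling sketch: pre-create N resources (e.g. database
// connections) and hand them out repeatedly, avoiding the per-request
// cost of establishing a new connection.
public class SimplePool<T> {
    private final BlockingQueue<T> idle;

    public SimplePool(int size, Supplier<T> factory) {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) idle.add(factory.get());
    }

    // Blocks until a resource is free, capping concurrent use at the pool size.
    public T borrow() {
        try {
            return idle.take();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException("interrupted while waiting for pool", e);
        }
    }

    public void release(T resource) {
        idle.offer(resource); // return for reuse
    }

    public static void main(String[] args) {
        SimplePool<StringBuilder> pool = new SimplePool<>(2, StringBuilder::new);
        StringBuilder c = pool.borrow(); // reused, not newly constructed per request
        pool.release(c);
        System.out.println("borrowed and returned");
    }
}
```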
Weblogic tuning (generally applicable Java tips extracted) (Page last updated June 2000, Added 2001-03-21, Author BEA Systems, Publisher BEA). Tips:
•Response time is affected by: contention and wait times, particularly for shared resources; and software and hardware component performance, i.e. the amount of time that resources are needed.
•A well-designed application can increase performance by simply adding more resources (for instance, an extra server).
•Use clustered or multi-processing machines; use a JIT-enabled JVM; use Java 2 rather than JDK 1.1;
•Use -noclassgc. Use the maximum possible heap size that is also small enough to prevent the JVM from swapping (e.g. 80% of the RAM left over after other required processes). Consider starting with the minimum initial heap size so that the garbage collector doesn't suddenly encounter a full heap with lots of garbage. Benchmarkers sometimes like to set the heap as high as possible to completely avoid GC for the duration of the benchmark.
•Distributing the application over several server JVMs means that GC impact will be spread in time, i.e. the various JVMs will most likely GC at different times from each other.
•On Java 1.1 the most effective heap size is that which limits the longest GC incurred pause to the longest acceptable pause in processing time. This will typically require a reduction in the maximum heap size.
•Too many threads causes too much context switching. Too few threads may underutilize the system. If n=number of threads, k=number of CPUs, then: (n < k) results in an underutilized CPU; (n == k) is theoretically ideal, but each CPU will probably be underutilized; (n > k) by a "moderate number of threads" is practically ideal; (n > k) by "many threads" can lead to significant performance degradation from context switching. Blocked threads count for less in the previous formulae.
•Symptoms of too few threads: CPU is waiting to do work, but there is work that could be done; Can not get 100% CPU; All threads are blocked [on i/o] and runnable when you do an execution snapshot.
•Symptoms of too many threads: An execution snapshot shows that there is a lot of context switching going on in your JVM; Your performance increases as you decrease the number of threads.
•If many client connections are dropped or refused, the TCP listen queue may be too short.
•Try to avoid excessive cycling (creation/deletion or activation/passivation) of beans.
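The n-vs-k sizing rule above can be turned into a starting configuration. In this sketch the 2x multiplier is my own illustrative guess at "moderately more threads than CPUs", not a WebLogic recommendation; the right value comes from the measurements described in the symptoms tips:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ThreadSizing {
    public static void main(String[] args) throws InterruptedException {
        int k = Runtime.getRuntime().availableProcessors();
        int n = k * 2; // start moderately above k; tune up/down empirically
        ExecutorService pool = Executors.newFixedThreadPool(n);
        for (int i = 0; i < n * 4; i++) {
            final int job = i;
            pool.submit(() -> {
                Math.sqrt(job); // stand-in for real request work
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("CPUs=" + k + ", threads=" + n);
    }
}
```

If an execution snapshot then shows heavy context switching, reduce n; if the CPU cannot reach full utilization while work is queued, increase it.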
Overview of common application servers. I've extracted the performance related features (Page last updated October 2001, Added 2001-10-22, Author Pieter Van Gorp, Publisher Van Gorp). Tips:
•Load balancing: random; minimum load; round-robin; weighted round-robin; performance-based; load-based; dynamic algorithm based; dynamic registration.
•Clustering. Additionally: distributed transaction management; in-memory replication of session state information; no single point of failure.
•Connection pooling.
•Caching. JNDI caching. Distributed caching with synchronization.
•Thread pooling.
•Configurable user Quality of Service.
•Analysis tools.
•Low system/memory requirements.
•Optimized subsystems (RMI, JMS, JDBC drivers, JSP tags & cacheable page fragments).
•Optimistic transaction support.
Web application scalability. (Page last updated June 2000, Added 2001-05-21, Author Billie Shea, Publisher STQE Magazine). Tips:
•Web application scalability is the ability to sustain the required number of simultaneous users and/or transactions, while maintaining adequate response times to end users.
•The first solution built with new skills and new technologies will always have room for improvement.
•Avoid deploying an application server that will cause embarrassment, or that could weaken customer confidence and business reputation [because of bad response times or lack of scalability].
•Consider application performance throughout each phase of development and into production.
•Performance testing must be an integral part of designing, building, and maintaining Web applications.
•There appears to be a strong correlation between the use of performance testing tools and the likelihood that a site would scale as required.
•Automated performance tests must be planned for and iteratively implemented to identify and remove bottlenecks.
•Validate the architecture: decide on the maximum scaling requirements and then performance test to validate the necessary performance is achievable. This testing should be done on the prototype, before the application is built.
•Have a clear understanding of how easily your configurations of Web, application, and/or database servers can be expanded.
•Factor in load-balancing software and/or hardware in order to efficiently route requests to the least busy resource.
•Consider the effects security will have on performance: adding a security layer to transactions will impact response times. Dedicate specific server(s) to handle secure transactions.
•Select performance benchmarks and use them to quantify the scalability and determine performance targets and future performance improvements or degradations. Include all user types such as "information-gathering" visitors or "transaction" visitors in your benchmarks.
•Perform "Performance Regression Testing": continuously re-test and measure against the established benchmark tests to ensure that application performance hasn't been degraded because of the changes you've made.
•Performance testing must continue even after the application is deployed. For applications expected to perform 24/7, even seemingly inconsequential issues like database logging can degrade performance. Continuous monitoring is key to spotting even the slightest abnormality: set performance capacity thresholds and monitor them.
•When application transaction volumes reach 40% of maximum expected volumes, it is time to start executing plans to expand the system.
J2EE Application servers (Page last updated April 2001, Added 2001-04-20, Authors Christopher G. Chelliah and Sudhakar Ramakrishnan, Publisher Java Developers Journal). Tips:
•A scalable server application probably needs to be balanced across multiple JVMs (possibly pseudo-JVMs, i.e. multiple logical JVMs running in the same process).
•Performance of an application server hinges on caching, load balancing, fault tolerance, and clustering.
•Application server caching should include web-page caches and data access caches. Other caches include caching servers which "guard" the application server, intercepting requests and either returning those that do not need to go to the server, or rejecting or delaying those that may overload the app server.
•Application servers should use connection pooling and database caching to minimize connection overheads and round-trips.
•Load balancing mechanisms include: round-robin DNS (alternating different IP-addresses assigned to a server name); and re-routing mechanisms to distribute requests across multiple servers. By maintaining multiple re-routing servers and a client connection mechanism that automatically checks for an available re-routing server, fault tolerance is added.
•Using one thread per user can become a bottleneck if there are a large number of concurrent users.
•Distributed components should consider the proximity of components to their data (i.e., avoid network round-trips) and how to distribute any resource bottlenecks (i.e., CPU, memory, I/O) across the different nodes.
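The round-robin re-routing mechanism mentioned above is the simplest balancing policy to sketch. This is my own illustration (hostnames are made up); note it has exactly the weakness the DNS round-robin tips describe, namely no load measurement:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Plain round-robin over a fixed server list: each request goes to the
// next server in turn, regardless of how loaded that server is.
public class RoundRobin {
    private final List<String> servers;
    private final AtomicInteger next = new AtomicInteger();

    public RoundRobin(List<String> servers) {
        this.servers = servers;
    }

    public String pick() {
        // floorMod keeps the index non-negative even after int overflow
        return servers.get(Math.floorMod(next.getAndIncrement(), servers.size()));
    }

    public static void main(String[] args) {
        RoundRobin rr = new RoundRobin(List.of("app1:8080", "app2:8080"));
        System.out.println(rr.pick()); // app1:8080
        System.out.println(rr.pick()); // app2:8080
        System.out.println(rr.pick()); // app1:8080 again
    }
}
```

Weighted or load-based policies replace the `getAndIncrement` counter with a choice informed by server weights or reported load.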
Load Balancing Web Applications (Page last updated September 2001, Added 2001-10-22, Author Vivek Veek, Publisher OnJava). Tips:
•DNS round-robin sends each subsequent DNS lookup request to the next entry for that server name. This provides a simple machine-level load-balancing mechanism, but is only appropriate for session independent or shared-session servers.
•DNS round-robin has no server load measuring mechanisms, so requests can still go to overloaded servers, i.e. the load balancing can be very unbalanced.
•Hardware load-balancers solve many of the problems of DNS round-robin, but introduce a single point of failure.
•A web server proxy can also provide load-balancing by redirecting requests to multiple backend webservers.
J2EE clustering (Page last updated August 2001, Added 2001-08-20, Author Abraham Kang, Publisher JavaWorld). Tips:
•Consider cluster-related and load balancing programming issues from the beginning of the development process.
•Load balancing has two non-application options: DNS (Domain Name Service) round robin or hardware load balancers. [Article discusses the pros and cons].
•To support distributed sessions, make sure: all session referenced objects are serializable; store session state changes in a central repository.
•Try to keep multiple copies of objects to a minimum.
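The "all session referenced objects are serializable" rule above can be smoke-tested in code: implementing Serializable is not enough if a referenced field is not itself serializable. A minimal check I use for illustration (the class and method names are my own):

```java
import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Round-trips an object through Java serialization; an object that fails
// here cannot be replicated to other cluster nodes or stored centrally.
public class SessionCheck {
    static boolean isSerializable(Object o) {
        if (!(o instanceof Serializable)) return false;
        try (ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(o); // throws if any reachable field is not serializable
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isSerializable("a session attribute")); // true
        System.out.println(isSerializable(new Object()));          // false
    }
}
```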
"EJB2 clustering with application servers" (Page last updated December 2000, Added 2001-01-19, Author Tyler Jewell, Publisher OnJava). Tips:
•[Article discusses multiple independent ways to load balance EJBs]
Choosing an application server (Page last updated January 2002, Added 2002-02-22, Author Sue Spielman, Publisher JavaPro). Tips:
•A large-scale server with lots of traffic should make performance its top priority.
•Performance factors to consider include: connection pooling; types of JDBC drivers; caching features, and their configurability; CMP support.
•Inability to scale with reliable performance means lost customers.
•Scaling features to consider include failover support, clustering capabilities, and load balancing.
Choosing a J2EE application server, emphasizing the importance of performance issues (Page last updated February 2001, Added 2001-02-21, Author Steve Franklin, Publisher DevX). Tips:
•Application server performance is affected by: the JDK version; connection pooling availability; JDBC version and optimized driver support; caching support; transactional efficiency; EJB component pooling mechanisms; efficiency of webserver-appserver connection; efficiency of persistence mechanisms.
•Your application server needs to be load tested with scaling, to determine suitability.
•Always validate the performance of the app server on the target hardware with peak expected user numbers.
•Decide on what is acceptable downtime for your application, and ensure the app server can deliver the required robustness. High availability may require: transparent fail-over; clustering; load balancing; efficient connection pooling; caching; duplicated servers; scalable CPU support.
Implementing clustering on a J2EE web server (JBoss+Jetty) (Page last updated September 2001, Added 2001-10-22, Author Bill Burke, Publisher OnJava). Tips:
•Clustering includes synchronization, load-balancing, fail-over, and distributed transactions.
•[article discusses implementing clustering in an environment where clustering was not previously present].
•The different EJB commit options affect database traffic and performance. Option 'A' (read-only local caching) has the smallest overhead.
•Hardware load balancers are a simple and fast solution to distributing HTTP requests to clustered servers.
Tuning tips intended for Sun's "Web Server" product, but actually generally applicable. (Page last updated 1999, Added 2000-10-23, Author ? - a Sun document, Publisher Aikido). Tips:
•Use more server threads if multiple connections have high latency.
•Use keep-alive sockets for higher throughput.
•Increase server listen queues for high load or high latency servers.
•Avoid or reduce logging.
•Buffer logging output: use less than one real output per log.
•Avoid reverse DNS lookups.
•Write time stamps rather than formatted date-times.
•Separate paging and application files.
•A high VM heap size may result in paging, but could avoid some garbage collections.
•Occasional very long GCs makes the VM hang for that time, leading to variability in service quality.
•Doing GC fairly often and avoiding paging is more efficient.
•Security checks consume CPU resources. You will get better performance if you can turn security checking off.
Various tips. For web servers? (Page last updated 2000, Added 2000-10-23, Author ?, Publisher ?). Tips:
•Test multiple VMs.
•Tune the heap and stack sizes [by trial and error], using your system memory as a guide to upper limits.
•Keep the system file cache large. [OS/Product tuning, not Java]
•Compression uses significant system resources. Don't use it on a server unless necessary.
•Monitor thread utilization. Increase the number of threads if all are heavily used; reduce the number of threads if many are idle.
•Empirically test for the optimal number of database connections.
iPlanet Web Server guide to servlets, with a section at the end on "Maximizing Servlet Performance". (Page last updated July 2000, Added 2001-02-21, Author ?, Publisher Sun). Tips:
•Try to optimize the servlet loading mechanism, e.g. by listing the servlet first in loading configurations.
•Tune the heap size.
•Keep the classpath short.
Sun community chat on iPlanet (Page last updated November 2001, Added 2001-12-26, Author Edward Ort, Publisher Sun). Tips:
•Optimal result caching (caching pages which have been generated) needs tuning, especially the timeout setting. Make sure the timeout is not too short.
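The result-caching timeout trade-off above is easy to see in a sketch: each generated page is kept until its timeout expires, so too short a timeout means regenerating on nearly every request. This is my own generic illustration, not iPlanet's implementation (the 60-second TTL is an arbitrary example value):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// TTL cache for generated pages: get() returns null once an entry has
// passed its expiry time, forcing the page to be regenerated.
public class ResultCache {
    private static final class Entry {
        final String page;
        final long expiresAt;
        Entry(String page, long expiresAt) { this.page = page; this.expiresAt = expiresAt; }
    }

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public ResultCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    public String get(String url) {
        Entry e = cache.get(url);
        return (e != null && e.expiresAt > System.currentTimeMillis()) ? e.page : null;
    }

    public void put(String url, String page) {
        cache.put(url, new Entry(page, System.currentTimeMillis() + ttlMillis));
    }

    public static void main(String[] args) {
        ResultCache cache = new ResultCache(60_000); // 60s timeout; tune per workload
        cache.put("/report", "<html>rendered page</html>");
        System.out.println(cache.get("/report") != null); // hit within the TTL
    }
}
```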
Article on high availability architecture. If the system isn't up when you need it, it's not performing. (Page last updated November 1998, Added 2000-10-23, Author Sam Wong, Publisher Sun). Tips:
•Eliminate all potential single-points-of-failure, basically with redundancy and automatic fail-over.
•Consider using the redundant components to improve performance, with a component failure causing decreased performance rather than system failure.
Posted by My Tech Blog 0 comments
Tuesday, May 10, 2011
Where are the differences between JVMs
•JRockit
•IBM JVM
•SUN JVM
•Open JDK
•Blackdown
•Kaffe
JVM implementations can differ in the way they implement JIT compiling, optimizations, garbage collection, platforms supported, versions of Java supported, etc. They all must meet a common set of features and behaviors so that Java bytecode executes correctly.
The major difference tends to be in licensing. Other non-technical differences tend to be in free/paid support options, integration with other technologies (usually J2EE servers), and access to source code.
Note: while a J2EE server runs on the JVM, some servers have integrated tools for monitoring, analyzing, and tweaking JVM performance.
As far as technical differences go, those have grown less significant over the years. Once upon a time, the IBM and JRockit JVMs had far superior performance to the reference Sun implementation. This was due to significant differences in the types of runtime optimizations, differences in garbage collection, and differences in native code (and how much native code various classes use). These performance differences aren't as significant anymore.
Some JVMs also include or integrate with diagnostics and monitoring tools. JRockit includes a set of tools for monitoring your JVM performance. Sun provides various JMX-based tools with overlapping features to do the same. IBM WebSphere once included a similar set of tools for its whole J2EE application server; I'm not sure if it still does.
Some of the open source JVMs tend to have slightly slower performance because they have been redeveloped from the ground up; as such, they've got a bit more catching up to do. Some say Blackdown was significantly slower than the Sun JVM and was also a bit behind in supported versions of Java.
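Which implementation you are actually running can be checked at runtime: every JVM reports its identity through standard system properties. A small sketch (the example output in the comment will of course vary by vendor):

```java
// Prints the identity of the running JVM implementation; useful when a
// server machine has several vendors' JDKs installed.
public class WhichJvm {
    public static void main(String[] args) {
        System.out.println(System.getProperty("java.vm.name"));    // e.g. an "... Server VM" string
        System.out.println(System.getProperty("java.vm.vendor"));  // e.g. Oracle, IBM, BEA Systems
        System.out.println(System.getProperty("java.vm.version"));
    }
}
```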
Why is JRockit recommended for production environments?
- It has more diagnostic tools than other JDKs, such as JRA (the JRockit Runtime Analyzer)
- A management console (JRMC, JRockit Mission Control) with no performance overhead, which ships with the JDK
- Better performance on Intel architectures than other JVMs
- Higher memory usage in exchange for better performance
- A strong code optimization strategy (can be disabled with -Xnoopt if you don't want optimization)
Oracle recommends that you use the JRockit JDK with your Oracle products because it has enhanced garbage collection strategies and a built-in JRMC tool to monitor JVM activity at runtime with very little burden on the server/JVM.
There are some tools that let you see what the JVM is doing at runtime:
jdb – the debugger
jps – the process status tool, which displays process information for current Java processes
javap – the class file disassembler
javah – the C header and stub generator, used to write native methods
extcheck – a utility which can detect JAR-file conflicts
apt – the annotation-processing tool
jhat – (experimental) Java heap analysis tool
jstack – (experimental) utility which prints Java stack traces of Java threads
jstat – (experimental) Java Virtual Machine statistics monitoring tool
jstatd – (experimental) jstat daemon
jinfo – (experimental) This utility gets configuration information from a running Java process or crash dump.
jmap – (experimental) This utility outputs the memory map for Java and can print shared object memory maps or heap memory details of a given process or core dump.
idlj – the IDL-to-Java compiler. This utility generates Java bindings from a given Java IDL file.
VisualVM – visual tool integrating several command-line JDK tools and lightweight performance and memory profiling capabilities
jrunscript – Java command-line script shell.
There is a nice post, "Open the Black Box (JVM)", about tools from the different JDK vendors.
References:
Difference between JVM implementations
Why and How Oracle JRockit?
Open the Black Box - Oracle JRockit, Sun Hotspot and IBM J9 JVM Tools