
nVision performance tuning

By Larry Grey • July 26, 2005

It’s surprising to me how few people know how to tune nVision effectively. Even within PeopleSoft, there didn’t seem to be many people who could do it. One side benefit of knowing the “secret handshake” was that I got a free trip to Paris to help Carrefour.

Over the years, we added new features to make nVision easier to tune, and believe it or not, it’s easier to tune now than it has ever been. Hopefully this weblog entry will help you understand how to approach it.

Why is it so challenging?

Well, actually, it’s challenging to tune almost any product that does what nVision does. Because data is stored at a very granular level, tools like nVision need to aggregate the data to get results. In addition, the flexibility of trees in nVision can make it more challenging to tune than a star schema in a data warehouse (but the trade-off with a warehouse is that you are more limited in what you can do with it, believe it or not).

In many ways, I believe that the biggest challenge is more cultural than anything else. Because nVision makes it extremely easy to create “analytical” reports and view data in different ways, many people don’t expect to have to manage it as closely as they have to manage a data warehouse (although there are many data warehousing techniques not used by nVision that would dramatically improve performance).

How to approach tuning

The performance of nVision is driven by two aspects of how data is stored and how nVision accesses it:

  1. How easy it is to find the rows of data it needs to report on.
  2. How many rows have to be aggregated to get a number on a report.

Let’s talk a little bit about each of these.

Finding rows

Finding rows is the first place you should look when analyzing performance.

nVision accesses data in much the same manner as a ROLAP tool. There is a centralized fact table (often the ledger), with many different “dimensions” (or chartfields) that are used to group and filter the results. There are two key differences between the ledger tables in PeopleSoft and a well-designed star schema. The first is that in a star schema, the dimension tables are always joined to the fact table, which minimizes the impact of indexing the fact table. In nVision, dimension tables are only joined when needed for filtering and grouping the data, which means that the focus needs to be on indexing the fact (or ledger) table rather than the dimension tables.

Another difference is that nVision does not use a star or snowflake schema for its dimensions. It uses a proprietary structure called a tree, which provides much more flexibility in modeling hierarchies than a star schema would support. It also allows many different hierarchies to be used interchangeably with the fact table (another big bonus). However, this also means that the rules for flattening the tree and generating SQL need to be managed closely.
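To make this concrete, here is a rough sketch of the shape of SQL nVision generates when it joins the ledger to a flattened tree (a “tree selector” table). Treat it as illustrative only: the table and column names follow PeopleSoft conventions (PS_LEDGER, PSTREESELECT10), but the literal values are made up, and your actual statement will vary with your ledger layout and tree performance options.

    -- Illustrative sketch: one number per tree node, aggregated from the
    -- detail ledger via a join to a flattened-tree selector table.
    SELECT L.TREE_NODE_NUM, SUM(A.POSTED_TOTAL_AMT)
      FROM PS_LEDGER A, PSTREESELECT10 L
     WHERE A.LEDGER = 'ACTUALS'                 -- made-up ledger name
       AND A.FISCAL_YEAR = 2005
       AND A.ACCOUNTING_PERIOD BETWEEN 1 AND 12
       AND L.SELECTOR_NUM = 10901               -- made-up selector number
       AND A.ACCOUNT BETWEEN L.RANGE_FROM AND L.RANGE_TO
     GROUP BY L.TREE_NODE_NUM;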

Therefore every organization needs to look at two things when managing the performance of nVision: indexing (especially on the fact table), and the tree performance options (which drive how the SQL is generated by nVision).

The most successful organizations are the ones that take a methodical approach to both of these (this is how one organization is able to generate around 200,000 nVision instances each month on a 4-way NT box running nVision).

When looking at indexing, it is important to identify which fields are constrained and grouped on most often (and to analyze the cardinality, or number of distinct values, of each). Quite often, putting the accounting period as the first field in the index is the best place to start, because there are at least 12 values and they are always used to constrain data. Account is often the second field, because you almost always constrain against the account field (otherwise you would get meaningless financial statements).
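As a sketch of what that advice might produce, here is a hypothetical index on the ledger table. The index name and the trailing columns are assumptions; choose them based on your own analysis of constraints and cardinality.

    -- Hypothetical index: accounting period first (always constrained),
    -- account second (almost always constrained), then other chartfields
    -- that your reports filter and group on most often.
    CREATE INDEX PSXLEDGER ON PS_LEDGER
      (ACCOUNTING_PERIOD, ACCOUNT, FISCAL_YEAR, LEDGER, DEPTID);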

When looking at tree performance options, the best approach is to go through all the permutations of options for each tree used (focusing on one tree at a time) to determine which option works best for a given report and data set. This is the approach the high-volume customer took. They prioritized their reports starting with the ones that had the worst performance and focused on them one at a time, and they set a defined performance goal for their reporting window so that they could claim success when they met it.
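To give a feel for what these options change, here is a rough sketch of two ways the same tree selection might be translated into SQL (all literal values are made up). Which one wins depends on how many detail values each node covers and on your indexes, which is why testing each permutation against real data matters.

    -- Variant 1: literal values. The flattened tree is expanded directly
    -- into the statement; tends to work well when nodes cover few values.
    SELECT SUM(A.POSTED_TOTAL_AMT)
      FROM PS_LEDGER A
     WHERE A.ACCOUNTING_PERIOD BETWEEN 1 AND 12
       AND A.ACCOUNT IN ('400000', '400100', '400200');

    -- Variant 2: join to a tree selector table; tends to work well when
    -- nodes cover wide ranges of detail values.
    SELECT SUM(A.POSTED_TOTAL_AMT)
      FROM PS_LEDGER A, PSTREESELECT10 L
     WHERE A.ACCOUNTING_PERIOD BETWEEN 1 AND 12
       AND L.SELECTOR_NUM = 10901
       AND A.ACCOUNT BETWEEN L.RANGE_FROM AND L.RANGE_TO;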

Aggregating Rows

When you’re confident that you’ve done a good job of indexing and setting performance options, the next place to look is the number of detail rows you’re reporting on. Because nVision is all about summing up details to get summary values (that’s what a financial statement is), you can run into performance issues simply because each number on a consolidated financial statement may represent tens of thousands of detail rows in the database (all of which need to be aggregated to get that number).

A good way to determine the effect of this is to run a SQL trace of your nVision report. For any SQL statement that performs a sum of a numeric field (generally on the ledger table), edit the traced SQL to change the sum to a count instead. Run the edited SQL, and each number in the results is a count of the rows that had to be selected to produce a single number on the report. Note that, depending on your performance options, the tree selector tables may not be populated, so it’s a good bet to have the report open in the nVision designer when you run this trace.
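For example, a traced statement like the first one below would be edited into the second (the WHERE clause and values are illustrative):

    -- As traced: the aggregation behind one number on the report.
    SELECT SUM(A.POSTED_TOTAL_AMT)
      FROM PS_LEDGER A
     WHERE A.FISCAL_YEAR = 2005
       AND A.ACCOUNTING_PERIOD BETWEEN 1 AND 12
       AND A.ACCOUNT BETWEEN '400000' AND '499999';

    -- Edited: how many detail rows had to be read to produce that number.
    SELECT COUNT(*)
      FROM PS_LEDGER A
     WHERE A.FISCAL_YEAR = 2005
       AND A.ACCOUNTING_PERIOD BETWEEN 1 AND 12
       AND A.ACCOUNT BETWEEN '400000' AND '499999';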

If each number on a report requires more than 100 rows, it’s a good bet that a summary ledger is needed to improve performance. The trade-off you make with a summary ledger is that you are summarizing the data in batch rather than online. Therefore, if you can have many reports use the same summary ledger, you are making the most of this trade-off.

For populating the summary ledger, you can either use the GL program to do this or use a little trick that I’ve had many customers use: a materialized view. In both circumstances, you’ll want to populate the summary ledger definition so the reporting tools know how to access them. With a materialized view, however, you can let the database do the aggregation for you instead of running a program to do it (much better from a performance perspective). The key, however, is that a materialized view will only support aggregating by dropping a chartfield from the summary table (whereas the batch program will allow you to summarize to nodes in a tree).
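Here is a sketch of the materialized view trick on Oracle. All names are assumptions; the grouping columns would be whatever your summary ledger definition keeps, and the one chartfield dropped here (DEPTID) is what provides the aggregation.

    -- Hypothetical materialized view that aggregates the detail ledger by
    -- dropping the DEPTID chartfield. The summary ledger definition still
    -- has to be set up in PeopleSoft so reporting tools know to use it.
    CREATE MATERIALIZED VIEW PS_S_LEDGER_MV
      BUILD IMMEDIATE
      REFRESH COMPLETE ON DEMAND
    AS
    SELECT BUSINESS_UNIT, LEDGER, ACCOUNT, FISCAL_YEAR,
           ACCOUNTING_PERIOD, CURRENCY_CD,
           SUM(POSTED_TOTAL_AMT) AS POSTED_TOTAL_AMT
      FROM PS_LEDGER
     GROUP BY BUSINESS_UNIT, LEDGER, ACCOUNT, FISCAL_YEAR,
              ACCOUNTING_PERIOD, CURRENCY_CD;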

More on SQL tracing in nVision

It’s important to note that in PeopleTools 8.44, we improved the tracing so that you can easily turn it on and capture the SQL when running nVision on the process scheduler server (it’s an option in the process scheduler configuration). Because the trace is captured and managed with the process, you don’t have to dig through a long log of SQL to determine which statements are associated with which reports. Additionally, it eliminates the need to trace in the design environment (which has just enough differences in how it works to make it an ineffective way to capture SQL and manage performance).

I’m not sure how understandable this is, so I’m more than happy to clarify any confusing areas in a future posting.
