Nov 05 2015

Downgrading from Multi-Source to Single Source

Categories: IDT,Universe Design Dave Rathbun @ 10:01 am

I mentioned a few weeks ago that I had found a way to “downgrade” a multi-sourced universe to a single-source. I wanted to create a beautiful blog post with screen shots to walk everyone through the process, but have not managed to carve out the time. 😳 Rather than continue the radio silence, I’m going to write up the steps and hopefully come back and make this post prettier later on. I also have a request to make which will be at the end of this blog post, so please read to the end, and if you can help, please do so.

Background

We, like many legacy Business Objects customers I am guessing, have been slow to dip our toes into the “multi-source” waters offered by the Information Design Tool (IDT). When we did our major upgrade to BI 4 a few years back, we evaluated whether we needed to convert all of our UNV universes to UNX, and decided not to, for a variety of reasons. But for new development we typically look at IDT as our universe creation tool. Continue reading “Downgrading from Multi-Source to Single Source”


Oct 12 2015

Yes, Virginia, You Can Switch a Multi-Source Universe Back to Single Source

Categories: IDT,Universe Design Dave Rathbun @ 10:07 pm

What’s the first decision you have to make when creating a new universe with the Information Design Tool (IDT)? You have to specify if you want a single-source or multi-source data foundation. Once that selection is made, it cannot be changed.

Well, sort of.

We had an odd performance challenge with a particular universe. It seemed that when it was created, the developer thought they might (perhaps) eventually want to use multiple sources, so they went ahead and created a multi-sourced data foundation. But the project never ended up needing a second data source, so for over a year they’ve been using a single-source multi-source universe. (Did you follow that?) As a diagnostic step, I thought about creating a new universe with a single-source data foundation and re-pointing the various reports and dashboards so they would use it. That would have been a lot of work, with no guarantee that it would have any impact on the issue, much less fix it.

Then I wondered to myself, what if I could figure out a way to “downgrade” the existing multi-source universe to a single source? That way I could still test my theory that our data federator engine wasn’t working as well as it should without having to re-point each report and redo each dashboard.

“spoiler alert” … it worked. 🙂 I was able to convert a multi-sourced universe to a single-sourced version without impacting the reports and dashboards, and we’ve been running on that version ever since.

How was it done? Well, I’m working on a presentation for the local DFW ASUG chapter meeting this coming Friday, and once I have that done I’ll post the details here as well. You’ll just have to wait a few days. 😛

Update: I won’t be presenting at the DFW chapter meeting after all, but I will still be posting this solution soon, hopefully next week.

Secondary note: I have also done the process in reverse… converted a single-source universe to a multi-sourced version, but with significantly more work involved. If you have a large number of reports, however, it may be easier to rework the universe than to re-point every report. Time will tell whether I’m successful or not…


Sep 19 2013

Using OLAP Functions to Extend Calendar Capabilities

Categories: Dynamic Dates,Universe Design Dave Rathbun @ 10:08 am

I think it’s probably a safe bet to suggest that just about every data warehouse (or even transactional system) has some sort of calendar table. In many cases, the unique key for this table might be the natural key of the date itself, or perhaps a system-generated surrogate key. That doesn’t really matter for this post. What I want to do is show one idea of how I used an OLAP window function called row_number() to extend my calendar functionality and make it really easy to schedule reports for the “last three months” given an input date. Continue reading “Using OLAP Functions to Extend Calendar Capabilities”
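As a rough sketch of the idea (the table and column names here are hypothetical, not necessarily the ones from the full post), row_number() can assign a sequential position to each row of a month-level calendar view, which turns “last three months” into simple arithmetic:

select cal_month,
       row_number() over (order by cal_month) as month_num
from   calendar_month

Once every month carries a month_num, the last three months relative to an input date are simply the rows where month_num falls between the input month’s number minus two and the input month’s number.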


Aug 29 2013

BI4 UNV Versus UNX … Which Do You Choose?

Categories: IDT,Universe Design Dave Rathbun @ 7:54 am

When SAP released BI4 several years ago it featured a major upgrade to one of the core technologies used by Business Objects since the beginning of the company: the universe. What does this mean for you and how does it impact your intentions to move forward with the latest and greatest offering from SAP? Many of you know that I currently work for a fairly large company, and large companies are often slower to move on to new technologies as they’re released. I have not talked a lot about BI4 in my blog yet primarily for that reason. However, we’ve had over a year to review the new Information Design Tool (IDT) and the BI4 .UNX format, and I’m finally ready to share some thoughts. Continue reading “BI4 UNV Versus UNX … Which Do You Choose?”


Jul 25 2013

Updated Strategy Renders Schema Change SDK Tool Obsolete

Categories: Universe Design,VBA Tools Dave Rathbun @ 10:23 am

Many years ago when I first started working with Teradata we had a challenge. The DBA team had defined a standard where each table was tagged to indicate the purpose of the table. Specifically, a development table would be called PROJ_D.TABLE_NAME and the equivalent table in production was called PROJ_P.TABLE_NAME. Why do this? In a Teradata connection I connect to a server, not to a schema (or instance, as Oracle would call it). One of the DBA strategies was to use the QA or “system test” hardware as a preliminary DR (disaster recovery) platform. That means they keep a copy of the tables in PROJ_S while the same tables exist as PROJ_P on the same server. In order to send queries to a specific schema I had to include the schema name (or prefix) on every table name in my universe. During DR testing I could reset my schema from PROJ_S to PROJ_P without moving the universe (it’s still on the QA server) and I would then be using the production tables.
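To make that concrete (SALES_FACT is a hypothetical table name), every piece of SQL the universe generated carried the schema prefix, so the DR reset amounted to swapping that prefix everywhere:

select order_id from PROJ_S.SALES_FACT -- QA server, system test tables
select order_id from PROJ_P.SALES_FACT -- same server, production copies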

While this did provide the advantage of being specific, it also presented a problem during migrations because I had to change my code. I first wrote about this back in 2008 when I shared a utility that I wrote to make this process easier. (Read the details in the Using the Designer SDK to Ease Migrations post.)

With the advent of BI4 the new challenge is that we don’t (yet) have an SDK. At the same time we have become a much more mature Teradata shop. Our DBA team recently introduced a new strategy that eliminates the issue altogether, meaning I don’t have to worry about either of the issues listed above.

New View Layer

Our standard for reporting says that our tools (Business Objects included) never access the source tables. As a result, the naming convention I described above was used on all of the views referenced in my universe. Assume we have views called PROJ_D.TABLE1, PROJ_D.TABLE2, and PROJ_D.TABLE3. In the “old days” I would have to update each of these references when I wanted to migrate my universe out of the development environment. Our new strategy seems simple on the surface, probably because it is :), but it solves both issues.

For each and every table that I am using we have now created a “semantic layer” view on top of the base view (which is on top of the base table). So for each of the above tables I now have this:

create view SEM_PROJ.TABLE1
as
select * from PROJ_D.TABLE1

The “PROJ” portion of the schema relates to the project, so it remains as part of the view name. The “SEM” prefix is of course short for “semantic layer” and indicates that this view is used by some sort of external tool. What is missing from the new view name is the location tag (_D for development, _S for system test, or _P for production). This seems like a very simple solution (and it is) but it took a while for us to get here. We have created the semantic layer views for one project so far, and it’s a real pleasure to be able to migrate a universe without touching it anymore. 😎 I anticipate we’ll be using this strategy for all of our Teradata projects from this point forward. Obviously the select clause changes in each environment, but finally I have a consistent view name in all environments.
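In production, for example, the very same view name simply selects from the production schema instead; a sketch following the pattern above:

create view SEM_PROJ.TABLE1
as
select * from PROJ_P.TABLE1

The reporting tool never sees the difference: the universe references SEM_PROJ.TABLE1 everywhere, and only the view body changes from one environment to the next.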

When I have to reset my universe due to a disaster recovery exercise, it’s now up to the DBA team to re-point the semantic layer views to the proper schema. When I migrate the universe I no longer have to touch anything, except to perhaps change a connection. It’s a much cleaner process, and no longer requires me to be concerned about waiting for the full SDK to become available in BI4.


Jul 13 2012

Specifying Oracle Date Formats

Categories: Universe Design Dave Rathbun @ 4:22 pm

I had a question from a business user the other day. I was confident that I knew the answer as soon as I saw the first part of her question. 🙂 To be sure, I waited until we got through the entire discussion. Her question started off with:

I am doing date math — using nvl to set a date to jan 01 2099 when it is null…

Right at that point I could have made good money by betting that she was getting a “non-numeric character was found where a numeric…” message. Sure enough, I was correct. The code she was trying to run looked like this:

NVL(TABLE.DT_COL, '01-JAN-2099')

What’s wrong with that? On the surface, nothing. She was looking at a specific date column from a table, and if that date was null, she was replacing it with a specific (constant) date value. Or at least that was her intent. The problem was that the intentions were being interpreted differently depending on where the report was refreshed. It was working in Desktop Intelligence and failing in Web Intelligence. Why was that, and how could it be fixed?

Oracle Implied Conversions

The problem is that if I don’t specifically tell Oracle how I want my string converted to a date, it will make a guess. Oracle will attempt to parse the string and determine the date, looking at the NLS_DATE_FORMAT setting for hints. However, that setting can be overridden by a variety of factors. Business Objects also has a variety of places where date formats can be specified, and they can get in the way too.
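Here is a hedged sketch of that dependency (DEMO_TBL and DT_COL are hypothetical stand-ins, though ALTER SESSION SET NLS_DATE_FORMAT and error ORA-01858 are standard Oracle): the very same literal succeeds or fails depending on the session’s date format.

-- session format matches the literal, so the implicit conversion works
alter session set NLS_DATE_FORMAT = 'DD-MON-YYYY';
select NVL(DT_COL, '01-JAN-2099') from DEMO_TBL;

-- session format does not match, so the implicit conversion fails with
-- ORA-01858: a non-numeric character was found where a numeric was expected
alter session set NLS_DATE_FORMAT = 'MM/DD/YYYY';
select NVL(DT_COL, '01-JAN-2099') from DEMO_TBL;

That also explains why the same report could work in Desktop Intelligence and fail in Web Intelligence: each tool can end up with a different effective date format.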

The way I have generally solved this is to remove any guessing from the process by explicitly defining my date format. Rather than provide the code as written above, I would do the following:

NVL(TABLE.DT_COL, TO_DATE('01-JAN-2099','DD-MON-YYYY'))

Now my string is converted to a date using my supplied format mask, and the dreaded “non-numeric found…” message can be avoided.

I made this suggestion to my business user, and after a quick universe update all of her issues were resolved. The bottom line is that I don’t always have to explicitly define everything, but when I have the chance to do so, I generally take it. It may take me a little longer up front, but it will save me time in the long run. Even if someone changes the standard Oracle date format on my server, the code I provided will continue to work. 😎


Jun 04 2012

Online Information Design Tool Tutorials

Categories: Universe Design Dave Rathbun @ 7:44 am

SAP has put out quite a few tutorials on the new Information Design Tool (IDT) in their new community. The table of contents ranges from concepts like “Concept: Navigate the interface” and “Concept: Create a universe” to more detailed tasks such as “Create a data foundation based on a single source relational database” and “Create a shortcut join.” Altogether there are 133 video tutorials currently posted on YouTube. If you’re new to the IDT or are wondering how to do common tasks that you’re used to from the original universe design tool, these tutorials look like they will really help.

Note: at this point I have only watched a few of the videos, and the audio sounds computer-generated. But if you don’t have access to training, or are familiar with universe design but aren’t clear where some of your favorite features went, they should prove very useful. Just skip the first 45-50 seconds to get past all of the introductory template stuff. 😎

Official Product Tutorials – SAP BusinessObjects Information Design Tool


Jun 01 2012

Old Work Flow Equivalents In IDT

Categories: Universe Design Dave Rathbun @ 11:09 am

While looking for something else, I found a document from SAP that details how “legacy” work flows from the original universe designer can be recreated (or not) in the Information Design Tool. Because of the risk of SAP changing their URL structure (what! that never happens… 😉 ) I have also downloaded the PDF and hosted it here on my blog. If the link above works, please use it as SAP is more likely to have the most up-to-date version. If the link is broken, you can try downloading the version of the document that I am hosting here on my blog.

Download Link: IDT legacy workflows.pdf


Apr 03 2012

Projection Function Automatic Update Process Irritation

Categories: Universe Design Dave Rathbun @ 10:23 am

I have known for years that using the “Tables” button on a universe object can be problematic. The technique is designed to allow a developer to include extra tables in an object definition without directly referencing them in the select statement. This could be done for a number of reasons, and it’s one of the reasons I designed the time-sliced solution the way I did, just to avoid this table update issue.

Recently I discovered the same “feature” is present in the measure projection function assignment logic as well. I wrote about how important it is to properly coordinate the SQL aggregate and the projection function some time back. I have also talked about how the database delegated projection function works. This week I had the need to create a whole set of database delegated measures, which went fine. Then later I had to update the SQL statement for these measures to correct a divide-by-zero issue. After publishing I went back and checked, and every single one of these measures (about 90 of them) had been reset back to a Sum projection function. 😡

It seems that the projection function works the same way that the tables button does. Every time I touch this object, I will have to remember to reset the projection function. To avoid that, I used the @Select() function for all of the time sliced objects and had them reference the base measure object, which was itself using the @Select() function to reference a measure “bit” object. I wasn’t sure I could nest one @Select() inside of another, and I was actually fairly certain I had tried that before. But apparently in 3.1 it works now.

My time-sliced objects generally have this formula:
Base Object: Sum(@Select(Measure Bit))
Time Slice Object: Sum(@Select(Measure Bit)) * @Select(Time Slice Bit)

Now I’ve done this:
Base Object: Sum(@Select(Measure Bit))
Time Slice Object: @Select(Base Object) * @Select(Time Slice Bit)

Notice that in the second case there is no Sum() function on the time slice object? That’s because there already is one in the base object, and I cannot nest aggregate functions. But nesting @Select() seems to work okay.

The bottom line is that there are certain automatic features of the universe designer software that I wish I could turn off. Most of the time they’re fine, but when they keep resetting important items on object definitions, and do so without notifying me that they’re doing it, that’s irritating.


Dec 21 2011

Why Can’t I Validate Prompts?

Categories: Prompts,Rants Dave Rathbun @ 9:23 am

One of the enhancements we have been requesting for years is the ability to validate prompts. (We’ve also been looking for the ever-so-popular ability to use a formula such as “Today()” as a default for a prompt, but this is different.) If we had true cascading prompts in Web Intelligence, that would eliminate one use case for validated prompts, but not all of them. I had someone comment on my blog recently asking how to validate one prompt selection against another, and that started me thinking… what would something like this look like if we did get it? Continue reading “Why Can’t I Validate Prompts?”

