Feb 26 2014

Big Universe + Security Profiles = Slow Query Generation

Categories: Rants, Report Techniques, Web Intelligence | Dave Rathbun @ 6:01 pm

The actual origin of the concept of a “red herring” is unknown, but that doesn’t stop one from causing grief when you are trying to diagnose a performance issue. If you are not familiar with the concept, a red herring is something that initially appears to be relevant but ultimately proves to have nothing to do with the actual issue. It’s a popular technique in mystery novels… and in tech support calls.

Case in point: Today I had to help someone who was wondering why their report took over thirty seconds to display a prompt window when there was only one prompt in the document. Clearly it was a prompt issue, right? Or something related to the list of values definition for that object? Continue reading “Big Universe + Security Profiles = Slow Query Generation”


Dec 11 2013

Diversified Semantic Layer Guest Appearance

Categories: General | Dave Rathbun @ 1:21 pm

Since this was actually recorded and published several weeks ago, I guess I’m late to the party. You may have already seen this if you follow the Diversified Semantic Layer, but in case you haven’t, I was a guest on their video podcast a few weeks ago. Eric and Josh hosted Derick and me in an hour-long discussion of all things semantic layer.

Universe Design Hacks

It was a ton of fun, even if it looks like I can barely keep my eyes open. Trust me, it was Josh (based in Australia) who should have been the sleepy one! We talked about subjects ranging from the very specific (why put aggregate functions on every measure?) to the more broad (do you let your business users build universes?) while Eric tried to keep us on track. There were quite a few topics that we agreed we should come back to and cover in more detail.

And then I showed yet another trick during my DFW ASUG chapter session, which caused Eric to tweet this:

What can I say, we only had an hour? :)


Sep 19 2013

Using OLAP Functions to Extend Calendar Capabilities

Categories: Dynamic Dates, Universe Design | Dave Rathbun @ 10:08 am

I think it’s probably a safe bet to suggest that just about every data warehouse (or even transactional system) has some sort of calendar table. In many cases the unique key for this table might be the natural key of the date itself, or perhaps a system-generated surrogate key. That doesn’t really matter for this post. What I want to do is show one idea of how I used an OLAP function called row_number() to extend my calendar functionality and make it really easy to schedule reports for the “last three months” given an input date; a quick sketch of the idea is below. Continue reading “Using OLAP Functions to Extend Calendar Capabilities”
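The core of the idea, as a minimal sketch (the table and column names here are my own illustration, not the actual objects from the post): number the rows of a month-level calendar table consecutively, so that “last three months” becomes simple arithmetic on a sequence number rather than date math.

select month_start_date,
       row_number() over (order by month_start_date) as month_seq
from   calendar_month

-- given the month_seq value for the input date, the last three months
-- reduce to: where month_seq between input_seq - 2 and input_seq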


Sep 06 2013

Unmerging Dimensions in Web Intelligence

Categories: Web Intelligence | Dave Rathbun @ 2:02 pm

One of the things that I really wish SAP had left alone during the rewrite of Web Intelligence between XI 3 and BI4 is the merging interface. The way you merged dimensions in XI 3.x was brilliant, and gave the report developer an excellent interface for managing merged dimensions. In BI4, for some reason, it looks like they took their design ideas from Desktop Intelligence instead. I was reminded of this today when I tried to “unmerge” (demerge?) two dimension objects in BI4. Continue reading “Unmerging Dimensions in Web Intelligence”


Aug 29 2013

BI4 UNV Versus UNX … Which Do You Choose?

Categories: IDT, Universe Design | Dave Rathbun @ 7:54 am

When SAP released BI4 several years ago it featured a major upgrade to one of the core technologies used by Business Objects since the beginning of the company: the universe. What does this mean for you, and how does it impact your plans to move forward with the latest and greatest offering from SAP? Many of you know that I currently work for a fairly large company, and large companies are often slower to adopt new technologies as they’re released. I have not talked much about BI4 on my blog yet, primarily for that reason. However, we’ve had over a year to review the new Information Design Tool (IDT) and the BI4 .UNX format, and I’m finally ready to share some thoughts. Continue reading “BI4 UNV Versus UNX … Which Do You Choose?”


Aug 13 2013

Mastering Business Analytics with SAP

Categories: Conferences | Dave Rathbun @ 12:27 pm

I have said this before: if you ever get invited to speak at a conference hosted by The Eventful Group, don’t wait, don’t think, just do it! They treat speakers fantastically, and their events are well put together. Thanks to Sophie, Debra, and crew for making my stay in Melbourne wonderful despite the fact that my luggage got there 36 hours after I did…

Josh Fletcher tweeted the following link to a collection of tweets and pictures on Storify that summarize the event:

I had two sessions. The first one, on Monday, was scheduled opposite Mico, so I didn’t have that many people attend. ;) In that session I talked about PepsiCo and our initial success (after a long and winding road) with Explorer. I will be repeating this talk (with some minor changes) at the SBOUC event coming up in California. On Tuesday I had a second session about the Information Design Tool and various items to consider when upgrading to BI4. I will be posting more about that topic here on my blog in the coming months.


Aug 01 2013

Is External Data Always Good?

Categories: Rants | Dave Rathbun @ 6:43 pm

Note to readers: I started this post back in 2011. After taking a break from blogging I am going back and looking through some of my old drafts and seeing what might still be current, and what has expired. I thought that this one merited some additional attention and publication, even though some of the notes are from two years ago. — Dave

SAP had some fun at the BI 4.0 launch in New York a while back. For years SAP (and other vendors) have been talking about their ability to bring in external data from various social media sources. Two SAP presenters at the launch event took a vote via Twitter as to which tie would meet the “Scissors of Destiny” at the end of the session. (Steve Lucas made an impassioned plea to save his tie, which he said was a gift from his wife, versus Dave’s tie, which he “… just bought last night.” Steve won, and his blue tie survived.) It was a fun display of technology, but is it really that important? How impressive would it have been if the “fail whale” had picked that moment to make an appearance?

I don’t usually spend a lot of time here on my blog talking about philosophical aspects of BI as I am personally more interested in technical issues and solving problems. But the apparent consensus as to the importance of social media bugs me.

The Internet is a wild place where rules are not always followed. If there is money to be made, then someone will figure out a way to abuse the system. It’s not just the “little guys” either, as evidenced by the way retailer JC Penney apparently took specific steps to trick Google during the holiday shopping season. Again, this was back in 2011.

What do you do with the information?

Does it do any good to listen to what is being said on social media without having an action plan to respond?

Do you really trust an external entity (such as Facebook) to host critical data?

Did you know that you can reportedly buy Twitter followers now? (Seriously, google for “buy twitter followers” and see what you find.)

There are rumors that Sarah Palin got caught setting up a secondary Facebook account just so she could “like” herself and skew the results shown on her main page. This type of abuse – if performed manually – should have minimal impact. However, it is apparently far too easy to set up bots to do the same thing at scale. In fact, there are companies you can legitimately hire (as opposed to going underground) to do this for you. One term I came across while researching background for this article was quite amusing: hacktivist. :)

Is there a point to all of this rambling? Not really, I guess. Or if there is, it’s that despite SAP and everyone else appearing to really want to make social media relevant, I find myself asking why it is so important.

Human behavior – online or not – often boils down to risks and rewards. The problem is that rewards can inspire the wrong behavior. I talked about this in a guest blog at The Decision Factor: Lessons in Business Intelligence: Be Careful What You Wish For. The cost of setting up a web site today is extremely minimal. The ability to generate advertising revenue, however, is also very minimal. Suppose that it costs $10 a year to host a site and it makes $0.50 per month in revenue. It’s hardly worth doing, right? But what if you scale that up? Now it costs $1,000,000 to set up the sites (the same $10 economics across 100,000 of them), but you’re generating $50,000 per month in income. I can’t find a link at the moment, but there was someone making millions of dollars by buying expired domains and putting junk content on them.

By one estimate, the Internet will soon have more garbage than valuable content! Some might say that this has already happened.

That being said, there are certainly valid reasons to consider using social media. The recent (yes, this really is recent) phenomenon of Sharknado proved that. ;)


Jul 30 2013

Pivot UserResponse() Values Into Rows

Categories: Report Techniques, Variables! | Dave Rathbun @ 6:15 am

Several years ago I wrote a post that has generated a fair number of follow-up questions and comments. The post was related to splitting user response values onto separate lines, and it used some basic string operations to do that. The important distinction is that the values appeared on different lines but remained in a single cell, which meant the output was suitable for a header cell.

Recently I got a comment that posed this question:

In one of my reports there is a prompt to select the Fiscal Year, and the user can select multiple LOVs. The prompt name is “Year”. Say for example the user enters 2011, 2012 and 2013. On using the function UserResponse the report will show 2011;2012;2013

My requirement is to identify the minimum value from the LOVs that the user has entered. In this case the report should show the minimum value as 2011. Can you please guide me on how to achieve this?

Continue reading “Pivot UserResponse() Values Into Rows”


Jul 25 2013

Updated Strategy Renders Schema Change SDK Tool Obsolete

Categories: Universe Design, VBA Tools | Dave Rathbun @ 10:23 am

Many years ago, when I first started working with Teradata, we had a challenge. The DBA team had defined a standard where each table was tagged to indicate the purpose of the table. Specifically, a development table would be called PROJ_D.TABLE_NAME while the equivalent table in production would be called PROJ_P.TABLE_NAME. Why do this? In a Teradata connection I connect to a server, not to a schema (or instance, as Oracle would call it). One of the DBA strategies was to use the QA or “system test” hardware as a preliminary DR (disaster recovery) platform. That means the QA server holds both the system test tables (PROJ_S) and a copy of the production tables (PROJ_P). In order to have specific queries sent to the database I had to include the schema name (or prefix) on every table name in my universe. During DR testing I could reset my schema from PROJ_S to PROJ_P without moving the universe (it’s still on the QA server) and I would then be using the production tables.

While this did provide the advantage of being specific, it also presented a problem during migrations because I had to change my code. I first wrote about this back in 2008 when I shared a utility that I wrote to make this process easier. (Read the details in the Using the Designer SDK to Ease Migrations post.)

With the advent of BI4 the new challenge is that we don’t (yet) have an SDK. At the same time we have become a much more mature Teradata shop. Our DBA team recently introduced a new strategy that eliminates the issue altogether, meaning I don’t have to worry about either of the issues listed above.

New View Layer

Our standard for reporting says that our tools (Business Objects included) never access the source tables. As a result, the naming convention I described above was used on all of the views that I used in my universe. Assume we have views called PROJ_D.TABLE1, PROJ_D.TABLE2, and PROJ_D.TABLE3. In the “old days” I would have to update each of these views whenever I wanted to migrate my universe out of the development environment. Our new strategy seems simple on the surface, probably because it is :), but it solves both issues.

For each and every table that I am using we have now created a “semantic layer” view on top of the base view (which is on top of the base table). So for each of the above tables I now have this:

create view SEM_PROJ.TABLE1
as
select * from PROJ_D.TABLE1

The “PROJ” portion of the schema name relates to the project, so it remains part of the view name. The “SEM” prefix is of course short for “semantic layer” and indicates that this view is used by some sort of external tool. What is missing from the new view name is the location tag (_D for development, _S for system test, or _P for production). Obviously the select clause changes in each environment, but I finally have a consistent view name in all environments. This seems like a very simple solution (and it is) but it took a while for us to get here. We have created the semantic layer views for one project so far, and it’s a real pleasure to be able to migrate a universe without touching it anymore. 8-) I anticipate we’ll be using this strategy for all of our Teradata projects from this point forward.

When I have to reset my universe for a disaster recovery exercise, it’s now up to the DBA team to re-point the semantic layer views to the proper schema. When I migrate the universe I no longer have to touch anything, except perhaps to change a connection. It’s a much cleaner process, and it no longer requires me to wait for the full SDK to become available in BI4.
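As a concrete example (a sketch using the hypothetical names from above and Teradata’s standard REPLACE VIEW statement), the DR re-point is a single statement per view on the database side:

replace view SEM_PROJ.TABLE1
as
select * from PROJ_P.TABLE1

The universe keeps referencing SEM_PROJ.TABLE1 throughout, so nothing on the Business Objects side needs to change.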


Jul 23 2013

Really Cool Stock Market Visualization

Categories: General | Dave Rathbun @ 3:08 pm

I was looking over some financial sites the other day and ran across this visualization based on the stock market. The overall presentation is divided into blocks of companies grouped by sector. The size of each company within its sector is based on its market capitalization, and the color indicates the current stock price trend. Clicking on a sector allows the viewer to “drill” down into the sector data. At the bottom of the frame is a drop-down that lets you specify the time range used to identify the trend, with options for since last close, 26 weeks, 52 weeks, or year to date. While the color (green or red) shows the direction of the stock price move, the intensity of the color shows the percentage of the move. A company like Barrick Gold, which has dropped substantially during 2013 (down ~48%), is fairly bright red, while PepsiCo (up ~25% YTD) is a muted green.

I think this is one of the best uses of this visualization style that I have seen, and decided to share it.

