Feb 17 2016

“Working As Designed”

Categories: IDT, Rants Dave Rathbun @ 12:19 pm

Those three little words: “Working as designed.” I hate them.

I don’t know why it took me so long to run into this myself, since this topic on BOB brought the issue to light years ago. That being said, here’s what’s going on…

In the Universe Design Tool (UDT) a designer can change the state of an object from visible to hidden without impacting a report. I use this technique to create objects that have a specific purpose but keep them out of general circulation. For example, I will often create a class called “Report Objects” that contains objects with special exception handling or a unique purpose. That class is visible in the development environment only, which means my report developers have access to this class. Before the universe is migrated to the QA (and production) environments that entire class is hidden.

This allows me to create report-specific objects without risking an ad-hoc report using one of them incorrectly. It’s a great technique.

A secondary reason why I use the “hide” option is to retire old objects. I don’t want to break reports, but I don’t want any new reports to be created with these objects. Hiding them means existing reports continue to work.

In Information Design Tool neither of these strategies will work. Once an object is hidden, any report that includes that object fails. :evil: Based on information from SAP, that’s the expected behavior. They acknowledge that it’s a change in behavior, but so far I have not found the reason for the change. The bottom line is that it’s “working as designed” and will not be fixed.

Keep this in mind when you consider converting your UNV universes to UNX: if you’re using the hidden-object trick for any of the reasons I outlined above (or others that I may not have considered), that technique will fail in IDT.


Nov 05 2015

Downgrading from Multi-Source to Single Source

Categories: IDT, Universe Design Dave Rathbun @ 10:01 am

I mentioned a few weeks ago that I had found a way to “downgrade” a multi-source universe to a single-source one. I wanted to create a beautiful blog post with screen shots to walk everyone through the process, but have not managed to carve out the time. :oops: Rather than continue the radio silence, I’m going to write up the steps and hopefully come back later to make this post prettier. I also have a request at the end of this blog post, so please read to the end, and if you can help, please do.

Background

Like many legacy Business Objects customers, I am guessing, we have been slow to dip our toes into the “multi-source” waters offered by the Information Design Tool (IDT). When we did our major upgrade to BI 4 a few years back, we evaluated whether we needed to convert all of our UNV universes to UNX, and decided not to, for a variety of reasons. But for new development we are typically looking at IDT as our universe creation tool. Continue reading “Downgrading from Multi-Source to Single Source”


Oct 12 2015

Yes, Virginia, You Can Switch a Multi-Source Universe Back to Single Source

Categories: IDT, Universe Design Dave Rathbun @ 10:07 pm

What’s the first decision you have to make when creating a new universe with the Information Design Tool (IDT)? You have to specify if you want a single-source or multi-source data foundation. Once that selection is made, it cannot be changed.

Well, sort of.

We had an odd performance challenge with a particular universe. It seemed that when it was created, the developer thought they might want to eventually (perhaps) use multiple sources, so they went ahead and created a multi-source data foundation. But the project never ended up needing a second data source, so for over a year they’ve been using a single-source multi-source universe. (Did you follow that?) As a diagnostic step, I thought about recreating the universe with a single-source data foundation and re-pointing the various reports and dashboards so they would use the new universe. That would have been a lot of work, with no guarantee that it would even have an impact on the issue, much less fix it.

Then I wondered to myself, what if I could figure out a way to “downgrade” the existing multi-source universe to a single source? That way I could still test my theory that our data federator engine wasn’t working as well as it should without having to re-point each report and redo each dashboard.

Spoiler alert… it worked. :) I was able to convert a multi-source universe to a single-source version without impacting the reports and dashboards, and we’ve been running on that version ever since.

How was it done? Well, I’m working on a presentation for the local DFW ASUG chapter meeting this coming Friday, and once I have that done I’ll post the details here as well. You’ll just have to wait a few days. :P

Update: I won’t be presenting at the DFW chapter meeting after all, but I will still be posting this solution soon, hopefully next week.

Secondary note: I have also done the process in reverse… converted a single-source universe to a multi-source version, though with significantly more work involved. If you have a large number of reports, however, it may be easier to rework the universe than to re-point every report. Time will tell if I’m successful or not…


Aug 29 2013

BI4 UNV Versus UNX … Which Do You Choose?

Categories: IDT, Universe Design Dave Rathbun @ 7:54 am

When SAP released BI4 several years ago, it featured a major upgrade to one of the core technologies used by Business Objects since the beginning of the company: the universe. What does this mean for you, and how does it impact your plans to move forward with the latest and greatest offering from SAP? Many of you know that I currently work for a fairly large company, and large companies are often slower to adopt new technologies as they’re released. I have not talked much about BI4 on my blog yet, primarily for that reason. However, we’ve had over a year to review the new Information Design Tool (IDT) and the BI4 .UNX format, and I’m finally ready to share some thoughts. Continue reading “BI4 UNV Versus UNX … Which Do You Choose?”


Dec 08 2011

Why Context Matters: Blizzard Is More Than Weather

Categories: Products, Text Analytics Dave Rathbun @ 12:44 pm

I was checking the weather this morning and noticed that weather.com now offers a social media component on their web site. It seems that if I am so inclined, I can see what other folks in my area are saying about the weather. Without doing much (well, any) research, I am guessing that they’re simply looking at the location information that can optionally be provided on tweets and then scanning for certain weather-related keywords.

Here’s a screen shot of the fail I noticed. Have a look at some of the tweets.

[screen shot of weather tweets]

How many of them are about the weather versus something else?

First I see a person from Garland (not far from me) who is tweeting what appear to be various national headlines, including one about the real estate situation in Florida. Apparently there are “clouds on the horizon.” Does that have anything to do with weather in my area? :lol: No, but it does include the keyword “clouds.”

I like the next example even more. We certainly have clouds here in Texas, but I can’t remember the last time we had a blizzard. Yet someone from Lewisville, Texas, just a few miles up the road from me, is tweeting his disappointment about being left out of (again, I’m assuming) a beta program from the game company Blizzard Entertainment for the next incarnation of their Diablo game series. Yes, there is a weather-related keyword in that tweet, but would it not make sense to tie keywords to geographical areas? The odds of having a blizzard in Texas (the weather kind, at least) are slim.

This is part of what makes text analytics so difficult. Several years ago Business Objects purchased Inxight, a company that delivers text analytics; that product is now part of the Data Services product line. It would be interesting to see if it has a feature that would let me tie geo-location data to keywords so that I could discount tweets mentioning blizzards in Texas, or hurricanes in Alaska.
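For what it’s worth, the geo-aware discounting I’m wishing for could be sketched very roughly like this. This is a toy illustration only; the keyword list, the region table, and the `weather_score` function are all invented here, and a real text analytics engine would be far more sophisticated:

```python
# Toy sketch: discount weather keywords that are implausible for a location.
# The keyword set and "plausible events per region" table are made up.

WEATHER_KEYWORDS = {"blizzard", "clouds", "hurricane", "rain"}

# Weather events considered plausible for a region (invented data).
PLAUSIBLE = {
    "texas": {"clouds", "rain", "hurricane"},   # blizzards are rare here
    "alaska": {"clouds", "rain", "blizzard"},   # hurricanes are not
}

def weather_score(tweet, region):
    """Count weather keywords in a tweet, ignoring ones implausible for the region."""
    words = {w.strip(".,!?").lower() for w in tweet.split()}
    hits = words & WEATHER_KEYWORDS
    # Unknown regions fall back to counting every weather keyword.
    return len(hits & PLAUSIBLE.get(region, WEATHER_KEYWORDS))

# The Lewisville tweet mentions Blizzard (the game company), not a snowstorm:
print(weather_score("No Diablo beta invite from Blizzard again", "texas"))  # 0
print(weather_score("Heavy rain and clouds over Dallas today", "texas"))    # 2
```

A naive scanner would flag both tweets; tying the keyword to the region’s climate discounts the Diablo complaint while still catching the genuine weather tweet.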

Ultimately it comes down to context. When I read those tweets, I can immediately see that they’re not really talking about the weather, even if they do have weather-related keywords. Apparently it’s still challenging for software to do the same thing. Then again, it’s hard enough to predict the weather correctly, so maybe I can forgive them a few errant tweets. ;)


Oct 05 2011

Web Intelligence on iPad – Not Ready For Prime Time Yet

Categories: MOBI Dave Rathbun @ 1:51 pm

Earlier this year I got to see Web Intelligence 4.0 documents on an iPad. It was one of my few tweet-worthy nuggets that I shared from the BI 4.0 launch event. A few weeks ago I started building out some reports for a co-worker so we could experiment with MOBI and see just how well it worked in our environment. Along the way I picked up a copy of the “known limitations” document from SAP. Some of them are big, perhaps even show-stoppers for many folks. Continue reading “Web Intelligence on iPad – Not Ready For Prime Time Yet”


Sep 27 2011

HANA Like An iPod? More Like A Digital Camera…

Categories: General, HANA Dave Rathbun @ 9:51 am

Timo Elliott published a great blog post this morning:

Why In-Memory Analytics is Like Digital Photography: An Industry Transformation

Timo is an avid photographer as well as a BI evangelist, and in this post he combines his knowledge of both, making some excellent points along the way. It’s well worth hopping over to his blog to check it out.


Jul 26 2011

HANA – By Any Other Name

Categories: HANA Dave Rathbun @ 2:02 pm

There has been some confusion around HANA the product, but also around HANA the name. Originally it was an acronym, but it isn’t anymore, as detailed in this blog post from SCN that clarifies just what HANA means as far as the product name goes.


Jul 19 2011

Still More HANA: Report from DFW ASUG Chapter Meeting

Categories: ASUG Chapters, HANA Dave Rathbun @ 8:58 pm

A few weeks ago I attended the quarterly meeting for the Dallas/Fort Worth ASUG chapter. I didn’t get to stay for the entire day, but I did get to hear the keynote by Dr. Jeffrey Word about HANA. The talk was less about the technical aspects of HANA and more about the genesis of the idea. He started with a very interesting comparison. It seems that HANA is SAP’s iPod. Continue reading “Still More HANA: Report from DFW ASUG Chapter Meeting”


Jun 15 2011

SAPPHIRE 2011 Wednesday Keynote – HANA, HANA, and More HANA

Categories: 2011 Annual Conference / SAPPHIRE, HANA Dave Rathbun @ 12:14 pm

Author Note: I realize that SAPPHIRE is old news by now, but I felt this post still had enough to offer that I would finish and publish it.

As a technical guy myself, I tend to prefer the SAP Business Objects conference or SAP TechEd over SAPPHIRE, mostly because I find more technical content at those events. However, the Wednesday keynote address from Vishal Sikka and Hasso Plattner of SAP certainly gave me plenty to chew on from a technical perspective.

Vishal Sikka

Vishal kicked off the keynote talking about HANA, and continued that theme throughout his entire (long!) presentation. In a prior post about the conference I answered the question, “what is HANA, exactly?” very simply: HANA is a database. It can be presented in a number of different ways, but ultimately that’s the function HANA provides. Installing HANA by itself doesn’t provide new functionality. In order to do anything with it, I need what I have started calling “HANA Plus One” instead. The “plus one” can be Web Intelligence, Xcelsius, or any other query tool. It can also be application code. HANA is an accelerator or an enabler. With HANA I can do the same things I did before, but much faster. Or quite possibly I can now do something I wasn’t able to do before because the process took too long. (True story: A very long time ago I was asked to optimize a daily report that was taking 20+ hours to run. By the time the report finished, it was too late to be useful. With a few report tweaks and one additional database index I got the report down to 20 seconds.) Continue reading “SAPPHIRE 2011 Wednesday Keynote – HANA, HANA, and More HANA”
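That last anecdote is really about indexing. The original report and its SQL aren’t shown here, but the effect is easy to reproduce in miniature. Here’s a hedged sketch using SQLite with an invented `sales` table (all table, column, and index names are made up for illustration):

```python
import sqlite3

# Invented example (nothing to do with the actual 20-hour report):
# show how one additional index turns a full table scan into an index search.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [(i, "TX" if i % 100 == 0 else "OK", float(i)) for i in range(50_000)],
)

query = "SELECT SUM(amount) FROM sales WHERE region = 'TX'"

# Without an index, SQLite has to scan every row to find the matches.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[-1]
print(plan_before)  # e.g. "SCAN sales" (exact wording varies by SQLite version)

# One index later, the same query seeks directly to the matching rows.
conn.execute("CREATE INDEX idx_sales_region ON sales (region)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[-1]
print(plan_after)  # e.g. "SEARCH sales USING INDEX idx_sales_region (region=?)"
```

The scan-to-seek change is what took that report from hours to seconds; in-memory databases like HANA attack the same problem from a different direction.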

