Mar 21 2012

SAP Insider BI 2012 Presentations Posted

Categories: 2012, BI 2012, Conferences · Dave Rathbun @ 10:21 am

I have updated my conference presentations page with links to download my two sessions from the BI 2012 conference last month. The two sessions were:

Leveraging Report Variables for More Robust SAP BusinessObjects Web Intelligence Reporting
This session explores the role of report variables in SAP BusinessObjects Web Intelligence 4.0 and offers undocumented tips and tricks to exploit them for more creative and efficient ad hoc reporting and data analysis. Walk through system demos to understand what’s required to merge data providers in SAP BusinessObjects Web Intelligence, including tips to leverage the ForceMerge() function to fix unbalanced data providers in a full client document. Obtain best practices for using SAP BusinessObjects Web Intelligence report functions and other context operations to enable calculations at different levels of granularity. Explore workarounds for displaying UserResponse() values on separate rows in a table, and see how this improves the user experience. Get techniques to optimize prompt handling, including insight into whether and how to create a prompt syntax that substitutes “today’s date” for a prompt default value.

The Web Intelligence session was actually delivered with the 3.1 rich client, not 4.0 as mentioned. The conference page lists a number of blog posts related to the content included in that session.
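For a taste of the formula techniques the session covered, here is a minimal sketch in Web Intelligence formula syntax; the object name and prompt text below are hypothetical placeholders, not objects from the actual demo documents:

```
// Force a measure from an unbalanced data provider to aggregate in the
// context of the merged dimensions (emulates the full client multicube
// behavior in a Webi document):
=ForceMerge([Sales Revenue])

// Display the user's answer to a prompt in the report; the string must
// match the prompt text defined in the query exactly:
=UserResponse("Enter Region:")
```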

Universe Design Techniques Proven to Boost Front-End Performance
This session dives deep into universe development and examines when, why, and how to tweak your existing Business Objects universes for optimal report performance — and when you may need to build new ones. Explore proven techniques for extending a universe to ensure more efficient queries, an optimized end-user experience, and more timely and efficient BI operations. Acquire tips to implement index awareness, such as choosing a value from the list of values (LOV) directly within the query panel, rather than using prompts. Learn how to use aggregate awareness to set up complex logic and step through a demo to see how this results in significantly improved performance on the front end. Gain insight into whether and when to leverage shortcut joins to boost query speed. Explore universe design techniques that provide the best performance when pointing to a data source outside your Business Objects system. View detailed demonstrations of various advanced universe design techniques and leave with proven strategies for incorporating them into your own environment.
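At its core, the aggregate awareness technique comes down to the @Aggregate_Aware function in a measure object’s SELECT: Designer uses the first argument when it can, and falls back to later arguments when the query includes objects flagged as incompatible with the aggregate (via the Aggregate Navigation settings). A minimal sketch, with hypothetical table names:

```sql
-- Measure object SELECT: try the summary table first, then fall back
-- to the detail table when the query needs a finer grain
@Aggregate_Aware(
    sum(AGG_SALES_BY_MONTH.REVENUE),
    sum(ORDER_LINES.REVENUE)
)
```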

The Designer session included information that I have posted about on my blog before. However, in going through my older posts to build links to them, I realized that I’ve never really covered the aggregate awareness feature here. I hope to address that in the next few weeks.

Mar 02 2012

BI2012 Wrap Up

Categories: 2012, BI 2012, Conferences · Dave Rathbun @ 1:37 pm

This conference has been flying by, primarily because I’ve been so busy. Today is the last day of the conference and since I woke up early I went to Eric Vallo’s session on high availability. Eric is a very entertaining speaker who also happens to know his stuff. After that I visited with a few folks on the way back to my room where I will be packing to head home as soon as I finish this post.

Wednesday morning I went to Alan Mayer’s session on how to perform a self-service system health check. I had not seen him in quite a while and it’s always good to catch up. If you weren’t aware of this, I used to work with Alan back in the Integra days, and Alan has launched a new venture called Solid Ground Technologies. Alan has been a regular at BI events for even longer than I have, and he always delivers great sessions. This year was no different.

I then spent Wednesday afternoon with Michael Welter preparing for our joint training session; we delivered a 3-hour hands-on session on the semantic layer to end the day. If you can imagine, in three hours we covered setting up a connection, inserting tables, creating joins, building classes and objects, differences between dimensions and details, derived tables, measures, solving loops with aliases, solving loops with contexts, and even fan and chasm traps. Yes, in three hours. Just like running a marathon. ;) We repeated the session on Thursday morning to start what was my busiest day so far.

Thursday, after the repeat of the Designer session, I got to spend the next hour talking about one of my favorite topics: BOB. They scheduled a room for me and allowed me to talk about the origin of BOB and some best practices and tips for using it (how to search, among other things), but as the conversation went along we also talked about some of the challenges of running a large and active online community. I also talked about some of the new features that I hope we will be able to release within a few months. The crowd was small, but it was still a lot of fun for me to be able to talk about our progress over the last ten years. I would have been willing to do the same talk for an audience of one.

After that I had a quick lunch (food here is decent, by the way, and no problems at all finding water throughout the day unlike at another event I attended earlier this year). I went back up to my room to test and then reset all of the demonstration queries for my afternoon session on tuning universes. I covered index awareness (which I have blogged about before), shortcut joins (also a prior topic from this blog), and finished off with aggregate awareness. I will be posting the presentation slides after I get home, and will be adding a blog post (or two) related to aggregate awareness to eliminate that gap. That session was a lot of fun, as it included a lively discussion with audience members as they peppered me with questions throughout the hour.
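For context on what index awareness buys you: it lets Designer swap a filter on a dimension value for a filter on the underlying key column, dropping the dimension table from the query entirely. A hedged sketch of the generated SQL, using a hypothetical schema rather than the actual demo universe:

```sql
-- Without index awareness: CUSTOMER joins in just to evaluate the filter
SELECT SUM(ORDERS.REVENUE)
FROM   ORDERS
JOIN   CUSTOMER ON CUSTOMER.CUST_ID = ORDERS.CUST_ID
WHERE  CUSTOMER.CUST_NAME = 'Acme';

-- With index awareness: the key behind the value chosen from the list
-- of values is substituted directly, so the join disappears
SELECT SUM(ORDERS.REVENUE)
FROM   ORDERS
WHERE  ORDERS.CUST_ID = 123;
```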

One interesting difference between this event and other events where I have presented is the timing: the sessions here are designed to be an hour of lecture followed by a 15-minute question and answer period. As I was preparing my presentations for this event it really helped to know that I could include extra content; so many times I have had to cut out important or useful information just to fit inside a one hour (or even fifty minute) time window. As a presenter I found that to be a nice change, and in the few sessions that I was able to attend I didn’t mind that they ran a little bit longer.

Overall I enjoyed the event. I was told that there are about 1,800 folks attending (although some of those are cross-over registrations, meaning they registered for an event other than BI2012 but their conference pass allows them to attend all of the sessions). The hotel was nice, although it was a long walk to the event location, and the walk included a pass through the smoke-filled casino area. That wasn’t always pleasant, but it’s a part of doing business in Las Vegas.

Finally, an amusing (at least to me) story to end the week: while I am here I am also still trying to keep up with work back home. At one point I was getting extremely frustrated with the hotel Internet access. I had established my connection and started up VPN. I was trying to import a universe, make a few small changes, and then export it again. The problem was the import process kept timing out. I finally was able to get the universe imported and make the changes but was a little hesitant to try to export after the time-out failures from earlier. I didn’t want to risk having the export process interrupted (this is a fairly large universe).

Finally I decided to risk it, and I exported the universe.

The export completed successfully.

Which proves that what happens in Vegas does not, apparently, have to stay in Vegas.


Feb 29 2012

BI2012 Day 1 Wrap Up

Categories: 2012, BI 2012, Conferences · Dave Rathbun @ 10:41 am

Yesterday was a busy day! I flew into Vegas on Monday night, and arrived too late to visit the registration booth. I met Michael Welter and he watched me eat while we talked about plans for the conference. Later I got a chance to visit with Steve Krandel as well.

I got my registration taken care of the next morning with minimal fuss, had a quick breakfast, and then made my way over to the keynote speech, which was to be given by Steve Lucas and Timo Elliott. It was the first time I could remember hearing of them speaking together, and they confirmed that during the keynote itself. Unfortunately I was going to have to leave the keynote early, as Michael and I had to go review the configuration of the laptops in the room for our training class at 9AM.

The topic of the keynote was big data, which is probably not a surprise. SAP has been using the big data concept to push HANA (to his credit, Steve waited until about 10 minutes into the talk before mentioning that product name ;) ). He also took a novel approach to demonstrating the growth of big data. In many sessions like this the presenter focuses on a graph that shows the amount of data being generated every week / day / hour / minute of our lives. That’s all well and good, but it doesn’t mean that I have to process all of that data. Who is really going to analyze all of the bytes of every photo ever uploaded to Flickr, for example? Instead of doing that, Steve showed a graph based on research from LinkedIn and other sources showing the documented growth of big data jobs. These would be jobs with titles like Data Scientist and so on, and there was clear growth in this area over the past few years along with a major spike in 2010. Interesting approach.

Steve then introduced the typical “Three V’s” of big data, those being Velocity, Volume, and Variety. At this point Timo stepped in and suggested that a fourth “V” was equally important, that being Validity. It doesn’t really matter how fast your data comes in, how much data comes in, or how different the data is if it’s not valid. That’s a fair point, and it also serves as a selling point for the data services from SAP, which come with various data validation tools. I don’t think that point was lost on the audience. ;)

Steve then went on to show a list of a variety of open source tools used to ingest, process, store, and then analyze big data. There were quite a few tools that I had heard of before (Hadoop, Voldemort, others) but also some tools that I had not yet heard of. Many of those tools are in use in large enterprises today.

However, these open source tools – as great as some of them are – were not designed to integrate with each other. Now we could see where the talk was going, as Steve showed a slide of how SAP provides tools for the same four steps (ingest, process, store, and analyze) for handling big data sources. This is about when HANA made her first appearance, I think.

Now that HANA was on the table, Steve was able to start talking about real-time processing of big data. He used an analogy that seemed appropriate at the time but later as I started thinking about it, maybe not so much. He showed a picture of a person looking at traffic on a very busy street. That person wanted to cross the street, and was taking in all of the data points including traffic flow patterns, weather, sounds, smells… basically a huge amount of data was being collected as the person was standing on the side of the road.

Steve then said, would you make your decision to cross the road if your data was five minutes old? What about 60 seconds old? Even five seconds?

The point, obviously, was that you needed up-to-the-second, or actual real-time, data in order to make that decision. That’s a fair point in a life or death situation like crossing a very busy street, but I’m not sure it works quite as well when talking about big data. For example, what about the volume of the data flow (one of the three V’s mentioned earlier)? If I have been looking at historical data (say, for the past 24 hours) then I have some idea of what the traffic flow patterns are going to be. There is not nearly as much traffic at 3AM as there is at 9AM, and that’s useful information too. But I am going to maintain that there are times when real-time is important (the speedometer in my car) and times when it’s not as important (the fuel gauge). As long as I know my fuel gauge is updated every fifteen minutes, that’s accurate enough for me to determine when I need to stop for gas. But I don’t want to have to explain to a traffic policeman that I didn’t really know how fast I was going through that school zone since I had just exited the highway… :lol:

The point is, real-time data has a place, but I don’t think that all data has to be real-time data in order to be valuable. (Is that the fifth V?)

At this point I had to leave the keynote, but I know they were webcasting it so perhaps it’s available online for me to go back to later.

Michael and I found that our virtual machine image was working fine, so I think we’re prepared for our training session (which will come later today). I delivered my first talk of the week, about Web Intelligence variables (my first time ever on that subject! :lol: ), which seemed to go well. Later I attended a session given by Timo during which he shared a lot of statistics and case studies about analytics, including one about new paint on big boats that was designed to be too slippery for marine life to attach to, but had the pleasant side effect of making the boats more fuel efficient as well. It was also good to catch up with Timo again; he says he has now been with Business Objects for twenty years. :shock:

At the end of the day I participated in an Ask the Experts session. I shared a table with Tammy Powlas, and we had conference attendees drop by with their questions. The session ran for an hour and a half, and our table always had people there, so I assume we were popular. In fact the last group of folks I talked to went beyond the scheduled closing time and they started to turn out the lights! :lol: Eric Vallo did a drive-by but didn’t stop to chat as the table was full.

So what are my thoughts on the event so far?

Everything has been running smoothly and the folks here seem to appreciate the content. WIS made a specific effort to include a lot more “classic” Business Objects content this year so I’m seeing a lot of familiar faces. (Haven’t mentioned Jamie yet, but he’s here, and I expect to see Chris later today. Also had dinner with Alan Mayer (founder of Integra Solutions and now Solid Ground Technologies) last night.) My schedule has kept me fairly busy, but I hope to attend more sessions in the coming days, specifically around mobility. I’ll be back with details.

First post from a new laptop. Browser doesn’t have spell check, so hope everything came out okay…