Items filtered by date: July 2015

A recurring point of discussion as I visit our customers is the role of traditional printed reports in business intelligence. Like most BI vendors, we have always delivered a traditional report designer as one option for visualising the intelligence DataPA OpenAnalytics generates. However, for a good number of years we've concentrated our development efforts on dashboards, and on ways of delivering them to an increasing array of devices. Our reasons are simple. We believe that pretty much any business function is better supported by a live, interactive visual display of information than by a static printed document.

So I’m always surprised at how many of our customers, even new customers, still rely heavily on our traditional report designer for their business functions. In the last few months, as I’ve visited and spoken with our customers, I’ve begun to ask why.

What's clear is that in almost all cases the decision to choose a report is based more on habit than on any clear, reasoned argument. For example, a common response is the need to take a report to a meeting for discussion. But surely the same information on a tablet, where you can explore the data behind the figures with colleagues, would be more useful?

Today, with the proliferation of mobile devices and internet connectivity, there are very few situations where static printed documents are a better solution than visual, interactive dashboards delivered to our desktop or mobile devices. As a rule, I would suggest that if there is a legal reason to share or print a document, a report is appropriate; otherwise, why not consider a dashboard that can deliver live intelligence pretty much anywhere?

For our part, whilst we’ll continue to support our customers who choose reports, we’ll focus our efforts on developing dashboards that deliver live, interactive intelligence wherever and whenever it’s required.

There has been a lot of discussion lately about how data lakes will transform analytics, giving us access to a huge volume of data with a variety and velocity rarely seen in the past. For those of you who don't spend your days trawling analytics or big data blogs, the concept of a data lake is simple.

With a traditional data warehouse, the repository is heavily structured, so all the work to convert the data from its raw structure has to happen before the data enters the repository. This makes it expensive to add new data sources and limits the analytics solution so that only known, pre-determined questions can be asked.

Storage platforms like Hadoop, by contrast, are designed to store just about any data, in its raw state, with little cost or effort. As a result, it becomes cost effective for organisations to store pretty much everything, on the off chance it might be useful at a later date.
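To make the contrast concrete, here's a minimal sketch of the "schema-on-read" idea behind a data lake: raw records are stored exactly as they arrive, and structure is only imposed at query time, for the question being asked. The record shapes and field names here are invented purely for illustration, not taken from any real system.

```python
import json

# Raw events land in the "lake" exactly as they arrive, with no
# up-front schema. (A stand-in for files in HDFS or an object store;
# the field names are hypothetical.)
raw_lake = [
    '{"user": "alice", "action": "login", "ts": "2015-07-01T09:00:00"}',
    '{"user": "bob", "action": "purchase", "amount": 42.5}',
    '{"sensor": "t1", "reading": 21.7}',  # a completely different shape
]

def users_who_did(lake, action):
    """Apply structure at read time: keep only records that happen to
    carry the fields this particular question needs, and skip the rest."""
    users = []
    for line in lake:
        record = json.loads(line)
        if record.get("action") == action:
            users.append(record["user"])
    return users

print(users_who_did(raw_lake, "login"))  # ['alice']
```

A data warehouse would reverse this: every record would be forced into a fixed schema (or rejected) before being stored, so the sensor reading above could never have entered the repository without up-front integration work.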

The advantage from an analytics perspective is that a data lake gives access to a much vaster and richer source of data, facilitating data mining and data discovery in a way that is simply impossible with a data warehouse. The disadvantage is that the lack of structure poses real challenges for performance, data governance and providing the context within which less technical users can be self-sufficient.

These challenges need to be met by those of us who design and build analytics solutions. Here at DataPA, we've spent years building a platform that facilitates data governance and context in a live data environment. With our technology and experience, there are few companies better placed to take advantage of this new opportunity. Like most new developments, data lakes will not be a silver bullet that solves every analytics requirement. However, we do think they have a significant part to play in the future of analytics, and we can't wait to see what opportunities they bring for us and our customers.