Building an analytics platform is about more than extracting data from your business application; that data also needs to be transformed into information that is meaningful for the user. Traditionally this is part of the process of engineering your data warehouse: deciding what data marts are required and writing code to transform the raw data from the business application into these more meaningful data objects. It’s an expensive and difficult process, requiring significant engineering skill and knowledge, and it is almost impossible to get right first time.

While this approach does have advantages, and there will always be a place for data warehouses in analytics, they can be cumbersome, expensive and a real barrier to agility. It also ignores the fact that more often than not you have already built the logic to transform data, often multiple times, in the application itself. Take one of our biggest partners, Infor. The distribution industry survives on margins, so flexibility in the calculation of pricing is paramount. That’s one of the reasons Infor’s distribution system is so popular: it has a hugely flexible pricing module. This means, however, that calculating prices is complex and requires the application of a significant amount of business logic. That logic has already been written, of course, in a stable and robust ABL module that is called from reports and screens across the application. Surely, then, our analytics solution should make it simple for us to call that logic directly, not force us to re-engineer and maintain it on another platform elsewhere?

We think that integration of business logic at the back end is essential to a really successful embedded analytics solution. Not only does it improve agility and save time and effort, it also ensures the business application and the analytics platform show the same figures, calculated as they are by the same code. That’s why we’ve engineered DataPA OpenAnalytics to natively call any ABL business logic at the back end, either by calling ABL functions to add calculated values, or by going further and running ABL code to return the entire dataset. We believe in empowering our partners to unlock the hugely valuable asset they have in their robust ABL logic, allowing them to deliver beautiful, live intelligence when and where their customers need it.
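To give a flavour of what a reusable calculated value looks like, here is a minimal sketch of the kind of ABL function an application might already contain and that an analytics layer could call. The function and parameter names are hypothetical, not Infor’s or DataPA’s actual code, and a real pricing module would layer many more rules than this:

    /* Minimal sketch of reusable ABL pricing logic.
       calcNetPrice, pListPrice and pDiscountPct are invented names. */
    FUNCTION calcNetPrice RETURNS DECIMAL
        (INPUT pListPrice   AS DECIMAL,
         INPUT pDiscountPct AS DECIMAL):

        /* Apply a simple percentage discount to the list price. */
        RETURN pListPrice * (1 - pDiscountPct / 100).

    END FUNCTION.

Because the same function is called from the application’s screens, reports and the analytics platform, every consumer shows the same figure.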

If you would like to find out more about our partner program, please get in touch. 

A couple of weeks ago, our CEO Gary had a new heating system fitted, with the new tado° smart thermostat. It’s pretty impressive, carefully reducing the temperature to an optimal level as soon as the last person has left home and calculating how to pre-warm the home most efficiently for your return. It also monitors the weather, and tado° claims it cuts overall heating costs by 31%. Even more interestingly, earlier this year tado° announced an open API. This got us thinking.

Our developers are not known for their ability to focus on the mundane when a new bit of kit is available in the office, so true to form a couple of them dropped what they were doing and started hunting the web for details on how to access the API. A little bit of fiddling about at the back end and they had it hooked up to DataPA OpenAnalytics. Now we can all keep an eye on the temperature of Gary’s house online and via the DataPA OpenAnalytics mobile app (take a look).
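For the curious, the hookup amounts to little more than an HTTP GET from ABL. Here is a rough sketch, assuming the OpenEdge.Net.HTTP client introduced in OpenEdge 11.5; the URL shown is a placeholder rather than tado°’s documented endpoint, and authentication is omitted for brevity:

    USING OpenEdge.Net.HTTP.*.
    USING Progress.Json.ObjectModel.*.

    DEFINE VARIABLE oClient   AS IHttpClient   NO-UNDO.
    DEFINE VARIABLE oResponse AS IHttpResponse NO-UNDO.
    DEFINE VARIABLE oState    AS JsonObject    NO-UNDO.

    /* Fetch the current zone state as JSON. The URL is a placeholder,
       and the authentication the real service requires is omitted. */
    oClient   = ClientBuilder:Build():Client.
    oResponse = oClient:Execute(
                    RequestBuilder:Get("https://my.tado.com/api/v2/homes/1/zones/1/state"):Request).

    /* For a JSON response the entity can be cast to a JsonObject,
       ready to be mapped into a dataset for a dashboard. */
    oState = CAST(oResponse:Entity, JsonObject).
    oState:WriteFile("zone-state.json", TRUE).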

This is all very entertaining, but it does illustrate a more useful point. The combination of IoT devices and analytics is set to change our world in many unexpected ways. An analytics tool like DataPA OpenAnalytics will quite happily accumulate data from these devices, display it in any imaginable format on any device, and raise alerts should any interesting threshold be met. Gary can keep track of his heating system and receive an alert should the temperature reach a value that indicates an issue. With a few more IoT devices in the home, he could combine that with information on his fuel consumption and the cost of heating fuel. Surely this will offer opportunities to further improve the efficiency of heating his home; perhaps in months to come we’ll see companies offering a monitoring and tuning service.
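The alerting logic itself is conceptually no more than a rule evaluated against each new reading. A hypothetical sketch in ABL; in DataPA OpenAnalytics such alerts are configured in the product rather than hand-coded like this:

    DEFINE VARIABLE dTempC AS DECIMAL NO-UNDO.

    dTempC = 3.5.  /* latest reading from the thermostat feed */

    /* An occupied or pre-warming house should never drop this low;
       flag a possible heating failure. The threshold is invented. */
    IF dTempC < 5 THEN
        MESSAGE "Heating alert: temperature is" dTempC "degrees C"
            VIEW-AS ALERT-BOX.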

It’s clear these devices and modern analytics tools are opening up a world of opportunity for new innovation. Our errant developers have now been tasked with building an interface to make it simple for anyone to hook up DataPA OpenAnalytics to these devices and services without having to tinker with the back end. We can’t wait to see where this leads us. 

If this is the first you’ve heard about DataPA OpenAnalytics, why not find out more at datapa.com.

Our Enterprise server offers the perfect platform to pull data from an OpenEdge app and distribute it as actionable intelligence to any device across the web. Many of our customers already use it by hosting a server themselves. However, for smaller organisations the cost of hosting and maintaining their own server makes it a much less compelling offering. Which is a shame, because often these customers would benefit the most from our technology. So, for some time now, we’ve been pondering the best way to offer DataPA OpenAnalytics as a hosted service. The hard part was how to connect to the largely on-premise OpenEdge databases securely and efficiently across the web.

The AppServer Internet Adaptor (AIA) was the obvious answer, but it has always been time-consuming and difficult to implement, which rather defeats the object of offering a hosted solution.

So it was, a little over a year ago, that we became pretty excited when we first heard of the Pacific AppServer. Built on Apache Tomcat, it was billed to offer a self-contained, stand-alone AppServer that could be easily deployed to any platform supported by OpenEdge. This sounded like the perfect solution. Roll forward a few months and DataPA OpenAnalytics now includes native support for the Pacific AppServer, and we’ve spent a few weeks testing it out. Taking the simple and secure connectivity across the web as a given, the most exciting benefit from our perspective is the performance. We saw queries on average take less than half the time they took via the AIA.
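From the client side the difference shows up in little more than the connection parameters. A sketch; the host, port and procedure name are placeholders:

    DEFINE VARIABLE hServer AS HANDLE  NO-UNDO.
    DEFINE VARIABLE lOk     AS LOGICAL NO-UNDO.

    CREATE SERVER hServer.

    /* Reaching a classic AppServer over the web meant routing through
       the AIA, e.g. "-URL http://host/aia/Aia1", with a web server and
       servlet engine to install and secure in front. The Pacific
       AppServer exposes its apsv transport directly (host and port
       here are placeholders): */
    lOk = hServer:CONNECT("-URL http://pas.example.com:8810/apsv").

    IF lOk THEN
        RUN fetchData.p ON SERVER hServer.  /* hypothetical procedure */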

So we have to give a big thumbs up to Progress. In our opinion the Pacific AppServer could be the biggest game changer the OpenEdge platform has seen for some time. For us, it means in the coming months we can offer companies with an on-premise OpenEdge app a way of securely distributing their information to any device anywhere, with very little effort or cost. We’re looking forward to watching what others make of it.

Gone are the days when analytics solutions were simply a window on our business applications. With the proliferation of mobile technologies and the introduction of alerting, analytics solutions can now offer huge opportunities for change within an organisation, driving business agility and changing the working landscape forever. However, to take part in this revolution, your chosen analytics solution must have two very important attributes.

First and foremost, it must be real time. It’s no good alerting a sales representative to an upselling opportunity ten minutes after the customer has left. That is why your analytics solution has to be able to process analytics directly against your operational data. Any solution that requires data to be moved off platform will always introduce a delay, and degrade the solution’s ability to provide true real-time responsiveness.

Secondly, the solution must be truly self-service. Business agility is about adapting to change, and nothing stifles an organisation’s ability to adapt more than the combination of over-prescriptive information technology and over-burdened IT departments. An analytics solution should allow anyone in the organisation to ask new questions and configure new alerts quickly and easily, in a single platform, without having to configure several layers of technology.

At DataPA, we believe analytics should empower employees to adapt their working practice as and when they choose, driving business agility and competitiveness. If you’ve never looked at DataPA OpenAnalytics, why not come by our website and take a look.

Workflow is a key component for any legal practice – increasing efficiencies, improving customer service and coping with evolving regulatory requirements – which is why legal case management has always been a key focus area for software developers.

The market leader in legal case management is Lexis® Visualfiles from LexisNexis. The Visualfiles “toolkit” allows organisations to expand the standard solution, adding their own entities and workflows to match any business requirement, automating even the most complex processes. Today, Visualfiles is the most widely used case and matter management system in the UK with more than 25,000 registered users in firms ranging from 5 to well over 1,000 employees.

This “ultimate flexibility” made it a particular challenge for LexisNexis to provide an embedded analytics solution to their customers. For most business applications, the process of transforming raw data in the database into meaningful information for an analytics solution is the same for every customer. Everyone uses the same system, so it can be understood, designed and implemented once for all customers. With Lexis Visualfiles, however, this is not the case. The unique power of Visualfiles allows each customer to evolve their system, and by definition the underlying data set, to match their specific business needs. Whilst this provides fantastic flexibility to ensure the system evolves as the business develops, it creates a huge challenge for analytics.

However, at DataPA we understand that application developers have already designed and implemented this transformation process; otherwise the business application would be of little use. We believe developers should be able to reuse their valuable business logic assets for analytics, not be forced to re-engineer them for another platform. So with DataPA OpenAnalytics, the LexisNexis development engineers were able to reuse their existing OpenEdge ABL code to deliver beautiful, accurate, live analytics embedded seamlessly into their application.
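In practice, that kind of reuse can be as simple as wrapping existing logic in a procedure that returns a temp-table, which the analytics engine then consumes as a dataset. A sketch with invented table and field names, not LexisNexis’s actual code:

    /* Hypothetical sketch: existing case-management logic wrapped in
       a procedure that hands a temp-table back to the analytics engine. */
    DEFINE TEMP-TABLE ttCaseSummary NO-UNDO
        FIELD CaseId    AS INTEGER
        FIELD FeeEarner AS CHARACTER
        FIELD DaysOpen  AS INTEGER.

    PROCEDURE getCaseSummary:
        DEFINE OUTPUT PARAMETER TABLE FOR ttCaseSummary.

        /* A real system would populate this by calling the same
           business logic the screens use; one hard-coded row
           stands in here for illustration. */
        CREATE ttCaseSummary.
        ASSIGN ttCaseSummary.CaseId    = 1001
               ttCaseSummary.FeeEarner = "J. Smith"
               ttCaseSummary.DaysOpen  = 14.
    END PROCEDURE.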

The result is the best of both worlds – a powerful business solution married to sparkling analytics – so everyone wins.

If you have equally valuable business logic developed in OpenEdge, why not talk to us today to find out how you can leverage this valuable asset to deliver beautiful, live intelligence to mobile and web.

Look up pretty much any survey on IT priorities over the last few years, and analytics is consistently number one. Not only that, the number of decision makers reporting that it is their highest priority is growing year on year.

Why? Well, we think the major reason is that modern analytics offers real disruptive change for organisations, rather than just the iterative efficiencies afforded by traditional BI. This change has been driven by the shift from passive reporting to technology that allows users to actively discover, share and collaborate on insight about their organisation.

In our presentation at PUG Challenge EMEA in Copenhagen next week, we’ll explore these ideas further and show how some of our customers are using our technology to radically change how they do business. Here at DataPA we believe this is just the start of a hugely exciting cycle of innovation in analytics, offering huge potential for us and our partners. We’ll also discuss the innovations we’re introducing to our software in the next few months and beyond. Innovations that we’re convinced will keep us and our partners at the forefront of this revolution. We’d love you to join us.


A recurring point of discussion as I visit our customers is the role of traditional printed reports in business intelligence. Like most BI vendors, we have always delivered a traditional report designer as one option for visualising the intelligence DataPA OpenAnalytics generates. However, for a good number of years we’ve concentrated our development efforts on dashboards, and on ways of delivering them to an increasing array of devices. Our reasons are simple. We believe that pretty much any business function can be better supported with a live, interactive visual display of information than with a static printed document.

So I’m always surprised at how many of our customers, even new customers, still rely heavily on our traditional report designer for their business functions. In the last few months, as I’ve visited and spoken with our customers, I’ve begun to ask why.

What’s clear is that in almost all cases the decision to choose a report is based more on habit than on any clear, reasoned argument. For example, a common response is the need to take a report to a meeting for discussion. But surely the same information on a tablet, where it is possible to explore the data behind the figures with colleagues, would be more useful?

Today, with the proliferation of mobile devices and internet connectivity, there are very few situations where static printed documents are a better solution than visual, interactive dashboards delivered to our desktop or mobile devices. As a rule, I would suggest that if there is a legal reason to share or print a document, a report is appropriate; otherwise, why not consider a dashboard that can deliver live intelligence pretty much anywhere?

For our part, whilst we’ll continue to support our customers who choose reports, we’ll focus our efforts on developing dashboards that deliver live, interactive intelligence wherever and whenever it’s required.

There has been a lot of discussion lately that data lakes will transform analytics, giving us access to a huge volume of data with a variety and velocity rarely seen in the past. For those of you who don’t spend your days trawling analytics or big data blogs, the concept of a data lake is simple.

With a traditional data warehouse, the repository is heavily structured, so all the work to convert the data from its raw structure must happen before the data enters the repository. This makes it expensive to add new data sources and limits the analytics solution so that only known, pre-determined questions can be asked.

Object store repositories like Hadoop are designed to store just about any data, in its raw state, with little cost or effort. As a result, it becomes cost effective for organisations to store pretty much everything, on the off chance it might be useful at a later date.
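To make the contrast concrete, this is the shift from schema-on-write to schema-on-read: structure is applied when the data is queried, not when it is stored. In ABL terms, for example, a raw JSON file can sit untouched in the lake and only be given shape at the point of use. A sketch; the file and property names are invented:

    USING Progress.Json.ObjectModel.*.

    DEFINE VARIABLE oParser  AS ObjectModelParser NO-UNDO.
    DEFINE VARIABLE oReading AS JsonObject        NO-UNDO.

    /* The file was stored exactly as it arrived; structure is
       imposed here, at read time, not when it was written. */
    oParser  = NEW ObjectModelParser().
    oReading = CAST(oParser:ParseFile("sensor-readings.json"), JsonObject).

    MESSAGE oReading:GetDecimal("temperature") VIEW-AS ALERT-BOX.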

The advantage, from an analytics perspective, is that a data lake gives access to a much vaster and richer source of data, facilitating data mining and data discovery in a way that is simply impossible with a data warehouse. The disadvantage is that the lack of structure creates real challenges for performance, for data governance, and for providing context within which less technical users can be self-sufficient.

These challenges need to be met by those of us who design and build analytics solutions. Here at DataPA, we’ve spent years building a platform that facilitates data governance and context in a live data environment. With our technology and experience there are few companies better placed to take advantage of this new opportunity. Like most new developments, data lakes will not be a silver bullet that solves every analytics requirement. However, we do think they have a significant part to play in the future of analytics, and we can’t wait to see what opportunities they bring for us and our customers.