The Great User Challenge
You’ve drunk the Kool-Aid, caught the dream, understood the vision. After many months of consideration and research, brainstorms and meetings with stakeholders, you’re ready to launch into analytics and bring your organisation into the Information Age. You know it’s going to take time, but you’ve invested in your leadership team and are building a data-driven culture, backed by a cohesive data-driven team. Funds are set aside for the change, and you’ve become one of the agents for positive change in a data-driven world.
So now you ask: Where to start?
Today’s post is about answering this question, and I’m going to look at three areas: the user concept, the user framework and the technical aspects of building such a system.
The User Concept
As always, the start of the answer is a discussion about the concept of Information Management and Analytics, both of which feed Business Intelligence.
Conceptually, Information Management is about people. In fact, if you abstract it a bit further, information itself is about people. From a business perspective, all information is gathered, analysed and distributed to increase the efficiency and effectiveness of the business. Seen this way, the discussion quickly becomes about how a user can measurably benefit an organisation, rather than how a user is a problem to be solved.
To further distil this concept, I offer this definition of a user:
Any person who regularly interacts with an information system.
There is no distinction here between internal and external users, nor is there any reference to automated trawlers of an information system (also known as bots). This is deliberate, as an information system should be built to provide a seamless connection between customers, clients, employees and managers. When any of these groups (or malicious users) choose to use automated methods to trawl information, the information system should be capable of handling this.
For many organisations, this concept is not new. It’s been used successfully in customer service and product selection, and is a straightforward application of the laws of supply and demand. However, the application of this concept to information systems is almost unheard of.
Consider this: almost every organisation in the world (outside of tech paradises like Silicon Valley and New York) uses Microsoft Word, Excel and PowerPoint to store, write and analyse information in various formats. Even when an organisation uses a backend system like SAP to start combining its information stores, the results are often copied and pasted out of the ‘Datamart’, transferred into one of these applications and then shared via email.
Take a few moments to consider the time inefficiencies this introduces into the workflow. Each point of interaction introduces delays, replication and errors, all of which are considered necessary by the users of the system to do their jobs. As multiple departments get involved, the situation continues to get worse. Before you know it, people are creating spreadsheets to store their own little information empires (I call them spreadmarts), introducing macros…the list goes on.
The User Framework
In contrast, today I present a more excellent way. I call it ‘The User Framework’.
The User Framework is based around three assumptions:
There are some interesting points to be drawn from this:
Firstly, any information system which uses this framework needs extra storage capacity. This is not actually a problem, and I’ll cover why in the technical part of this post.
Secondly, such an information system will change over time. Tracking what a user does and then analysing it to find efficiencies can produce some pretty radical results. For instance, when you find that a user is accessing a particular piece of information daily, you can drastically increase efficiency by surfacing that information sooner, reducing search time. That’s not to mention the efficiencies gained by being able to preprocess results.
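As a minimal sketch of that idea, here is how an access log could be tallied to find surfacing candidates. The log format, names and threshold are my own invention for illustration, not part of any particular product:

```python
from collections import Counter

# Hypothetical access log: (user, resource) pairs an information
# system might record each time someone looks something up.
access_log = [
    ("alice", "monthly_sales"),
    ("alice", "monthly_sales"),
    ("alice", "staff_roster"),
    ("bob", "monthly_sales"),
    ("alice", "monthly_sales"),
]

def surfacing_candidates(log, threshold=3):
    """Return (user, resource) pairs accessed at least `threshold` times;
    these are the lookups worth pre-loading or pinning to a dashboard."""
    counts = Counter(log)
    return [pair for pair, n in counts.items() if n >= threshold]

print(surfacing_candidates(access_log))
# alice hits monthly_sales three times, so that pair is surfaced
```

In a real system the log would live in a database and the analysis would run on a schedule, but the principle is the same: the system watches its own usage and adapts.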
Thirdly, this kind of information system creates an iterative improvement loop, getting better and better over time. This is important, as it allows an organisation’s competitive advantage to continue to improve.
This kind of system is incredibly empowering for organisations. Over time and with the right team, the quality of data will improve, as will the analytics being performed. Furthermore, as the analytics team goes deeper into process flows and outcomes, the requirements for information will be refined, leading to a higher quality of business intelligence.
A post like this would not be complete without some mention of the technical aspects of such a proposal. I will state up front that I receive no commission from these mentions; these are just tools I have used and/or know and am impressed with.
In mentioning these things, I want to be clear that each organisation will take a different path to Information Management. I’ve worked with companies who outsource almost all of this, through to ones who choose to do it entirely in house. I have also personally worked on building aspects of these systems, and it takes a while. As such, the mix of products you choose will be individual - however, I also believe the products mentioned below will meet almost every organisation’s needs.
I have also witnessed some pretty poor advice being given by software/hardware vendors in the past. Any organisation or IT staff member who claims that increasing storage capacity or server power is difficult is not telling the truth. If you’re being told this, find a good company to help you (like mine)!
Storage - Red Hat
This is becoming more and more of an issue. Many organisations are stuck on proprietary systems which charge astronomical amounts of money for adding storage and server capability. In my honest opinion, a better way forward is to look into the many products Red Hat offers. Not only is their code open source, they have a long history of migrating legacy programs with no loss of data.
Web Backend - Django or Ruby on Rails
Most organisations around the world are starting to realise that the most efficient way to create this kind of information system is using an internal web server of some kind. Both Django and Ruby on Rails are open source with massive communities.
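A full Django project needs more scaffolding than fits in a post, so as a language-level sketch of the internal web server idea, here is a tiny WSGI app using only Python’s standard library (the endpoint name and report payload are invented for the example):

```python
import json

# Invented payload standing in for data the backend would pull
# from a central store rather than an emailed spreadsheet.
REPORT = {"monthly_sales": 1250, "open_tickets": 7}

def app(environ, start_response):
    """Minimal WSGI application serving one JSON report endpoint."""
    if environ["PATH_INFO"] == "/report":
        body = json.dumps(REPORT).encode("utf-8")
        start_response("200 OK", [("Content-Type", "application/json")])
        return [body]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]

# Exercise the app directly (no network needed). In production,
# wsgiref.simple_server.make_server("", 8000, app).serve_forever()
# would host it; Django or Rails add routing, auth and templating
# on top of this same request/response idea.
def _start(status, headers):
    print(status)

body = app({"PATH_INFO": "/report"}, _start)[0]
print(json.loads(body.decode("utf-8"))["monthly_sales"])
```

The point is not the code itself but the shape of the solution: one canonical data source, served over HTTP to everyone, instead of copies multiplying through inboxes.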
Analytics Language - Python
There’s a lot of debate about this in the analytics world. My personal preference is Python, simply because it’s so broad. Furthermore, Python integrates seamlessly with Red Hat and Django, which for an end-to-end solution is pretty powerful.
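To illustrate that breadth, here’s a small sketch using only the standard library: the kind of departmental aggregate that usually lives in a spreadmart, done in a few reproducible lines. The column names and figures are invented for the example:

```python
import csv
import io
import statistics

# Invented sample standing in for an export from a backend system.
raw = """department,processing_days
Finance,4
Finance,6
Logistics,9
Logistics,11
Finance,5
"""

def mean_by_department(csv_text):
    """Average processing time per department, grouped from raw CSV rows."""
    groups = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        groups.setdefault(row["department"], []).append(float(row["processing_days"]))
    return {dept: statistics.mean(vals) for dept, vals in groups.items()}

print(mean_by_department(raw))
# {'Finance': 5.0, 'Logistics': 10.0}
```

Unlike a spreadsheet formula, this lives in version control, runs the same way every time, and scales from a CSV snippet to a database query with minor changes.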
Server Stuff - Red Hat Linux
I love analysing data. I've done it for nearly 10 years now in various shapes and forms, and for me it's an endless world of wonder. There's nothing else I'd rather be doing!