DESKTOP USER MONITORING: DOA

As interest in end-user adoption analytics continues to grow, I am repeatedly asked which data streams should be analyzed to improve user performance. Put another way: most organizations have so much data today that the real question is which streams are best for improving user efficiency and consumption. The usage stream of end users is obviously critical, but most organizations today do not have an accurate and reliable source of this data.

Often the conversation turns to monitoring users at the desktop, a usage stream that held great promise 10+ years ago but that has failed to power successful user programs. After being attempted by hundreds of companies, and initially supported by several application vendors, desktop-based user monitoring has very few success stories and, more importantly, is being made irrelevant by powerful trends in end-user mobility, IT operations and big data analytics. Looking back, it is clear that this data collection model was bound to fail. Here's why.

#1:  Cannot capture 100% of targeted users

Desktop-based monitoring fails to cover enough enterprise users. Too many unmonitored users undermine confidence in user analytics and make service improvement programs impossible to implement. Users outside your company: missed. Users with RF devices in warehouses: missed. Employees on bring-your-own devices: missed. Worse, the number of users invisible to a desktop-centric monitoring approach is exploding as major applications like SAP and Oracle ride the trend of employees accessing technology through mobile devices.

#2:  IT administration overhead is punitively high

The concept of deploying monitoring software onto every employee desktop sounds quaint at a time when technology is increasingly shifting to the cloud. Altering desktop images, redeploying agents as employees replace laptops, installing servers, administering the compression software required to prevent unnecessary network load: it is clear why desktop-based monitoring was always going to be problematic for IT operations. Today, all of these shortcomings look even more antiquated next to advances in virtual desktop and server-side monitoring capabilities.

#3: User performance evaluation is limited to clickstream data

Going from zero user performance transparency to a usage clickstream at the desktop is an improvement. But relying solely on clickstream data to evaluate user performance is no longer acceptable given the significant advances in low-cost, low-friction machine data indexing. Today, user performance analytics should include process workflow data taken directly from applications. Better programs will also absorb user service data such as incidents, help requests, training asset usage and user satisfaction data, providing a 360-degree view of user performance.
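Mechanically, a 360-degree view amounts to joining these separate streams on a common user identifier. The following is a minimal Python sketch of that idea; the record fields, stream names and sample values are all hypothetical, not taken from any particular product:

```python
from collections import defaultdict

# Hypothetical sample records from three separate data streams.
clickstream = [
    {"user": "alice", "clicks": 120},
    {"user": "bob", "clicks": 45},
]
incidents = [
    {"user": "bob", "ticket": "INC-101"},
    {"user": "bob", "ticket": "INC-102"},
]
training = [
    {"user": "alice", "asset": "intro-video"},
]

def build_user_view(clickstream, incidents, training):
    """Merge the three streams into one per-user summary, keyed by user id."""
    view = defaultdict(lambda: {"clicks": 0, "incidents": 0, "training_assets": 0})
    for rec in clickstream:
        view[rec["user"]]["clicks"] += rec["clicks"]
    for rec in incidents:
        view[rec["user"]]["incidents"] += 1
    for rec in training:
        view[rec["user"]]["training_assets"] += 1
    return dict(view)

summary = build_user_view(clickstream, incidents, training)
```

The point of the sketch is that the join happens on the user, not the desktop: any stream that carries a user identifier (incidents, training usage, satisfaction surveys) can be folded into the same summary, with no agent on the endpoint required.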