30 November 2018

Powering the Optimization of Desktop Performance


Deploying any virtual desktop or published application-based solution is often seen as a bit of a mystery rather than an exact science when it comes to infrastructure sizing. Questions such as “how much CPU will I need?” or “how much memory is required?” are typical.

How do you size for performance?

This in turn leads to more questions about the number of host servers required and how those servers should be configured. Then of course there is the famous question that always comes up when talking about any virtual user-based solution: “how many users per core can I deploy?”. Now I’m not saying these things are unimportant, of course they matter, but it’s about when they become important. With end user computing you should always put the end users front and center, delivering the kind of performance levels they have come to expect from the physical world, and so the answers to these sizing questions will come once you understand what your end users need and want.

Why performance is key

Why is getting this right so important? It boils down to the fact that delivering the best performing end user experience results in end user satisfaction, i.e. happy end users. In turn this makes life easier for the IT admin and support teams but more critically it guarantees end users remain productive which directly affects the bottom line of a business.

Why wait until it all goes wrong: Proactive versus reactive

The process of looking at performance can, for me, typically be described in three distinct layers. The first layer represents the start of the process: the design phase of a project, where you look at things such as sizing a solution. The middle layer is the business-as-usual layer, where end users are going about their business, using their desktops and systems. Finally we have the last layer, which I describe as the ‘oh $#%&’ layer: something has gone wrong, it is directly affecting user performance and productivity, and you now need to go back and try to fix it.

For me however, performance enhancement and optimization is an ongoing process and should be managed throughout the entire lifecycle of a solution, to ensure end users are always delivered the best experience possible through an optimized, high-performance environment. Especially now that IT is measured on customer satisfaction stats.

Add 20% extra hardware for performance, just in case!

Let’s wind back to the start of a project. When it comes to processor sizing, for example, you could choose faster CPUs with higher clock frequencies, or CPUs with more cores. However, the cost soon starts to ramp up and often won’t scale linearly. You could of course stick with slower CPUs and scale the number of servers instead. It’s that age-old conundrum of scaling out versus scaling up. Or you could take the non-scientific route to server resource configuration and simply add 10 to 20% on top of whatever you decide to deploy!
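To make that back-of-the-envelope approach concrete, here is a small sizing sketch in Python. The user count, users-per-core density, cores-per-host figure and the 20% buffer are all illustrative assumptions for the sake of the example, not vendor guidance; real densities should come from your own pilot testing.

```python
# Hypothetical back-of-the-envelope VDI host sizing.
# Every figure below is an illustrative assumption, not a recommendation.
import math

USERS = 500              # total concurrent users to host (assumed)
USERS_PER_CORE = 6       # assumed density, e.g. from pilot testing
CORES_PER_HOST = 32      # e.g. dual 16-core sockets (assumed)
HEADROOM = 0.20          # the "just in case" 20% buffer discussed above

cores_needed = math.ceil(USERS / USERS_PER_CORE)
cores_with_headroom = math.ceil(cores_needed * (1 + HEADROOM))
hosts_needed = math.ceil(cores_with_headroom / CORES_PER_HOST)

print(f"cores required:    {cores_needed}")     # 84
print(f"with 20% headroom: {cores_with_headroom}")  # 101
print(f"hosts to purchase: {hosts_needed}")     # 4
```

Notice how the 20% buffer tips the design from three hosts to four: the non-scientific route buys an entire extra server, which is exactly the kind of cost an optimization-led approach aims to avoid.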

Understanding resource utilization

When you look at it, it’s the apps the end users are running that matter. You first need to understand what the OS and the apps demand in terms of system resources. Armed with this information, you can start to dynamically allocate those resources rather than setting the hard limits that apply by default. Resources can then be targeted to where they are needed, while overzealous apps are prevented from taking everything for themselves. It’s almost like considering concurrent usage, but for resources instead of users. The same applies to other resources such as memory, disk, and network. In virtual infrastructure solutions the hypervisor manages this to some degree, but not at a granular enough level to stop apps from taking more than their fair share of available resources, and it’s often complex to deploy.
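As a conceptual illustration of dynamic allocation versus hard limits, the sketch below serves modest apps in full and caps only the apps demanding more than a fair share, splitting the leftover capacity between them. The app names and demand figures are invented, and this is a toy model of the idea only, not how any particular hypervisor or product implements it.

```python
# Conceptual sketch: dynamic CPU-share allocation versus static hard limits.
# App names and demand figures are invented for illustration.

def allocate(demands: dict[str, float], capacity: float) -> dict[str, float]:
    """Serve modest apps in full; cap overzealous apps at an even split
    of whatever capacity remains once the modest apps are satisfied."""
    fair_share = capacity / len(demands)
    modest = {app: d for app, d in demands.items() if d <= fair_share}
    greedy = {app: d for app, d in demands.items() if d > fair_share}
    alloc = dict(modest)                      # modest apps get their full demand
    remaining = capacity - sum(modest.values())
    if greedy:
        per_greedy = remaining / len(greedy)  # split the leftovers evenly
        for app in greedy:
            alloc[app] = min(demands[app], per_greedy)
    return alloc

demands = {"browser": 35.0, "antivirus": 80.0, "line-of-business": 20.0}
print(allocate(demands, capacity=100.0))
# The antivirus scan is capped; the browser and business app keep what they need.
```

Contrast this with a static hard limit of a third of the machine per app: the business-critical app would be fenced off from capacity it never uses, while the antivirus scan would still be free to starve its neighbours up to its limit.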

Start at the beginning

At the start of a project, to get the most performant solution for the money, you will likely purchase the best servers your budget allows. But why not approach it differently and introduce a performance optimization solution right at the start? It could not only save you money but also build in a contingency budget should you need it later. More importantly, it means end users get the best user experience right off the bat.

If we fast forward to the end of a project where, more often than not, you have to revisit performance and user experience to add some extra horsepower, it’s near impossible to go back cap in hand and ask for additional budget for more hardware. It’s just not going to happen.

This is where a performance optimization and enhancement solution again comes into its own. It enables you to fine-tune the infrastructure and ensure that apps and users consume just what they require, and that there aren’t any rogue apps and processes consuming more than they are entitled to. End users will now have a far better user experience, one that is not degraded in any way by resource-hungry processes.

Introducing IntelliPerform from ThinScale

ThinScale have an answer in their portfolio of software-defined solutions for delivering the digital workspace that addresses the problem of managing and optimizing desktop infrastructure resources. With the new release of IntelliPerform, which also encompasses the previous ThreadLocker CPU solution, you can now solve memory performance issues too.

With IntelliPerform, IT can now ‘tame’ these resource-hungry apps and processes, ensuring that they only use what they need and, more importantly, don’t affect other end users. You can also prioritize apps, meaning business-critical apps are always guaranteed the resources they need. With its advanced reporting, IT could even pinpoint resource usage and cross-charge individual departments based on the amount of resources they use.
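As a toy illustration of that cross-charging idea, the snippet below turns per-department CPU-hour figures into a simple chargeback report. The department names, usage numbers and hourly rate are all invented for the example; IntelliPerform’s own reporting is, of course, far richer than this sketch.

```python
# Toy departmental chargeback from resource-usage figures.
# Departments, CPU-hours and the internal rate are invented examples.

usage_cpu_hours = {"Finance": 1200.0, "HR": 300.0, "Engineering": 2500.0}
RATE_PER_CPU_HOUR = 0.04  # assumed internal rate per CPU-hour

charges = {dept: round(hours * RATE_PER_CPU_HOUR, 2)
           for dept, hours in usage_cpu_hours.items()}

# Report the heaviest consumers first.
for dept, cost in sorted(charges.items(), key=lambda kv: -kv[1]):
    print(f"{dept:<12} {cost:>8.2f}")
```

Even a report this simple makes the conversation with department heads easier: usage is attributed rather than argued over, and the heaviest consumers are visible at a glance.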

Quite often, performance enhancement solutions target a single resource such as CPU or memory. However, fixing a CPU performance issue can then go on to impact memory, and memory pressure in turn affects disk as the system pages; all you do is move the problem around the infrastructure without ever actually solving it. IntelliPerform delivers a solution that covers both CPU and memory, and with ThinIO added into the mix you can take care of disk too.

Thanks for reading,

David

David Coombes - Technical Director

Connect with me on LinkedIn or Twitter.

Visit IntelliPerform page

Download free trial
