10 best practices for using Windows servers

Follow these tips to keep your Windows-based servers operating smoothly, securely, and efficiently.


The internet can be a hostile place. Ask any IT professional worth their salt, and they'll attest to the importance of provisioning systems in a secure, consistent manner so that new systems can deliver the services required of them in a protected way. And while automating this process goes a long way toward cutting down onboarding time, the real test of a system lies in its ability to keep providing those services stably and without interruption.

SEE: Change control management: 10 critical steps (free PDF) (TechRepublic)

Automated tools exist to ensure that your Windows servers stay as secure and trouble-free as the day they were set up. However, since organizations differ in their needs and budgets, tools such as Microsoft System Center Configuration Manager may not be readily available. That should not prevent IT from doing everything in its power to leverage the infrastructure it does have to keep systems running properly.

Here is a simple set of management principles that any IT department can implement, at any budget and skill level, to take hold of its Windows servers and ensure they are managed efficiently and securely while being optimized to deliver the best performance possible.

SEE: Windows 10: Containers are the future, and here’s what you need to know (TechRepublic)

1. Audit login policy

All servers should be effectively off-limits to local or interactive logins. This means no one should be physically logging on to a server and using it as if it were a desktop, no matter their access level; that behavior only leads to disaster somewhere down the line. Beyond monitoring interactive logins, IT should have a policy in place to audit other access types to its servers, including but not limited to object access, security permission changes, and any other modifications made to the server with or without authorization.
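To give a concrete starting point, here is a minimal sketch using the built-in auditpol tool from an elevated PowerShell prompt. The subcategories shown are examples, and in a domain you would normally enforce the equivalent Advanced Audit Policy settings centrally through Group Policy instead:

# Enable success/failure auditing for logons and for changes to the audit policy itself
auditpol /set /subcategory:"Logon" /success:enable /failure:enable
auditpol /set /subcategory:"Audit Policy Change" /success:enable /failure:enable

# Object access auditing for the file system (pair with SACLs on the folders you care about)
auditpol /set /subcategory:"File System" /success:enable /failure:enable

# Review the resulting policy
auditpol /get /category:"Logon/Logoff"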

2. Centralize event logs

Windows servers have a great deal of logging capability available by default. Through configuration, that logging can be expanded or limited: log file sizes can be increased, overwrite behavior controlled, and even the location where logs reside changed. Centralizing these logs in one place makes it simpler for IT personnel to access and comb through them. A syslog or event-collection server can make the logs easier to sift through by assigning categories to specific entries, such as labeling all failed login attempts. It also helps if logs are made searchable and, should the central server have the capability, integrated with remediation tools to correct any issues reported.
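As a rough sketch of the plumbing involved, the commands below (run from elevated prompts) enlarge the local Security log and wire up Windows Event Forwarding, one common way to centralize Windows logs. The 512MB size is an arbitrary example:

# On each server: raise the Security log's maximum size (value is in bytes)
wevtutil sl Security /ms:536870912
wevtutil gl Security    # confirm size and retention settings

# On the designated collector: enable the Windows Event Collector service
wecutil qc

# On each source server: enable WinRM so the collector's subscription can pull events
winrm quickconfig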

SEE: How to become a network administrator: A cheat sheet (TechRepublic)

3. Performance benchmarks and baselines

We all know how to tell when a server or service isn't operating at all. But how does your IT department quantify whether a server or service is performing as well as it should? This is why taking benchmark readings of your servers and developing baselines for their operation at different intervals (peak, off-peak, etc.) over a finite period of time pays dividends. Armed with this information, you can determine how to optimize software and hardware settings, see how services are affected throughout the day, and decide what resources can be added, removed, or simply moved around so that a minimum level of service is always assured. It also helps identify possible attack vectors or indicators of compromise when anomalies appear that could negatively affect performance.
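A baseline capture can be as simple as the following Windows PowerShell sketch, which samples a handful of common counters for an hour and saves them for later comparison. The counter list, interval, and output path are all illustrative:

# Sample CPU, memory, disk, and network every 15 seconds, 240 times (one hour)
$counters = '\Processor(_Total)\% Processor Time',
            '\Memory\Available MBytes',
            '\LogicalDisk(_Total)\Avg. Disk Queue Length',
            '\Network Interface(*)\Bytes Total/sec'
Get-Counter -Counter $counters -SampleInterval 15 -MaxSamples 240 |
    Export-Counter -Path C:\Baselines\offpeak.blg -FileFormat BLG

Repeat the capture at peak hours and compare the two files in Performance Monitor to establish your baseline.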

4. Restrict remote access

As admins, we all love our remote access, don't we? I've used Remote Desktop Protocol (RDP) pretty much daily to fix issues on remote systems for decades of my career. And while RDP has come a long way in beefing up security through enhanced encryption, the fact remains that it, like any remote access application left unchecked, provides an inroad to your servers and, more importantly, the company's network. Luckily, access to servers and their services can be restricted in a number of ways: configuring firewall rules to limit which remote connections can reach servers, requiring VPN tunneling to secure communications to and from network resources, and using certificate-based authentication to verify that the systems on both ends of a connection are validated and trusted.
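As one illustration, this sketch scopes inbound RDP to a management subnet and requires Network Level Authentication; the 10.0.10.0/24 address is a placeholder for your own admin or VPN network:

# Allow RDP (TCP 3389) only from the management subnet
New-NetFirewallRule -DisplayName 'RDP - management subnet only' `
    -Direction Inbound -Protocol TCP -LocalPort 3389 `
    -RemoteAddress 10.0.10.0/24 -Action Allow

# Disable the broad built-in rules so the scoped rule is the only inbound path
Disable-NetFirewallRule -DisplayGroup 'Remote Desktop'

# Require Network Level Authentication for RDP sessions
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp' `
    -Name UserAuthentication -Value 1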

SEE: WI-FI 6 (802.11AX): A cheat sheet (free PDF) (TechRepublic)

5. Configuring services

Windows Server has come a long way since the early days, when most roles and services were enabled by default regardless of whether the organization would use them. That presents a glaring security issue, and it remains a problem today, albeit a more controlled one in modern server versions. Limiting the attack surface of your servers removes potential vectors of compromise, and that's always a good thing. Assessing your environment's needs and the dependencies of the software and services running on your network will help you develop a plan to disable or remove unnecessary services.
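The ServerManager cmdlets make this audit straightforward. The sketch below inventories what is installed and previews removal of a role you no longer need; Web-Server is just an example, so verify dependencies before removing anything:

# List installed roles/features and running auto-start services
Get-WindowsFeature | Where-Object Installed
Get-Service | Where-Object { $_.Status -eq 'Running' -and $_.StartType -eq 'Automatic' }

# Preview removing an unused role, then rerun without -WhatIf to commit
Uninstall-WindowsFeature -Name Web-Server -WhatIf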

6. Ongoing monitoring

This goes hand in glove with monitoring your network for security threats. You should be watching your servers' health to identify potential issues before they become a serious threat to the performance of the devices or the services they provide. Monitoring also helps IT proactively determine whether any servers need upgrades or additional resources, or whether the department should purchase more servers to add to the cluster, again in an effort to keep services online.
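Even without a full monitoring suite, a scheduled script can provide an early warning. This is a minimal sketch with illustrative thresholds and service names (W3SVC and MSSQLSERVER here), not a substitute for a proper monitoring platform:

# Warn when any fixed disk drops below 15% free space
foreach ($d in Get-CimInstance Win32_LogicalDisk -Filter 'DriveType=3') {
    $freePct = [math]::Round(($d.FreeSpace / $d.Size) * 100, 1)
    if ($freePct -lt 15) { Write-Warning "$($d.DeviceID) is down to $freePct% free" }
}

# Warn when critical services are not running
$stopped = Get-Service -Name 'W3SVC', 'MSSQLSERVER' -ErrorAction SilentlyContinue |
    Where-Object { $_.Status -ne 'Running' }
if ($stopped) { Write-Warning "Stopped services: $($stopped.Name -join ', ')" }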

SEE: Server virtualization: Best (and worst) practices (free PDF) (TechRepublic)

7. Patch management

This recommendation should be a no-brainer for anyone in IT, regardless of experience or skill set. If there's one thing on this list that all servers need, it's patch management. From simple updates that quash bugs to corrective fixes that close security holes, setting up a process for updating the OS and software is of paramount importance. In fact, it's so important that in integrated environments where multiple Microsoft products are in use, some versions of software and services simply will not work until the underlying Windows Server OS is updated to a minimum level, so bear that in mind when planning your testing and updating cycles.
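A quick sanity check on patch currency can be scripted. This sketch lists the most recent updates and flags a server that has gone more than 30 days without one; the threshold is chosen purely for illustration, and note that Get-HotFix only reports certain update types:

# Show the five most recently installed updates
$latest = Get-HotFix | Sort-Object InstalledOn -Descending | Select-Object -First 5
$latest | Format-Table HotFixID, Description, InstalledOn -AutoSize

# Flag servers with no updates in the past 30 days
if ($latest[0].InstalledOn -lt (Get-Date).AddDays(-30)) {
    Write-Warning 'No updates installed in the past 30 days'
}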

8. Technical controls

Whether you're implementing security devices such as a network intrusion prevention system or your clustered servers need load balancers, use the data you've gleaned from your monitoring and baselines to perform a needs assessment for your various servers and the services they provide. Doing so will help identify which systems require additional controls. For example, a web server running the enterprise's HR records application may warrant a web application firewall (WAF) to detect known web-based attacks, such as cross-site scripting (XSS) or Structured Query Language (SQL) injection attacks against the database backend that powers it.
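The needs assessment itself can start from simple data. One sketch: inventory which ports each server is actually listening on and which processes own them, then decide where controls such as a WAF or IPS belong:

# Map listening TCP ports to their owning processes
Get-NetTCPConnection -State Listen |
    Select-Object LocalAddress, LocalPort,
        @{ n = 'Process'; e = { (Get-Process -Id $_.OwningProcess).ProcessName } } |
    Sort-Object LocalPort | Format-Table -AutoSize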

SEE: How to run parallel commands on remote servers with PSSH (TechRepublic) 

9. Lock down physical access

In my experience, most medium and large organizations recognize the need to isolate their servers for security and HVAC reasons. This is great! Small companies that simply leave their servers out in the open alongside other desktops, however, are courting disaster: the server, and the communications to and from it, are exposed to myriad potential threats and attacks. Please place servers in secured rooms with adequate ventilation, and limit access to those rooms to only those who need it.

10. Disaster recovery protection

Backup, backup, backup! This topic has been beaten to death, and yet here we are: some organizations still do not take the proper steps to adequately and securely back up their precious data. Then the inevitable happens, the server goes down, data is lost, and there's no recourse. But there would have been, had there been an active disaster recovery plan identifying what data needed protection and dictating how, when, and where it was to be backed up, along with documented steps to restore it. At its core, it's a very easy-to-remember process, 3-2-1: three copies of your data, on two different media types, with at least one copy off-site.
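For the on-site copy, Windows Server's built-in backup tooling is enough to start with. This sketch assumes the Windows Server Backup feature is installed, and the C: source and E: target are placeholders; satisfying the full 3-2-1 rule still requires a second media type and an off-site copy:

# One-time full backup of the system plus the C: volume to a dedicated disk
wbadmin start backup -backupTarget:E: -include:C: -allCritical -quiet

Use Windows Server Backup's scheduling (or your backup product of choice) to make this a recurring job, and test restores regularly; a backup you've never restored is only a theory.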

SEE: 10 tips for planning a data center hardware refresh (TechRepublic)

This list is not exhaustive by any means, and IT pros should explore each point fully to identify what solutions will work best for their specific needs. Additionally, it is highly desirable that IT meet with senior management to establish policies for performing regular risk assessments, as this will help IT determine where to best place resources (financial, technical, and hardware/software) so that they are used to their highest potential.
