Patch Management Practices

Patch Management Practices to Keep Your Clients Secure

Develop a Policy of Who, What, When, Why, and How for Patching Systems

The first step in your patch management strategy is to come up with a policy around the entire patching practice. Planning ahead enables you to go from reactive to proactive—anticipating problems and developing policies to handle them.

The right patch management policy will answer the who, what, when, why, and how for when you receive a notification of a critical vulnerability in a client’s software.

Create a Process for Patch Management

Now that you’ve figured out the overall patch management policy, you need to create a process for how to handle each patch as it’s released.

Your patch management policy should be explicit within your security policy, and you should consider Microsoft’s six-step process when tailoring your own. The steps include:

Notification: You’re alerted about a new patch to eliminate a vulnerability. How you receive the notification depends on which tools you use to keep systems patched and up to date.

Assessment: Based on the patch rating and configuration of your systems, you need to decide which systems need the patch and how quickly they need to be patched to prevent an exploit.

Obtainment: Like the notification, how you receive the patch will depend on the tools you use. They could either be deployed manually or automatically based on your determined policy.

Testing: Before you deploy a patch, you need to test it on a test bed network that simulates your production network. All networks and configurations are different, and Microsoft can’t test for every combination, so you need to test and make sure all your clients’ networks can properly run the patch.

Deployment: Deployment of a patch should only be done after you’ve thoroughly tested it. Even after testing, be careful and don’t apply the patch to all your systems at once. Incrementally apply patches and test the production server after each one to make sure all applications still function properly.

Validation: This final step is often overlooked. Validating that the patch was applied is necessary so you can report on the status to your client and ensure agreed service levels are met.
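
To make that validation step concrete, here is a minimal sketch of a check you might script yourself. It assumes a Windows endpoint where PowerShell’s Get-HotFix cmdlet is available, and the KB identifiers are placeholders standing in for whatever your own patch policy requires:

```python
import subprocess

def installed_hotfixes() -> set[str]:
    """Ask PowerShell which hotfix IDs are installed (Windows only)."""
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command",
         "Get-HotFix | Select-Object -ExpandProperty HotFixID"],
        capture_output=True, text=True, check=True,
    )
    return {line.strip() for line in result.stdout.splitlines() if line.strip()}

# Placeholder KB identifiers standing in for this cycle's required patches.
required = {"KB5005565", "KB5006670"}
missing = required - installed_hotfixes()
print("Validation passed" if not missing else f"Missing patches: {sorted(missing)}")
```

The same comparison, run against every managed endpoint and rolled up into a report, is what lets you show your client that the agreed service levels were met.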

Be Persistent in Applying the Best Practices

For your patch management policies and processes to be effective, you need to be persistent in applying them consistently. With new vulnerabilities and patches appearing almost daily, you need to be vigilant to keep up with all the changes.

Patch management is an ongoing practice. To ensure you’re consistently applying patches, it’s best to follow a series of repeatable, automated practices (a minimal automation sketch follows the list). These practices include:

  • Regular rediscovery of systems that may potentially be affected
  • Scanning those systems for vulnerabilities
  • Downloading patches and patch definition databases
  • Deploying patches to systems that need them
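
Strung together, those practices form a loop you can run on a schedule. The skeleton below only sketches the shape of such a cycle; in practice each placeholder function would call into your RMM or patching tool rather than return canned data:

```python
import logging
from typing import Dict, Iterable, List

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("patch-cycle")

def discover_systems() -> List[str]:
    """Rediscover systems that may be affected (placeholder inventory)."""
    return ["host-01", "host-02"]

def scan_for_missing_patches(host: str) -> List[str]:
    """Return identifiers of patches the host still needs (placeholder)."""
    return []

def download_patches(patch_ids: Iterable[str]) -> None:
    log.info("Downloading %s", sorted(set(patch_ids)))

def deploy_patches(host: str, patch_ids: List[str]) -> None:
    log.info("Deploying %s to %s", patch_ids, host)

def run_patch_cycle() -> None:
    needed: Dict[str, List[str]] = {
        host: scan_for_missing_patches(host) for host in discover_systems()
    }
    download_patches(p for patches in needed.values() for p in patches)
    for host, patches in needed.items():
        if patches:
            deploy_patches(host, patches)

if __name__ == "__main__":
    run_patch_cycle()
```
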
Take Advantage of Patching Resources

Since the release of Windows 10, updates to the operating system are on a more fluid schedule. Updates and patches are now being released as needed and not on a consistent schedule. You’ll need to let your team know when an applicable update is released to ensure the patch can be tested and deployed as soon as possible.

As the number of vulnerabilities and patches rises, you’ll need as much information about them as you can get. There are a few available resources we recommend to augment your patch management process and keep you informed of updates that may fall outside the scope of Microsoft updates.

Utilize Patching Tools

You don’t want your technicians spending most of their time approving and applying patches on individual machines, especially as your business grows and you take on more clients. To take the burden off your technicians, you’ll want to utilize a tool that can automate your patch management processes. This can be accomplished with a remote monitoring and management (RMM) platform, like ConnectWise Automate®. Add-ons can be purchased to manage third-party application patching to shore up all potential vulnerabilities.

Patch management is a fundamental service provided in most managed service provider (MSP) service plans. With these best practices, you’ll be able to develop a patch management strategy to best serve your clients and their specific needs.


This article was provided by our service partner : connectwise.com

Password Constraint Research

Password Constraints and Their Unintended Security Consequences

You’re probably familiar with some of the most common requirements for creating passwords. A mix of upper and lowercase letters is a simple example. These are known as password constraints. They’re rules for how you must construct a password. If your password must be at least eight characters long and contain lowercase, uppercase, number, and symbol characters, then you have one length constraint and four character-set constraints.

Password constraints eliminate a number of both good and bad passwords. I had never heard anyone ask “how many potential passwords, good and bad, are eliminated?” And so I began searching for the answer. The results were surprising. If you want to know the precise number of possible 8-character passwords there are if all of the character sets must be used, then the equation looks something like this.
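
One way to write that count, assuming the four character classes used later in the article (26 uppercase letters, 26 lowercase letters, 10 digits, 33 symbols) and summing over every way to split the eight positions among them with each class appearing at least once, looks roughly like this:

$$
N \;=\; \sum_{u=1}^{5}\,\sum_{l=1}^{5}\,\sum_{d=1}^{5}\,\sum_{\substack{s=1\\ u+l+d+s=8}}^{5} \frac{8!}{u!\,l!\,d!\,s!}\; 26^{u}\,26^{l}\,10^{d}\,33^{s}
$$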

A serious limitation of this approach is that it tells you nothing about the effects of each constraint alone or relative to other constraints. (I’m also not sure if there were supposed to be four consecutive ∑s or if the mathematician was stuttering.)

We chose to use a Monte Carlo simulation to analyze the mathematical impact of the various combinations of constraints. A Monte Carlo simulation uses a statistical sampling approach that provides a close approximation of the answer, while also providing the flexibility to quickly and easily measure the impact of each constraint and combination of constraints.
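
As a rough illustration of the approach (not the original study’s code), the sketch below samples random 8-character strings from the full 95-character set and estimates what share of them an “all four character classes required” rule would throw away:

```python
import random
import string

LOWER, UPPER, DIGITS = string.ascii_lowercase, string.ascii_uppercase, string.digits
SYMBOLS = string.punctuation + " "  # 32 punctuation marks plus space = 33 symbols (an assumption)
ALL_CHARS = LOWER + UPPER + DIGITS + SYMBOLS  # 95 characters in total

def estimate_eliminated(length: int = 8, trials: int = 1_000_000) -> float:
    """Estimate the fraction of all `length`-character combinations that fail
    the 'must contain lowercase, uppercase, digit, and symbol' constraint."""
    eliminated = 0
    for _ in range(trials):
        pwd = "".join(random.choices(ALL_CHARS, k=length))
        has_all = (any(c in LOWER for c in pwd) and any(c in UPPER for c in pwd)
                   and any(c in DIGITS for c in pwd) and any(c in SYMBOLS for c in pwd))
        if not has_all:
            eliminated += 1
    return eliminated / trials

print(f"~{estimate_eliminated():.1%} of 8-character combinations are ruled out")
```

Run with enough trials, the estimate lands just above half, which lines up with the “only about half remain” observation at the end of this article.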

A look at minimum length limits

To start, let’s look at the impact of an eight-character length constraint alone. With 26 uppercase letters, 26 lowercase letters, 10 numerals, and 33 symbols, there are 95 possible characters, so for a length of 8 characters there are 95^8 possible passwords.

If a password must be at least 8 characters long, then there are also about 70.6 trillion otherwise viable passwords you are not allowed to use (95 + 95^2 + 95^3 + 95^4 + 95^5 + 95^6 + 95^7). That’s a good thing. It means you can’t use 95 one-character passwords, 9,025 two-character passwords, and so on. Almost 70 trillion of those passwords you cannot use are seven characters long. This is a great and wholly intended effect of a password length constraint.
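
Those figures are easy to verify; a couple of lines of Python reproduce them:

```python
# Passwords of 1-7 characters, drawn from the same 95-character set, that an
# eight-character minimum rules out.
too_short = sum(95 ** k for k in range(1, 8))
print(f"{too_short:,}")  # ~70.6 trillion
print(f"{95 ** 7:,}")    # ~69.8 trillion of them are exactly seven characters
```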

The problem with a lack of constraints is that people will use a very small set of all possible passwords, which invariably includes passwords that are incredibly easy to guess. In an analysis of over one million leaked passwords, it was found that 30.8 percent of passwords eight to 11 characters long contained only lowercase letters, and 43.9 percent contained only lowercase letters and numbers. In fact, to perform a primitive brute force attack against an eight-character password containing only lowercase letters, it’s only necessary to try about 209 billion character combinations. That does not take a computer very long to crack. And, as we know from analyzing large numbers of passwords, it’s likely to contain one of the most popular ten thousand passwords.

To beef up security, we begin to add character constraints. But, in doing so, we decrease the number of possible passwords; both good and bad.

Just by requiring both uppercase and lowercase letters, more than 15 percent of all possible 8-character combinations have been eliminated as possible passwords. This means that 1QV5#T&| cannot be a password because there are no lowercase letters. Compared to Darnrats, which meets the constraint requirements, 1QV5#T&| is a fantastic password. But you cannot use it. Superior passwords that cannot be used are acceptable collateral damage in the battle for better security. “Corndogs” is acceptable, but “fruit&veggies” is not. This clearly is not a battle for lower cholesterol.

As constraints pile up, possibilities shrink

If a password must be exactly eight characters long and contain at least one lower case letter, at least one uppercase letter and at least one symbol, we are getting close to one-in-five combinations of 8 characters that are not allowable as passwords. Still, the effect of constraints on 12 and 16 character passwords is negligible. But that is all about to change… you can count on it.

Are you required to use a password that is at least eight characters long, has lower and uppercase letters, numbers and symbols? Just requiring a number to be part of a password removes over 40 percent of 8-character combinations from the pool of possible passwords. Even though you can use lowercase and uppercase letters, and you can use symbols, if one of the characters in your password must be a number then there are far fewer great passwords that you can use. If a 16-character password must have a number, then 13 times more potential passwords have become illegal as a result of that one constraint than the combined constraints of lower and uppercase letters and symbols caused. More than one-in-four combinations of 12 characters can no longer become passwords either.

You might have noticed that there is little effect on the longer passwords. Frequently there is also very little value in imposing constraints on long passwords. This is because each additional character in a password grows the pool of passwords exponentially. There are 6.5 million times as many combinations of 16-character passwords using only lowercase letters as there are of eight-character passwords using all four character sets. That means that “toodlesmypoodles” is going to be a whole lot harder to crack than “I81B@gle”.

Long and simple is better than short and hard

People tend to be very predictable. There are more symbols than there are characters in any other character set. Theoretically, that means symbols should do the most to make a password strong, but 80 percent of the time the symbol chosen will be one of the top five most frequently used symbols, and 95 percent of the time it will be one of the top 10 most frequently used symbols.

Analysis of two million compromised passwords showed that about one in 14 passwords starts with the number one; of those that started with the number one, 75 percent ended with a number as well.

The use of birthdays and names, for example, make it much easier to quickly crack many passwords.

Password strength: It’s length, not complexity that matters

As covered above, all four character sets (95 characters) in an eight-character password allow for about 6.634 quadrillion different password possibilities. But a 16-character password with only lowercase letters has about 43.6 sextillion possible passwords. That means there are well over 6.5 million times more possible passwords for 16 consecutive lowercase letters than for any combination of eight characters, regardless of how complex the password is.
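
The same arithmetic is easy to check directly:

```python
complex_8 = 95 ** 8    # any printable character, 8 characters long
simple_16 = 26 ** 16   # lowercase letters only, 16 characters long
print(f"{complex_8:,}")                        # ~6.63 quadrillion
print(f"{simple_16:,}")                        # ~43.6 sextillion
print(f"{simple_16 // complex_8:,}x as many")  # ~6.5 million times as many
```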

My great password is “cats and hippos are friends!”, but I can’t use it because of constraints – and because I just told you what it is.

For years password experts have been advocating the use of simple passphrases over complex passwords because they are stronger and simpler to remember. I’d like to throw a bit of gasoline onto the fire and tell you: those 95^8 combinations of characters are only about half that many when you tell me I have to use uppercase, lowercase, numbers, and symbols.

———————————————————————————————————————————-

 

Asset Management

Don’t Ignore Security Activity That Could Help the Most

We tend to think of security as the tools—like email scanning, malware and antivirus protection—we have in place to secure our network. But did you know that the process of asset management helps you minimize the threat landscape too?

Management of software and hardware has historically been treated as a cost-minimizing function, where tracking assets could be the difference between driving and reducing value from an organizational perspective. However, even the best security plan is only as strong as its weakest link. If IT administrators are unaware of where assets reside, what software is running on them, and who has access, they are at risk.

Understanding the device, as well as the data, is what matters here. Having an in-depth knowledge of the network of devices and their data is the first step in protecting it. Often, organizations have the tools in place to support and maintain a device, but once it is in place on the network, it can be easy to set it and forget it until it needs repair, replacement, or review. Conducting asset management on a regular basis should be a fundamental function of your security plan and can strengthen the security tools you already have in place. Remember, asset management has to be continuous for it to be truly effective.

When you’re conducting continuous asset management, you can always answer the following questions should an incident occur (a minimal inventory sketch follows the list):

  • What devices are currently connected to the internet?
  • How many total systems do you have?
  • Where is your data?
  • How many vendors do you have?
  • Which vendors have what kind of your data?
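
Answering those questions on demand implies the inventory is live data, not a spreadsheet refreshed once a year. Here is a minimal sketch of what a continuously updated asset record and a staleness check might look like; the field names are illustrative, not taken from any particular tool:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List

@dataclass
class Asset:
    hostname: str
    owner: str
    location: str                    # site, cloud region, or the vendor holding the data
    software: List[str] = field(default_factory=list)
    internet_facing: bool = False
    last_seen: datetime = field(default_factory=datetime.utcnow)

def stale_assets(inventory: List[Asset], max_age_days: int = 7) -> List[Asset]:
    """Assets not rediscovered recently are the ones most likely to be unmanaged."""
    cutoff = datetime.utcnow() - timedelta(days=max_age_days)
    return [asset for asset in inventory if asset.last_seen < cutoff]
```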

Companies struggle with consistent and mature asset management because they often don’t have the time or dedicated resources to stay on top of it. However, an IT asset management program can add value by reducing costs, improving operational efficiency, determining full cost, and providing a forecast for future investments. Oversight and governance help to solidify policies and procedures already in place.

ConnectWise Automate® complements and strengthens security tools and processes by significantly improving the ability to discover, inventory, manage, and report. Additional tool sets—like antivirus and malware protection—can be added to help further protect data and reduce operational risk.

A recent study of the Total Economic Impact of ConnectWise showed, “Organizations estimated that they could shorten engineers’ involvement by 60%, thus cutting the cost of hardware maintenance by $1.2 million.”


This article was provided by our service partner : Connectwise.

MSP Evolving Threats

MSP Responding to Risk in an Evolving Threat Landscape

There’s a reason major industry players have been discussing cybersecurity more and more: the stakes are at an all-time high for virtually every business today. Cybersecurity is not a matter businesses can afford to push off or misunderstand—especially small and medium-sized businesses (SMBs), which have emerged as prime targets for cyberattacks. The risk level for this group in particular has increased exponentially, with 57% of SMBs reporting an increase in attack volume over the past 12 months, and the current reality—while serious—is actually quite straightforward for managed service providers (MSPs):

  • Your SMB clients will be attacked.
  • Basic security will not stop an attack.
  • The MSP will be held accountable.

While MSPs may have historically set up clients with “effective” security measures, the threat landscape is changing and the evolution of risk needs to be properly, and immediately, addressed. This means redefining how your clients think about risk and encouraging them to respond to the significant increase in attack volume with security measures that will actually prove effective in today’s threat environment.

Even if the security tools you’ve been leveraging are 99.99% effective, risk has evolved from minimal to material due simply to the fact that there are far more security events per year than ever before.
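
The arithmetic behind that claim is worth spelling out. Even with a control that stops 99.99% of events, the odds of at least one getting through climb quickly as yearly event volume grows (the volumes below are purely illustrative):

```python
# Chance of at least one miss per year for a 99.99%-effective control.
per_event_catch_rate = 0.9999
for events_per_year in (1_000, 10_000, 100_000):
    p_at_least_one_miss = 1 - per_event_catch_rate ** events_per_year
    print(f"{events_per_year:>7,} events/year -> {p_at_least_one_miss:.0%} chance of a miss")
```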

Again, the state of cybersecurity today is pretty straightforward: with advanced threats like rapidly evolving and hyper-targeted malware, ransomware, and user-enabled breaches, foundational security tools aren’t enough to keep SMB clients secure. Their data is valuable, and there is real risk of a breach if they remain vulnerable. Additional layers of security need to be added to the equation to provide holistic protection. Otherwise, your opportunity to fulfill the role as your clients’ managed security services provider will be missed, and your SMB clients could be exposed to existential risk.

Steps for Responding to Heightened Risk as an MSP

 

Step 1: Understand Risk

Start by discussing “acceptable risk.” Your client should understand that there will always be some level of risk in today’s cyber landscape. Working together to define a business’s acceptable risk, and to determine what it will take to maintain an acceptable risk level, will solidify your partnership. Keep in mind that security needs to be both proactive and reactive in its capabilities for risk levels to remain in check.

Step 2: Establish Your Security Strategy

Once you’ve identified where the gaps in your client’s protection lie, map them to the type of security services that will keep those risks constantly managed. Providing regular visibility into security gaps, offering cybersecurity training,and leveraging more advanced and comprehensive security tools will ultimately get the client to their desired state of protection—and that should be clearly communicated upfront.

Step 3: Prepare for the Worst

At this point, it’s not a question of if SMBs will experience a cyberattack, but when. That’s why it’s important to establish ongoing, communicative relationships with all clients. Assure clients that your security services will improve their risk level over time, and that you will maintain acceptable risk levels by consistently identifying, prioritizing, and mitigating gaps in coverage. This essentially justifies additional costs and opens you to upsell opportunities over the course of your relationship.

Step 4: Live up to Your Promises Through People, Processes, and Technology

Keeping your security solutions well-defined and client communication clear will help validate your offering. Through a combination of advanced software and services, you can build a framework that maps to your clients’ specific security needs so you’re providing the technologies that are now essential for securing their business from modern attacks.

Once you understand how to effectively respond to new and shifting risks, you’ll be in the best possible position to keep your clients secure and avoid potentially debilitating breaches.

Windows Server 2019

Windows Server 2019 and what we need to do now: Migrate and Upgrade!

IT pros around the world were happy to hear that Windows Server 2019 is now generally available, and since then there have been some changes to the release. This is a huge milestone, and I would like to offer congratulations to the Microsoft team for launching the latest release of this amazing platform as a big highlight of Microsoft Ignite.

As important as this new operating system is now, there is an important subtle point that I think needs to be raised now (and don’t worry – Veeam can help). This is the fact that both SQL Server 2008 R2 and Windows Server 2008 R2 will soon have extended support ending. This can be a significant topic to tackle as many organizations have applications deployed on these systems.

What is the right thing to do today to prepare for leveraging Windows Server 2019? I’m convinced there is no single answer on the best way to address these systems; rather the right approach is to identify options that are suitable for each workload. This may also match some questions you may have. Should I move the workload to Azure? How do I safely upgrade my domain functional level? Should I use Azure SQL? Should I take physical Windows Server 2008 R2 systems and virtualize them or move to Azure? Should I migrate to the latest Hyper-V platform? What do I do if I don’t have the source code? These are all indeed natural questions to have now.

These are questions we need to ask today to move to Windows Server 2019, but how do we get there without any surprises? Let me re-introduce you to the Veeam DataLab. This technology was first launched by Veeam in 2010 and has evolved in every release and update since. Today, this technology is just what many organizations need to safely perform tests in an isolated environment to ensure that there are no surprises in production. The figure below shows a data lab:

windows 2008 eol

Let’s deconstruct this a bit first. An application group is an application you care about — and it can include multiple VMs. The proxy appliance isolates the DataLab from the production network yet reproduces the IP space in the private network without interference via a masquerade IP address. With this configuration, the DataLab allows Veeam users to test changes to systems without risk to production. This can include upgrading to Windows Server 2019, changing database versions, and more. Over the next weeks and months, I’ll be writing a more comprehensive document in whitepaper format that will take you through the process of setting up a DataLab and doing specific tasks like upgrading to Windows Server 2019 or a newer version of SQL Server, as well as migrating to Azure.

Another key technology where Veeam can help is the ability to restore Veeam backups to Microsoft Azure. This technology has been available for a long while and is now built into Veeam Backup & Replication. This is a great way to get workloads into Azure with ease starting from a Veeam backup. Additionally, you can easily test other changes to Windows and SQL Server with this process — put it into an Azure test environment to test the migration process, connectivity and more. If that’s a success, repeat the process as part of a planned migration to Azure. This cloud mobility technique is very powerful and is shown below for Azure:

Windows 2008 EOL

Why Azure?

This is because Microsoft announced that Extended Security Updates will be available for FREE in Azure for Windows Server 2008 R2 for an additional three years after the end of the support deadline. Customers can rehost these workloads to Azure with no application code changes, giving them more time to plan for their future upgrades. Read more here.

What also is great about moving workloads to Azure is that this applies to almost anything that Veeam can back up. Windows Servers, Linux Agents, vSphere VMs, Hyper-V VMs and more!

Migrating to the latest platforms is a great way to stay in a supported configuration for critical applications in the data center. The difference is being able to do the migration without any surprises and with complete confidence. This is where Veeam’s DataLabs and Veeam Recovery to Microsoft Azure can work in conjunction to provide you a seamless experience in migrating to the latest SQL and Windows Server platforms.

Have you started testing Windows Server 2019? How many Windows Server 2008 R2 and SQL Server 2008 systems do you have? Let’s get DataLabbing!

Considerations in a multi-cloud world

With the infrastructure world in constant flux, more and more businesses are adopting a multi-cloud deployment model. The challenges from this are becoming more complex and, in some cases, cumbersome. Consider the impact on the data alone. Ten years ago, all anyone worried about was whether the SAN would stay up and, if it didn’t, whether their data would be protected. Fast forward to today: even a small business can have data scattered across the globe. Maybe they have a few vSphere hosts in an HQ, with branch offices using workloads running in the cloud or Software as a Service-based applications. Maybe backups are stored in an object storage repository (somewhere — but only one guy knows where). This is happening in the smallest of businesses, so as a business grows and scales, the challenges become even more complex.

Potential pitfalls

Now this blog is not about how Veeam manages data in a multi-cloud world, it’s more about how to understand the challenges and the potential pitfalls. Take a look at the diagram below:

cloud services

Veeam supports a number of public clouds and different platforms. This is a typical scenario in a modern business. Picture the scene: workloads are running on top of a hypervisor like VMware vSphere or Nutanix, with some services running in AWS. The company is leveraging Microsoft Office 365 for its email services (people rarely build Exchange environments anymore) with Active Directory extended into Azure. Throw in some SAP or Oracle workloads, and your data management solution has just gone from “I back up my SAN every night to tape” to “where is my data now, and how do I restore it in the event of a failure?” If worrying about business continuity didn’t keep you awake 10 years ago, it surely does now. This is the impact of modern life. The more agility we provide on the front end for an IT consumer, the more complexity there has to be on the back end.

With the ever-growing complexity, global reach and scale of public clouds, as well as a more hands-off approach from IT admins, this is a real challenge to protect a business, not only from an outage, but from a full-scale business failure.

Managing a multi-cloud environment

When looking to manage a multi-cloud environment, it is important to understand these complexities, and how to avoid costly mistakes. The simplistic approach to any environment, whether it is running on premises or in the cloud, is to consider all the options. Sounds obvious, but that has not always been the case. Where or how you deploy a workload is becoming irrelevant, but how you protect that workload still is. Think about the public cloud: if you deploy a virtual machine, and set the firewall ports to any:any, (that would never happen would it?), you can be pretty sure someone will gain access to that virtual machine at some point. Making sure that workload is protected and recoverable is critical in this instance. The same considerations and requirements always apply whether running on premises or off premises.  How do you protect the data and how do you recover the data in the event of a failure or security breach?

What to consider when choosing a cloud platform?

This is something often overlooked, but it has become clear in recent years that organizations do not choose a cloud platform for single, specific reasons like cost savings, higher performance, or quicker service times, but rather because the cloud is the right platform for a specific application. Sure, individual benefits may come into play, but you should always question the “why” behind any platform selection.

When you’re looking at data management platforms, consider not only what your environment looks like today, but also what will it look like tomorrow. Does the platform you’re purchasing today have a roadmap for the future? If you can see that the company has a clear vision and understanding of what is happening in the industry, then you can feel safe trusting that platform to manage your data anywhere in the world, on any platform. If a roadmap is not forthcoming, or they just don’t get the vision you are sharing about your own environment, perhaps it’s time to look at other vendors. It’s definitely something to think about next time you’re choosing a data management solution or platform.


This article was provided by our service partner: veeam.com

Windows 10 October 2018 Update

Earlier today, Yusuf Mehdi announced the Windows 10 October 2018 Update, the newest feature update for Windows 10. I’m excited to share our October 2018 Update rollout plans, how you can get the update today, plus some new update experience enhancements.

How to get the Windows 10 October 2018 Update

As with prior Windows 10 feature rollouts, our goal is to deliver the October 2018 Update in a phased and controlled rollout to provide a great update experience for all. We are beginning the global rollout via Windows Update in the coming weeks. As with previous rollouts, we will use real-time feedback and telemetry to update your device when data shows your device is ready and will have a great experience. You don’t have to do anything to get the update; it will roll out automatically to you through Windows Update.

Once the update is downloaded to your device and ready to be installed we’ll notify you.  You are then able to pick a time that won’t disrupt you to finish the installation and reboot.   We are continually working to improve the update experience with each new release of Windows 10.

Windows updates

The last Windows 10 feature update rollout, the April 2018 Update, utilized machine learning (ML) to identify devices that were ready to update, incorporating key attributes like compatibility data. By leveraging machine learning we were able to safely roll out quickly, and as a result the April 2018 Update is now the most widely used version of Windows 10. Further, our artificial intelligence/ML targeted rollout approach led to the lowest call and online support requests for any release of Windows 10.

With the October 2018 Update, we are expanding our use of machine learning and intelligently selecting devices that our data and feedback predict will have a smooth update experience. We will be further enhancing the performance of our machine learning model by incorporating more device signals such as improved driver telemetry and weighting of key features such as anti-malware software as we broaden the phased rollout. As we did with the April 2018 Update, we will be proactively monitoring all available feedback and update experience data, making the appropriate product updates when we detect issues, and adjusting the rate of rollout as needed to assure all devices have the best possible update experience.

Want the Windows 10 October 2018 Update today? Start by manually checking for updates

While we encourage you to wait until the update is offered to your device, if you’re an advanced user on an actively serviced version of Windows 10 and would like to install the Windows 10 October 2018 Update now, you can do so by manually checking for updates. In the Search box in the taskbar, type “Check for updates.” Once there, simply click “Check for updates” to begin the download and installation process. We are also streamlining the ability for users who seek to manually check for updates by limiting this to devices with no known key blocking issues, based on our ML model.  If we detect that your device has a compatibility issue, we will not install the update until that issue is resolved, even if you “Check for updates.”  You can also watch this video that outlines how to get the October 2018 Update.

windows 10 update settings

If you’re using a Windows 10 PC at work, you will need to check with your IT administrator for details on your organization’s specific plans to update.

Improving the update experience

We have heard clear feedback that while our users appreciate that updates keep their devices secure, they find the update experience can sometimes be disruptive.  The October Update includes several improvements to the update experience to offer more control and further reduce disruptions.

Intelligent scheduling of update activity: For our many mobile users on laptops and 2-in-1 devices, we have improved Windows’ ability to know when a device will not be in use and perform certain update activities then, so as not to disrupt the user. This ability to update at night when plugged in and not on battery power will help hide update activity and minimize user disruption from updates. To further minimize disruption (in case your system is updating overnight), Windows also silences audio when it wakes for Windows Updates. If your device hasn’t updated for several nights, we will then suggest you plug in your device so that we can update at night.

windows 10 update nightime

Intelligent reboot scheduling:  Windows Update will now automatically determine the least disruptive opportunity, outside of Active Hours, and will use an enhanced machine-learning-powered activity check that can determine if a user is going to be away for a while or is only stepping away temporarily.

Faster updates, less down time:  We’ve also made further improvements to the feature update installation process and are targeting to further shorten the amount of time your device is offline during updates by up to 31% compared to the Windows 10 April 2018 Update (based on results from the Windows Insider Program) during the rollout of the October Update.

Smaller downloads:  In the October Update we are introducing a new update package delivery design for monthly quality updates that creates a compact update package for easier and faster deployment.  Users will benefit from the new small update size when installing applicable quality updates as they are 40% more efficient.

Enhanced privacy controls

We continue to focus on putting our customers in control, so in the October Update we are enhancing the privacy choices and controls available to users to manage their privacy. We are now enabling each new account on a device to personally tailor the main privacy settings, instead of only the initial user who sets up the device. Furthermore, during new device setup, we now offer an activity history page that allows users the opportunity to opt in to sending activity history to Microsoft, to help improve cross-device experiences. This allows users to pick up where they left off in various activities (such as working on a Word document) on their other devices (Learn more about activity history).

Additionally, we are splitting Inking & typing personalization out from the Speech privacy page.  This enables more granular control of your inking and typing personalization data by managing it separately from your online speech recognition data. Learn more about online speech recognition and inking & typing personalization.

Inking & typing personalization

Semi-Annual Channel (Targeted) released

For our commercial customers, the release of Windows 10, version 1809 on October 2, 2018 marks the start of the servicing timeline for the Semi-Annual Channel (“Targeted”) release; and beginning with this release, all future feature updates of Windows 10 Enterprise and Education editions that release around September will have a 30 month servicing timeline. Just as we’re immediately beginning rolling out the October Update in phases to consumers, we recommend IT administrators do the same within their organizations to validate that apps, devices, and infrastructure used by their organization work well with the new release before broadly deploying. We use data to guide our phased consumer rollout and encourage commercial customers to do the same through Windows Analytics. The update is now available through Windows Server Update Services (WSUS), Windows Update for Business (WUfB), and System Center Configuration Manager’s (SCCM) phased deployment. For an overview of what’s new and what’s changed, please see What’s new for IT pros in Windows 10, version 1809.

Continuously evolving Windows 10 and the update experience

We’re excited to bring you the latest Windows 10 features and improvements and hope that you enjoy the improved update experience. Please provide us feedback as we continue our journey to evolve the update experience, so that our great new product and security features and other enhancements arrive without disruption.


This article was provided by our service partner : Microsoft.com

 

 

 

How to properly load balance your backup infrastructure

Veeam Backup & Replication is known for ease of installation and a moderate learning curve. It is something that we take as a great achievement, but as we see in our support practice, it can sometimes lead to a “deploy and forget” approach, without fine-tuning the software or learning the nuances of its work. In our previous blog posts, we examined tape configuration considerations and some common misconfigurations. This time, the blog post is aimed at giving the reader some insight on a Veeam Backup & Replication infrastructure, how data flows between the components, and most importantly, how to properly load balance backup components so that the system can work stably and efficiently.

Overview of a Veeam Backup & Replication infrastructure

Veeam Backup & Replication is a modular system. This means that Veeam as a backup solution consists of a number of components, each with a specific function. Examples of such components are the Veeam server itself (as the management component), proxy, repository, WAN accelerator and others. Of course, several components can be installed on a single server (provided that it has sufficient resources) and many customers opt for all-in-one installations. However, distributing components can give several benefits:

  • For customers with branch offices, it is possible to localize the majority of backup traffic by deploying components locally.
  • It allows you to scale out easily. If your backup window increases, you can deploy an additional proxy. If you need to expand your backup repository, you can switch to a scale-out backup repository and add new extents as needed.
  • You can achieve high availability for some of the components. For example, if you have multiple proxies and one goes offline, backups will still be created.

Such a system can only work efficiently if everything is balanced. An unbalanced backup infrastructure can slow down due to unexpected bottlenecks or even cause backup failures because of overloaded components.

Let’s review how data flows in a Veeam infrastructure during a backup (we’re using a vSphere environment in this example):

veeam 1

All data in Veeam Backup & Replication flows between source and target transport agents. Let’s take a backup job as an example: a source agent runs on a backup proxy and its job is to read the data from a datastore, apply compression and source-side deduplication, and send it over to a target agent. The target agent runs directly on a Windows/Linux repository or on a gateway if a CIFS share is used. Its job is to apply target-side deduplication and save the data in a backup file (.VBK, .VIB, etc.).

That means there are always two components involved, even if they are essentially on the same server and both must be taken into account when planning the resources.

Tasks balancing between proxy and repository

To start, we must examine the notion of a “task.” In Veeam Backup & Replication, a task is equal to a VM disk transfer. So, if you have a job with 5 VMs and each has 2 virtual disks, there is a total of 10 tasks to process. Veeam Backup & Replication is able to process multiple tasks in parallel, but the number is still limited.

If you go to the proxy properties, on the first step you can configure the maximum concurrent tasks this proxy can process in parallel:

veeam 2

For normal backup operations, a task on the proxy side means one virtual disk transfer.

On the repository side, you can find a very similar setting:

veeam 3

For normal backup operations, a task on the repository side also means one virtual disk transfer.

This brings us to our first important point: it is crucial to keep the resources and number of tasks in balance between proxy and repository. Suppose you have 3 proxies set to 4 tasks each (that means that on the source side, 12 virtual disks can be processed in parallel), but the repository is set to 4 tasks only (that is the default setting). That means that only 4 tasks will be processed at a time, leaving proxy resources idle.
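
A back-of-the-envelope way to spot that kind of imbalance is to compare the total task slots on each side; effective parallelism is capped by whichever side has fewer. This is a simplified model that ignores job scheduling details:

```python
from typing import List

def effective_parallel_tasks(proxy_task_limits: List[int], repository_task_limit: int) -> int:
    """Simplified model: parallel disk transfers are capped by the smaller of
    the combined proxy task slots and the repository task limit."""
    return min(sum(proxy_task_limits), repository_task_limit)

# The example from the text: three proxies at 4 tasks each, repository left at the default 4.
print(effective_parallel_tasks([4, 4, 4], repository_task_limit=4))   # 4  -> 8 proxy slots sit idle
print(effective_parallel_tasks([4, 4, 4], repository_task_limit=12))  # 12 -> balanced
```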

The meaning of a task on a repository is different when it comes to synthetic operations (like creating a synthetic full). Recall that synthetic operations do not use proxies and happen locally on a Windows/Linux repository or between a gateway and a CIFS share. In this case, for normal backup chains, a task is a backup job (so 4 tasks mean that 4 jobs will be able to generate a synthetic full in parallel), while for per-VM backup chains, a task is still a VM (so 4 tasks mean that the repository can generate 4 separate VBKs for 4 VMs in parallel). Depending on the setup, the same number of tasks can create a very different load on a repository! Be sure to analyze your setup (the backup job mode, the job scheduling, the per-VM option) and plan resources accordingly.

Note that, unlike for a proxy, you can disable the limit on the number of parallel tasks for a repository. In this case, the repository will accept all incoming data flows from proxies. This might seem convenient at first, but we highly discourage disabling this limitation, as it may lead to overload and even job failures. Consider this scenario: a job has many VMs with a total of 100 virtual disks to process and the repository uses the per-VM option. The proxies can process 10 disks in parallel and the repository is set to an unlimited number of tasks. During an incremental backup, the load on the repository will be naturally limited by the proxies, so the system will be in balance. However, then a synthetic full starts. Synthetic full does not use proxies and all operations happen solely on the repository. Since the number of tasks is not limited, the repository will try to process all 100 tasks in parallel! This will require immense resources from the repository hardware and will likely cause an overload.

Considerations when using CIFS share

If you are using a Windows or Linux repository, the target agent will start directly on the server.  When using a CIFS share as a repository, the target agent starts on a special component called a “gateway,” that will receive the incoming traffic from the source agent and send the data blocks to the CIFS share. The gateway must be placed as close to the system sharing the folder over SMB as possible, especially in scenarios with a WAN connection. You should not create topologies with a proxy/gateway on one site and CIFS share on another site “in the cloud” — you will likely encounter periodic network failures.

The same load balancing considerations described previously apply to gateways as well. However, the gateway setup requires additional attention because there are 2 options available — set the gateway explicitly or use an automatic selection mechanism:

Any Windows “managed server” can become a gateway for a CIFS share. Depending on the situation, both options can come handy. Let’s review them.

You can set the gateway explicitly. This option can simplify resource management — there can be no surprises as to where the target agent will start. It is recommended to use this option if access to the share is restricted to specific servers or in the case of distributed environments — you don’t want your target agent to start far away from the server hosting the share!

Things become more interesting if you choose Automatic selection. If you are using several proxies, automatic selection gives you the ability to use more than one gateway and distribute the load. Automatic does not mean random, though; there are indeed strict rules involved.

The target agent starts on the proxy that is doing the backup. In case of normal backup chains, if there are several jobs running in parallel and each is processed by its own proxy, then multiple target agents can start as well. However, within a single job, even if the VMs in the job are processed by several proxies, the target agent will start only on one proxy, the first to start processing. For per-VM backup chains, a separate target agent starts for each VM, so you can get the load distribution even within a single job.

Synthetic operations do not use proxies, so the selection mechanism is different: the target agent starts on the mount server associated with the repository (with the ability to fail over to the Veeam server if the mount server is unavailable). This means that the load of synthetic operations will not be distributed across multiple servers. As mentioned above, we discourage setting the number of tasks to unlimited — that can cause a huge load spike on the mount/Veeam server during synthetic operations.

Additional notes

Scale-out backup repository. SOBR is essentially a collection of usual repositories (called extents). You cannot point a backup job to a specific extent, only to the SOBR, however extents retain some of their settings, including the load control. So what was discussed about standalone repositories pertains to SOBR extents as well. A SOBR with the per-VM option (enabled by default), the “Performance” placement policy, and backup chains spread out across extents will be able to optimize the resource usage.

Backup copy. Instead of a proxy, source agents will start on the source repository. All considerations described above apply to source repositories as well (although in case of Backup Copy Job, synthetic operations on a source repository are logically not possible). Note that if the source repository is a CIFS share, the source agents will start on the mount server (with a failover to Veeam server).

Deduplication appliances. For DataDomain, StoreOnce (and possibly other appliances in the future) with Veeam integration enabled, the same considerations apply as for CIFS share repositories. For a StoreOnce repository with source-side deduplication (Low Bandwidth mode) the requirement to place gateway as close to the repository as possible does not apply — for example, a gateway on one site can be configured to send data to a StoreOnce appliance on another site over WAN.

Proxy affinity. A feature added in 9.5, proxy affinity creates a “priority list” of proxies that should be preferred when a certain repository is used.

If a proxy from the list is not available, a job will use any other available proxy. However, if the proxy is available but does not have free task slots, the job will be paused waiting for free slots. Even though proxy affinity is a very useful feature for distributed environments, it should be used with care, especially because it is very easy to set and then forget about this option. Veeam Support has encountered cases of “hanging” jobs that came down to an affinity setting that was enabled and forgotten about. More details on proxy affinity.

Conclusion

Whether you are setting up your backup infrastructure from scratch or have been using Veeam Backup & Replication for a long time, we encourage you to review your setup with the information from this blog post in mind. You might be able to optimize the use of resources or mitigate some pending risks!


This article was provided by our service partner veeam.com

Unsecure RDP Connections are a Widespread Security Failure

While ransomware, last year’s dominant threat, has taken a backseat to cryptomining attacks in 2018, it has by no means disappeared. Instead, ransomware has become a more targeted business model for cybercriminals, with unsecured remote desktop protocol (RDP) connections becoming the favorite port of entry for ransomware campaigns.

RDP connections first gained popularity as attack vectors back in 2016, and early success has translated into further adoption by cybercriminals. The SamSam ransomware group has made millions of dollars by exploiting the RDP attack vector, earning the group headlines when they shut down government sectors of Atlanta and Colorado, along with the medical testing giant LabCorp this year.

Think of unsecure RDP like the thermal exhaust port on the Death Star—an unfortunate security gap that can quickly lead to catastrophe if properly exploited. Organizations are inadequately setting up remote desktop solutions, leaving their environment wide open for criminals to penetrate with brute force tools. Cybercriminals can easily find and target these organizations by scanning for open RDP connections using engines like Shodan. Even lesser-skilled criminals can simply buy RDP access to already-hacked machines on the dark web.
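
You can get a rough sense of your own exposure the same way attackers do, by checking whether the default RDP port answers from the outside. Here is a minimal sketch; the addresses are placeholders from a documentation range, and you should only probe systems you are authorized to test:

```python
import socket

def rdp_port_open(host: str, port: int = 3389, timeout: float = 2.0) -> bool:
    """Return True if the default RDP port accepts a TCP connection."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder, externally reachable addresses belonging to your own organization.
for host in ("203.0.113.10", "203.0.113.25"):
    print(host, "RDP reachable" if rdp_port_open(host) else "closed or filtered")
```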

Once a criminal has desktop access to a corporate computer or server, it’s essentially game over from a security standpoint. An attacker with access can then easily disable endpoint protection or leverage exploits to verify their malicious payloads will execute. There are a variety of payload options available to the criminal for extracting profit from the victim as well.

Common RDP-enabled threats

Ransomware is the most obvious choice, since its business model is proven and allows the perpetrator to “case the joint” by browsing all data on system or shared drives to determine how valuable it is and, by extension, how large a ransom can be requested.

Cryptominers are another payload option criminals use via the RDP attack vector, one that has emerged more recently. When criminals breach a system, they can see all the hardware installed and, if substantial CPU and GPU resources are available, they can use them to mine cryptocurrencies such as Monero. This often leads to instant profitability that doesn’t require any payment action from the victim, and can therefore go undetected indefinitely.

secure password

Solving the RDP Problem

The underlying problem that opens up RDP to exploitation is poor education. If more IT professionals were aware of this attack vector (and the severity of damage it could lead to), the proper precautions could be followed to secure the gap. One of the best solutions we recommend is simply restricting RDP to a whitelisted IP range.

However, the reality is that too many IT departments are leaving default ports open, maintaining lax password policies, or not training their employees on how to avoid phishing attacks that could compromise their system’s credentials. Security awareness education should be paramount as employees are often the weakest link, but can also be a powerful defense in preventing your organization from compromise.


This article was provided by our service partner : webroot.com

vSphere

Get your data ready for vSphere 5.5 End of Support

There have been lots of articles and walkthroughs on how to make that upgrade work for you, and how to get to a supported level of vSphere. This VMware article is very thorough walking through each step of the process.

But we wanted to touch on making sure your data is protected prior to, during, and after the upgrade events.

If we look at the best practice upgrade path for vSphere, we’ll see how we make sure we’re protected at each step along the way:

vSphere EOL

Upgrade Path

The first thing that needs to be considered is what path you’ll be taking to get away from the end of general support of vSphere 5.5. You have two options:

  • vSphere 6.5, which is now going to be supported until November 2021 (roughly another three years)
  • vSphere 6.7 which is the latest released version from VMware.

Another consideration to make here is support for surrounding and ecosystem partners, including Veeam. Today, Veeam fully supports vSphere 6.5 and 6.7, however, vSphere 6.5 U2 is NOT officially supported with Veeam Backup & Replication Update 3a due to the vSphere API regression.

The issue is isolated to over-provisioned environments with heavily loaded hosts (so more or less individual cases).

It’s also worth noting that there is no direct upgrade path from 5.5 to 6.7. If you’re currently running vSphere 5.5, you must first upgrade to either vSphere 6.0 or vSphere 6.5 before upgrading to vSphere 6.7.

Management – VMware Virtual Center

The first step of the vSphere upgrade path after you’ve decided and found the appropriate version, is to make sure you have a backup of your vCenter server. The vSphere 5.5 virtual center could be a Windows machine or it could be using the VCSA.

Both variants can be protected with Veeam; however, the VCSA runs on an embedded Postgres database. Be sure to take an image-level backup with Veeam, and note that there is also a database backup option within the appliance. Details of the second step can be found in this knowledge base article.

If you’re an existing Veeam customer, you’ll already be protecting the virtual center as part of one of your existing backup jobs.

You must also enable VMware Tools quiescence to create transactionally consistent backups and replicas for VMs that do not support Microsoft VSS (for example, Linux VMs). In this case, Veeam Backup & Replication will use VMware Tools to freeze the file system and application data on the VM before backup or replication. VMware Tools quiescence is enabled at the job level for all VMs added to the job. By default, this option is disabled.

vSphere EOL 02

You must also ensure Application-Aware Image Processing (AAIP) is either disabled or excluded for the VCSA VM.

vSphere EOL 03

Virtual Machine Workloads

If you are already a Veeam customer, then you’ll already have your backup jobs created and working with success before the upgrade process begins. However, as part of the upgrade process, you’ll want to make sure that all backup job processes that initiate through the virtual center are paused during the upgrade process.

If the upgrade path consists of new hardware but with no vMotion licensing, then the following section will help.

Quick Migration

Veeam Quick Migration enables you to promptly migrate one or more VMs between ESXi hosts and datastores. Quick Migration allows for the migration of VMs in any state with minimum disruption.

More information on Quick Migration can be found in our user guide.

During the upgrade process

As already mentioned in the virtual machine workloads section, it is recommended to stop all vCenter-based actions prior to update. This includes Veeam, but also any other application or service that communicates with your vCenter environment. It is also worth noting that whilst the vCenter is unavailable, vSphere Distributed Resource Scheduler (DRS) and vSphere HA will not work.

Veeam vSphere Web Client

If you’re moving to vSphere 6.7 and you have the Veeam vSphere Web Client installed as a vSphere plug-in, you’ll need to install the new Veeam vSphere web client plug-in from an upgraded Veeam Enterprise Manager.

vSphere EOL 04

More detail can be found in Anthony Spiteri’s blog post on new HTML5 plug-in functionality.

You’ll also need to ensure that any VMware-based products, or other products that integrate with vCenter, are on supported versions as you upgrade to a newer version of vSphere.

Final Considerations

From a Veeam Availability perspective, the above steps are the areas that we can help and make sure that you are constantly protected against failure during the process. Each environment is going to be different and other considerations will need to be made.

Another useful link that should be used as part of your planning: Update sequence for vSphere 5.5 and its compatible VMware products (2057795)

One last thing is a shout out to one of my colleagues who has done an in-depth look at the vSphere upgrade process.


This article was provided by our service partner : Veeam.com