How to create a Failover Cluster in Windows Server 2019

This article gives a short overview of how to create a Microsoft Windows Failover Cluster (WFC) with Windows Server 2019 or 2016. The result will be a two-node cluster with one shared disk and a cluster compute resource (computer object in Active Directory).


Preparation

It does not matter whether you use physical or virtual machines, just make sure your technology is suitable for Windows clusters. Before you start, make sure you meet the following prerequisites:

Two Windows Server 2019 machines with the latest updates installed. Each machine has at least two network interfaces: one for production traffic and one for cluster traffic. In my example, there are three network interfaces (an additional one for iSCSI traffic). I prefer static IP addresses, but you can also use DHCP.


Join both servers to your Microsoft Active Directory domain and make sure that both servers see the shared storage device available in disk management. Don’t bring the disk online yet.

The next step before we can really start is to add the “Failover Clustering” feature (Server Manager > Add Roles and Features).

Reboot your server if required. As an alternative, you can also use the following PowerShell command:

Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

After a successful installation, the Failover Cluster Manager appears in the Start menu under Windows Administrative Tools.

After you have installed the Failover Clustering feature, you can bring the shared disk online and format it on one of the servers. Don’t change anything on the second server; there, the disk stays offline.
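If you prefer to script this step, here is a minimal PowerShell sketch (run on the first server only; the disk number and drive letter are examples, so verify them against your own Get-Disk output first):

# List disks and verify the number of the shared disk
Get-Disk

# Bring the shared disk online and clear the read-only flag
Set-Disk -Number 1 -IsOffline $false
Set-Disk -Number 1 -IsReadOnly $false

# Initialize and format it with NTFS
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Volume -DiskNumber 1 -FriendlyName "ClusterData" -FileSystem NTFS -DriveLetter E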

After a refresh of the disk management, you can see something similar to this:

Server 1 Disk Management (disk status online)


Server 2 Disk Management (disk status offline)

Failover Cluster readiness check

Before we create the cluster, we need to make sure that everything is set up properly. Start the Failover Cluster Manager from the Start menu, scroll down to the Management section, and click Validate Configuration.

Select the two servers for validation.

Run all tests. There is also a description of which solutions Microsoft supports.

Once you have made sure that every applicable test passed with the status “successful,” you can create the cluster immediately by selecting the Create the cluster now using the validated nodes checkbox, or you can do that later. If there are errors or warnings, you can open the detailed report by clicking View Report.
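You can also run the same validation from PowerShell before creating the cluster. A short sketch, using the node names that appear later in this article:

# Run the full validation test suite against both nodes and review the resulting report
Test-Cluster -Node SRV2019-WFC1, SRV2019-WFC2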

Create the cluster

If you choose to create the cluster by clicking on Create Cluster in the Failover Cluster Manager, you will be prompted again to select the cluster nodes. If you use the Create the cluster now using the validated nodes checkbox from the cluster validation wizard, then you will skip that step. The next relevant step is to create the Access Point for Administering the Cluster. This will be the virtual object that clients will communicate with later. It is a computer object in Active Directory.

The wizard asks for the Cluster Name and IP address configuration.

As a last step, confirm everything and wait for the cluster to be created.

By default, the wizard automatically adds the shared disk to the cluster. If you have not configured it yet, you can also do so afterwards.

As a result, you can see a new Active Directory computer object named WFC2019.

You can ping the new computer to check whether it is online (provided ping is allowed through the Windows firewall).

As an alternative, you can also create the cluster with PowerShell. The following command will also add all eligible storage automatically:

New-Cluster -Name WFC2019 -Node SRV2019-WFC1, SRV2019-WFC2 -StaticAddress 172.21.237.32

You can see the result in the Failover Cluster Manager in the Nodes and Storage > Disks sections.

The picture shows that the disk is currently used as a quorum. As we want to use that disk for data, we need to configure the quorum manually. From the cluster context menu, choose More Actions > Configure Cluster Quorum Settings.

Here, we want to select the quorum witness manually.

Currently, the cluster is using the disk configured earlier as a disk witness. Alternative options are a file share witness or an Azure storage account acting as a cloud witness. We will use the file share witness in this example; there is a step-by-step how-to on the Microsoft website for the cloud witness. I always recommend configuring a quorum witness for proper operations, so running without one is not really an option for production.

Just point to the path and finish the wizard.
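If you prefer PowerShell, the same change can be made with Set-ClusterQuorum. A minimal sketch; the share path is a placeholder and must be readable and writable by the cluster:

# Switch the quorum configuration to a file share witness
Set-ClusterQuorum -Cluster WFC2019 -FileShareWitness \\fileserver\witness$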

After that, the shared disk is available for use for data.

Congratulations, you have set up a Microsoft failover cluster with one shared disk.

Next steps and backup

One of the next steps would be to add a role to the cluster, which is beyond the scope of this article. As soon as the cluster contains data, it is also time to think about backing up the cluster. Veeam Agent for Microsoft Windows can back up Windows failover clusters with shared disks. We also recommend entire-system backups of the cluster members, which include their operating systems. This speeds up the restore of a failed cluster node, as you don’t need to search for drivers, etc. during a restore.


This article was provided by our service partner : Veeam

Security

Is Being Secure and Being Compliant the Same Thing?

Oftentimes, when I ask a partner if they’re offering security to their SMB customers, the answer revolves around consulting on compliance. Verticals like healthcare, financial, government, and retail are low-hanging fruit for security revenue opportunities because compliance is a requirement of being in business.

However, being secure and being compliant are NOT the same. Did you know that you can be compliant without being fully secure? While being compliant increases data protection and keeps organizations from paying hefty fines, it’s simply not enough. If that’s all you’re relying on to keep you and your customers safe, you’re sorely mistaken.

Being compliant is like following a strict nutritionist-approved diet to stay healthy.

While that’s a good practice, and it will certainly help, it’s also very important that you know your family’s medical history and how it could impact your health in the future (your risks) so you can make necessary, and maybe even lifesaving, decisions. If you ignored your risks and only stuck to a good diet, you might be blindsided at a doctor’s appointment to learn that you have a hereditary disease.

“If we had only caught this sooner…”

Many MSPs approach security only when an incident occurs, while others are proactive merely to meet their customers’ compliance requirements. They’re not thinking of the broader picture of risk. You need to fully understand your risks to ensure that you and your customers are secure. Don’t wait until disaster strikes.

Let’s dive into the differences between the two phrases.

Being Compliant

What does it mean to be compliant? And is that enough? Read on to find out.

Regulatory compliance describes the goal that organizations aspire to achieve in their efforts to ensure they are aware of and take steps to comply with relevant laws, policies, and regulations, such as PCI, HIPAA, GDPR, and DFARS.

Several companies have made news headlines over security breaches. The courts will determine whether there was negligence in adhering to regulations and taking the legally required steps to protect their data. If a company is found not to be compliant, there are heavy financial consequences.

How much are we talking? Yahoo’s loss of 3 billion user accounts cost them an estimated $350 million off their sales price.

Needless to say, there’s a big incentive for companies to cover the basics when it comes to security. However, if you stop at just being compliant, you’re essentially only doing the bare minimum, whatever is legally required.

It’s a starting point.

Being Secure

The next step is to ensure security. Go above and beyond.

According to Cisco, “Cybersecurity is the practice of protecting systems, networks, and programs from digital attacks. These cyberattacks are usually aimed at accessing, changing, or destroying sensitive information; extorting money from users; or interrupting normal business processes.”

When hackers attack your business, it’s not just your business that’s at stake. By getting access to your database, hackers gain access to all your customers. So, we could consider ensuring cybersecurity as a social responsibility (not just a legal one).

We believe in doing business this way, going above and beyond, and have adopted the NIST Cybersecurity Framework. It consists of standards, guidelines, and best practices to manage cybersecurity-related risks as an ongoing practice.

As leaders in the IT industry, we’re all constantly looking to others who are doing things well and subscribe to best practices in several other areas of business. Cybersecurity is no different.

The framework encourages identifying your risks proactively, so you can take the necessary steps in reducing and managing your risks.

How to Assess Risks

We know what you’re thinking, “Easier said than done, though, right? Just another thing to add to my to-do list.”

This process doesn’t have to be overwhelming. Knowing where to start is half the battle. Smart security offerings start with a risk assessment that allows you to proactively identify security risks across your entire business as well as your customers’, not just their networks. The result is an easy-to-understand, customized risk report showing your customers their most critical risks and recommendations for how to remediate them.

Next Steps

The bottom line: be compliant AND secure. Start by understanding your legal compliance responsibilities to protect yourself and your customers during a disaster. Then, take it a step further—assess and fully understand your security risks and develop a plan to reduce your risks.


This article was provided by our service partner : Connectwise

vmware vsphere

10 Things To Know About vSphere Certificate Management

With security and compliance on the minds of IT staff everywhere, vSphere certificate management is a huge topic. Decisions made can seriously affect the effort it takes to support a vSphere deployment, and often create vigorous discussions between CISO and information security staff, virtualization admins, and enterprise PKI/certificate authority admins. Here are ten things that organizations should consider when choosing a path forward.

1. Certificates are about encryption and trust

Certificates are based on public key cryptography, a technique developed by mathematicians in the 1970s, both in the USA and Britain. These techniques allow someone to create two mathematical “keys,” public and private. You can share the public key with another person, who can then use it to encrypt a message that can only be read by the person with the private key.

When we think about certificates we often think of the little padlock icon in our browser, next to the URL. Green and locked means safe, and red with an ‘X’ and a big “your connection is not private” warning means we’re not safe, right? Unfortunately, it’s a lot more complicated than that. A lot of things need to be true for that icon to turn green.

When we’re using HTTPS the communications between our web browser and a server are sent across a connection protected with Transport Layer Security (TLS). TLS is the successor to Secure Sockets Layer, or SSL, but we often refer to them interchangeably. TLS has four versions now:

  • Version 1.0 has vulnerabilities, is insecure, and shouldn’t be used anymore.
  • Version 1.1 doesn’t have the vulnerabilities of 1.0, but it uses the MD5 and SHA-1 algorithms, which are both insecure.
  • Version 1.2 adds AES cryptographic ciphers that are faster, removes some insecure ciphers, and switches to SHA-256. It is the current standard.
  • Version 1.3 removes weak ciphers and adds features that increase the speed of connections. It is the upcoming standard.

Using TLS means that your connection is encrypted, even if the certificates are self-signed and generate warnings. Signing a certificate means that someone vouches for that certificate, in much the same way as a trusted friend would introduce someone new to you. A self-signed certificate simply means that it’s vouching for itself, not unlike a random person on the street approaching you and telling you that they are trustworthy. Are they? Maybe, but maybe not. You don’t know without additional information.

To get the green lock icon you need to trust the certificate by trusting who signed it. This is where a Certificate Authority (CA) comes in. Certificate Authorities are usually specially selected and subject to rigorous security protocols, because they’re trusted implicitly by web browsers. Having a CA sign a certificate means you inherit the trust from the CA. The browser lock turns green and everything seems secure.

Having a third-party CA sign certificates can be expensive and time-consuming, especially if you need a lot of them (and nowadays you do). As a result, many enterprises create their own CAs, often using the Microsoft Active Directory Certificate Services, and teach their web browsers and computers to trust certificates signed by that CA by importing the “root CA certificates” into the operating systems.
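On Windows, importing such a root CA certificate into the machine trust store takes one PowerShell command. A sketch, assuming the certificate has already been downloaded to a local file (the file name is an example):

# Add a downloaded root CA certificate to the machine's Trusted Root store
Import-Certificate -FilePath .\ca-root.cer -CertStoreLocation Cert:\LocalMachine\Root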

2. vSphere uses certificates extensively

All communications inside vSphere are protected with TLS. These are mainly:

  • ESXi certificates, issued to the management interfaces on all the hosts.
  • “Machine” SSL certificates used to protect the human-facing components, like the web-based vSphere Client and the SSO login pages on the Platform Services Controllers (PSCs).
  • “Solution” user certificates used to protect the communications of other products, like vRealize Operations Manager, vSphere Replication, and so on.

The vSphere documentation has a full list. The important point here is that in a fully-deployed cluster the number of certificates can easily reach into the hundreds.

3. vSphere has a built-in certificate authority

Managing hundreds of certificates can be quite a daunting task, so VMware created the VMware Certificate Authority (VMCA). It is a supported and trusted component of vSphere that runs on a PSC or on the vCenter VCSA in embedded mode. Its job is to automate the management of certificates that are used inside a vSphere deployment. For example, when a new host is attached to vCenter it asks you to verify the thumbprint of the host ESXi certificate, and once you confirm it’s correct the VMCA will automatically replace the certificates with ones issued by the VMCA itself. A similar thing happens when additional software, like vRealize Operations Manager or VMware AppDefense is installed.

The VMCA is part of the vCenter infrastructure and is trusted in the same way vCenter is. It’s patched when you patch your PSCs and VCSAs. It is sometimes criticized as not being a full-fledged CA but it is just-enough-CA, purpose-built to serve a vSphere deployment securely, safely, and in an automated way to make it easy to be secure.

4. There are four main ways to manage certificates

  1. You can just use a self-signed CA certificate. The VMCA is fully functional once vCenter is installed and automatically creates root certificates to use for signing ESXi, machine, and solution certificates. You can download the root certificates from the main vCenter web page and import them into your operating systems to establish trust and turn the browser lock icon green for both vCenter and ESXi. This is the easiest solution, but it requires you to accept a self-signed CA root certificate. Remember, though, that we trust vCenter, so we trust the VMCA.
  2. You can make the VMCA an intermediate, or subordinate, CA. We do not recommend this (see below).
  3. You can disable the VMCA and use custom certificates for everything. To do this you can ask the certificate-manager tool to generate Certificate Signing Requests (CSRs) for everything. You take those to a third-party CA, have them signed, and then install them all manually. This is time-consuming and error-prone.
  4. You can use “hybrid” mode to replace the machine certificates (the human-facing certificates for vCenter) with custom certificates, and let the VMCA manage everything else with its self-signed CA root certificates. All users of vCenter will then see valid, trusted certificates. If the virtualization infrastructure admin team desires, they can import the CA root certificates to just their workstations and get green lock icons for ESXi, too, as well as warnings if there is an untrusted certificate. This is the recommended solution for nearly all customers because it balances the desire for vCenter security with the realities of automation and management.
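Whichever option you choose, it is useful to be able to check which certificate an endpoint is actually presenting. A small PowerShell sketch using standard .NET classes (the hostname is a placeholder; the validation callback accepts any certificate because this is for inspection only):

# Fetch and display the TLS certificate presented by a vCenter or ESXi endpoint
$hostname = 'vcenter.example.com'
$tcp = [System.Net.Sockets.TcpClient]::new($hostname, 443)
$ssl = [System.Net.Security.SslStream]::new($tcp.GetStream(), $false, { $true })
$ssl.AuthenticateAsClient($hostname)
$cert = [System.Security.Cryptography.X509Certificates.X509Certificate2]::new($ssl.RemoteCertificate)
$cert | Format-List Subject, Issuer, NotAfter, Thumbprint
$ssl.Dispose(); $tcp.Dispose()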

5. Enterprise CAs are self-signed, too

“But wait,” you might be thinking, “we are trying to get rid of self-signed certificates, and you’re advocating their use.” True, but think about it this way: enterprise CAs are self-signed, too, and you have decided to trust them. Now you simply have two CAs, and while that might seem like a problem it really means that a separation exists between the operators of the enterprise CA and the virtualization admin team, for security, organizational politics, staff workload management, and even troubleshooting. Because we trust vCenter, as the core of our virtualization management, we also implicitly trust the VMCA.

6. Don’t create an intermediate CA

You can create an intermediate CA, also known as a subordinate CA, by issuing the VMCA a root CA certificate capable of signing certificates on behalf of the enterprise CA and using the Certificate Manager to import it. While this has applications, it is generally regarded as unsafe because anybody with access to that CA root key pair can now issue certificates as the enterprise CA. We recommend maintaining the security & trust separation between the enterprise CA and the VMCA and not using the intermediate CA functionality.

7. You can change the information on the self-signed CA root certificate

Using the Certificate Manager utility you can generate new VMCA root CA certificates with your own organizational information in them, and the tool will automate the reissue and replacement of all the certificates. This is a popular option with the Hybrid mode, as it makes the self-signed certificates customized and easy to identify. You can also change the expiration dates if you dislike the defaults.

8. Test, test, test!

The only way to truly be comfortable with these types of changes is to test them first. The best way to test is with a nested vSphere environment, where you install a test VCSA as well as ESXi inside a VM. This is an incredible way to test vSphere, especially if you shut it down and take a snapshot of it. Then, no matter what you do, you can restore the test environment to a known good state. See the links at the end for more information on nested ESXi.

Another interesting option is using the VMware Hands-on Labs to experiment with this. Not only are the labs a great way to learn about VMware products year-round, they’re also great for trying unscripted things out in a low-risk way. Try the new vSphere 6.7 Lightning Lab!

9. Make backups

When the time comes to do this for real, make sure you have a good file-based backup of your vCenter and PSCs via the VAMI interface. Additionally, the Certificate Manager utility backs up the old certificates, so you can restore them if things go wrong (it keeps only one set, though, so think that through). If things do not go as planned or tested, know that these operations are fully supported by VMware Global Support Services, who can walk you through resolving any problem you might encounter.

10. Know why you’re doing this

In the end, the choice of how you manage vSphere certificates depends on what your goals are.

  • Do you want that green lock icon?
  • Does everybody need the green lock icon for ESXi, or just the virtualization admin team?
  • Do you want to get rid of self-signed certificates, or are you more interested in establishing trust?
  • Why do you trust vCenter as the core of your infrastructure but not a subcomponent of vCenter?
  • What is the difference in trust between the enterprise self-signed CA root and the VMCA self-signed CA root?
  • Is this about compliance, and does the compliance framework truly require custom CA certificates?
  • What is the cost, in staff time and opportunity cost, of ignoring the automated certificate solution in favor of manual replacements?
  • Does the solution decrease or increase risk, and why?

Whatever you decide, know that thousands of organizations across the world have asked the same questions, and out of those discussions have come good understandings of certificates & trust, as well as better relations between security and virtualization admin teams.


This article was provided by our service partner : Vmware

veeam

Veeam : Set up vSphere RBAC for self-service backup portal

Wouldn’t it be great to empower VMware vSphere users to take control of their backups and restores with a self-service portal? The good news is you can as of Veeam Backup & Replication 9.5 Update 4. This feature is great because it eliminates operational overhead and allows users to get exactly what they want when they want it. It is a perfect augmentation for any development team taking advantage of VMware vSphere virtual machines.

Introducing vSphere role-based access control (RBAC) for self-service

vSphere RBAC allows backup administrators to provide granular access to vSphere users using the vSphere permissions already in place. If a user does not have permissions to virtual machines in vCenter, they will not be able to access them via the Self-Service Backup Portal.

Additionally, to make things even simpler for vSphere users, they can create backup jobs for their VMs based on pre-created job templates. They will not have to deal with advanced settings they are not familiar with (this is a really big deal, by the way). vSphere users can then monitor and control the backup jobs they have created using the Enterprise Manager UI, and restore their backups as needed.

Setting up vSphere RBAC for self-service

Setting up vSphere RBAC for self-service could not be easier. In the Enterprise Manager configuration screen, a Veeam administrator simply navigates to “Configuration – Self-service,” adds the vSphere user’s account, specifies a backup repository, sets a quota, and selects the delegation method. These permissions can also be applied at the group level for enhanced ease of administration.

Besides VMware vCenter roles, vSphere privileges or vSphere tags can be used as the delegation method. vSphere tags are one of my favorite methods, since tags can be applied to grant either a very broad or a very granular set of permissions. The ability to use vSphere tags is especially helpful for new VMware vSphere deployments, since it provides quick, easy, and secure access for virtual machine users.

For example, I could set vSphere tags at a vSphere cluster level if I had a development cluster, or I could set vSphere tags on a subset of virtual machines using a tag such as “KryptonSOAR Development” to only provide access to development virtual machines.
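With VMware PowerCLI, creating and assigning such a tag could look like the following sketch (the category name, tag name, and VM name pattern are examples, not part of any Veeam requirement):

# Connect to vCenter first
Connect-VIServer -Server vcenter.example.com

# Create a tag category and a tag, then assign the tag to the development VMs
New-TagCategory -Name 'VeeamSelfService' -Cardinality Single -EntityType VirtualMachine
New-Tag -Name 'KryptonSOAR Development' -Category 'VeeamSelfService'
Get-VM -Name 'dev-*' | New-TagAssignment -Tag 'KryptonSOAR Development'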

After setting the Delegation Mode, the user account can be edited to select the vSphere tag, vCenter server role, or VM privilege. From the Edit screen, the repository and quota can also be changed at any time if required.

Using RBAC for VMware vSphere

After this very simple configuration, vSphere users simply need to log into the Self-Service Backup Portal to begin protecting and recovering their virtual machines. The URL can be shared across the entire organization: https://<EnterpriseManagerServer>:9443/backup, thus giving everyone a very convenient way of managing their workloads. Job creation and viewing in the Self-Service Backup Portal is extremely user friendly, even for those who have never backed up a virtual machine before! When creating a new backup job, users will only see the virtual machines they have access to, which makes the solution more secure and less confusing.

There is even a helpful dashboard, so users can monitor their backup jobs and the amount of backup storage they are consuming.

Enabling vSphere users to back up and restore virtual machines empowers them in new ways, especially when it comes to DevOps and rapid development cycles. Best of all, Veeam’s self-service implementation leverages the VMware vSphere permissions framework organizations already have in place, reducing operational complexity for everyone involved.

When it comes to VM recovery, there are also many self-service options available. Users can independently navigate to the “VMs” tab to perform full VM restores. Again, the process is very easy: the user decides whether to preserve the original VM (if Veeam detects it) or overwrite its data, selects the desired restore point, and specifies whether the VM should be powered on after the procedure. Three simple actions and the data is on its way.

In addition to that, the portal makes file- and application-level recovery very convenient too. There are quite a few scenarios available, and what’s really great is that users can navigate the file system tree via the file explorer. They can utilize a search engine with advanced filters for both indexed and non-indexed guest OS file systems. Under the hood, Veeam decides how exactly the operation should be handled, but the user won’t even notice. There is no chance the sought-after document will slip through. The cherry on top is that Veeam provides recovery of application-aware SQL and Oracle backups, thus making your DBAs happy without giving them too many rights to the virtual environment.


This article was provided by our service partner : Veeam

managed services

Managed Services 101: Where MSPs Are Now, and Where They’re Going

Managed services are becoming an increasingly integral part of the business IT ecosystem. With technology advancing at a rapid pace, many companies find it cheaper and more effective to outsource some or all of their IT processes and functions to an expert provider, known as a managed service provider (MSP).

Unlike traditional on-demand IT outsourcing, MSPs proactively support a company’s IT needs. And with the IT demands of businesses becoming ever more complex, reliance on MSPs is likely to increase exponentially over the next few years.

What Is a Managed Service Provider?

An MSP manages a company’s IT infrastructure on a subscription-based model. MSPs offer continual support that can include the setup, installation, and configuration of a company’s IT assets.

Managed services can supplement a company’s internal IT department and provide services that may not be available in-house. And since the MSP is continuously supporting the company’s IT infrastructure and systems, rather than simply stepping in from time to time to put out a fire, these services can provide a level of peace of mind that other models just can’t match.

What’s the Difference Between Managed Services and the Break/Fix Model?

Unlike on-demand outsourced IT services, managed services play an ongoing and harmonious role in the running of an organization.

Due to the rapidly changing nature of the digital landscape, it’s no longer sustainable to fix problems after the damage is done. Yet the break/fix model is still a common way of dealing with IT-related problems. It’s like waiting to repair a minor leak until after the pipe has burst.

On-demand providers are usually brought in to perform a specific service (like fixing a broken server), and they bill the customer for the time and materials it takes to provide that service. MSPs, on the other hand, charge a recurring fee to provide an ongoing service. This service is defined in the service-level agreement (SLA), a contract drawn up between the MSP and the customer that defines both the type and standards of services the MSP will be expected to provide. This monthly recurring revenue (MRR) can provide a lucrative and reliable revenue stream.

What Services Can an MSP Provide?

MSPs provide systems management solutions, centrally managing a company’s IT assets. This encompasses everything from software support and maintenance to cloud computing and data storage. These solutions can be especially valuable for small- and medium-sized businesses (SMBs) that may not have robust internal IT departments, especially when it comes to hard-to-find skills.

Network Monitoring and Maintenance

From slow loading times to outages, inefficient and faulty systems can cost companies a fortune in lost productivity. MSPs reduce the likelihood of such delays by keeping an eye on the network for slow or failing elements. By using a remote monitoring and management (RMM) tool, the MSP will automatically be notified the moment an issue arises, allowing them to identify and fix the problem as quickly as possible. That means shorter downtime, so the customer’s tech—and the business needs it supports—can get up and running again in no time.

Software Support and Maintenance

MSPs provide software support and maintenance to ensure the smooth running of all business applications that a customer needs on a daily basis. This includes ensuring that the programs used to maintain the network are fully functional. Overall, the goal is to provide an uninterrupted experience so that work can carry on as normal.

Data Backup and Recovery

Data loss can be catastrophic, so companies need a system in place to back up and recover their data, should the worst happen. MSPs can handle the backup process, protecting companies against accidental deletion and file corruption as well as more malicious threats (like cyberattacks). They can also support a company’s overall disaster recovery plan, ensuring the business can always recover its data in the event of an emergency.

Data Storage

MSPs can also help their clients optimally store their data. While hard data storage was once standard, new forms of remote data storage are growing in popularity, including cloud computing. MSPs can enable seamless data migration if the client decides to switch storage options.

Cloud Computing

Cloud computing encompasses more than just remote data storage options. Various IT applications and resources can be accessed via online cloud service platforms, with providers charging a pay-as-you-go fee for access. Whether the client relies on a public, private, or hybrid cloud platform, MSPs can help them navigate the cloud successfully, streamlining their workflows, storing data securely, and more.

Challenges Facing MSPs

While there are numerous benefits to the managed services model, including the recurring revenue and the ability to build long-lasting relationships with clients, this model isn’t without its challenges.

Shifts in Sales and Marketing

Until recently, many MSPs have grown organically through referrals and word of mouth. But increasingly, companies are seeing the value of the ‘master MSP’ model, which offers valuable infrastructure to other MSPs in areas where their own expertise may be lacking. As a result, we see a trend toward inorganic growth.

In this market, MSPs can stand out from the crowd by investing their efforts in product management. Prioritizing the needs of the customer is a simple way to create value around your services. This goes beyond the basic standards outlined in the service level agreement—it’s about showing you go above and beyond.

Keeping Existing Customers

With new differentiators emerging, MSPs have to adjust their approach to keep customers happy.

One way they can set themselves apart is by having business conversations very early on in the relationship. By gaining a clear understanding of the outcomes the client wants to achieve and working with them to come to an agreement surrounding expectations, MSPs can establish themselves as a partner rather than simply a provider. This will allow you to adjust your approach to match their needs—like driving for profit rather than acting as a cost center.

Best-in-class MSPs also rarely find themselves arguing with customers over whether something is covered. That’s because they’re fully aligned on what the MSP is responsible for. Whatever the SLA covers, it’s the MSP’s job to ensure their client understands. This requires regular conversations to confirm everyone is on the same page and satisfied. Documenting these conversations also allows MSPs to streamline any disagreements by showing what has been discussed and agreed upon. The goal is to become a trusted advisor that they turn to for guidance.

A next-level approach to proactivity is also a plus. This includes setting up alerts to rapidly identify issues and putting new measures in place to ensure mistakes don’t repeat themselves.

Transitioning toward a more risk-based approach, bolstered by a security-first mindset, will go a long way, opening doors for both more recurring and non-recurring revenue streams as clients seek out your consultation. The best MSPs are experts at assessing their customers’ environment and developing a tailored plan that covers governance, compliance, and ongoing risk management. What’s more, they adjust their approach regularly to reflect the ever-changing security needs of their clients—offering more opportunities to showcase their value and up their revenue stream.

The Impact of Cloud Computing

While MSP revenue is rising, profit margins are actually shrinking. Part of the problem is the fact that MSPs are expanding their portfolio of services yet still relying on their former pricing structures. And many MSPs are making the problem worse by choosing the wrong cloud service vendor to partner with, which can significantly impact an MSP’s already-shrinking profit margins.

Some cloud service vendors are simply not priced to support an MSP. And with the pace at which cloud technology is evolving, a process that was cutting-edge when an MSP implemented it could become inefficient within a period of weeks. It’s vital that MSPs be open to change if a vendor becomes unsustainable, lest they risk their own services becoming unsustainable as a result.

You should also be ready to address any cloud-related questions and concerns that clients raise. Cloud technology is still relatively new, and it can be confusing, so overcoming any uncertainties will play a key role in an MSP’s ability to act as a valuable advisor to its clients.

How MSPs Use Software

Just as they bring value to their customers by streamlining workflows and protecting networks, MSPs need internal frameworks that increase efficiency.

Professional services automation (PSA) tools allow MSPs to streamline and automate repetitive administrative tasks. This saves time and cuts costs, all while enabling greater scalability.

MSPs can also utilize remote monitoring and management (RMM) tools. These automate the patching process and allow you to reduce time spent on resolving tickets, essentially doing more with less. Not only does this enable a more proactive approach, but it puts time back into the support team’s day to focus on other things.

Needless to say, MSPs should be easily accessible to their clients via technology. Remote desktop support makes that possible. With remote control over a client’s systems, MSPs can rapidly solve issues from wherever they are—without interfering with the end user’s access. This reduces customer downtime, allowing repairs and IT support to happen quietly in the background.

What the Future Holds for MSPs

The role of MSPs is changing. Keeping an eye on these emerging trends can help you anticipate shifting client expectations—and stay ahead of the curve.

Arguably the largest area of opportunity for MSPs is cybersecurity—and that service is only going to grow more valuable. Even as awareness increases and data privacy regulations tighten, the number and complexity of cyberattacks continue to rise. Between 2017 and 2018, the annual cost of combating cybercrime rose by 12%—from $11.7 million to a record high of $13 million—so establishing yourself as a cybersecurity expert now will put you in good stead for the future.

The Internet of Things (IoT) is also going to have a major impact on MSPs. Keeping up with the sheer volume of devices being used on a day-to-day basis requires a dynamic approach to systems management. This includes being proactive about establishing best practices and security guidelines around new technology, such as the use of voice assistants.

Business intelligence offerings are also likely to grow in demand. With the use of IT in business at an all-time high, the amount of data being generated is enormous. But data is only numbers without someone to effectively consolidate and analyze it to extract actionable insights. Providing easy access to reports and KPIs that clearly demonstrate areas for improvement will allow MSPs to not only stay relevant in this data-driven market but become leaders in their field.


This article was provided by our service partner : connectwise.com

EternalBlue reaching new heights since WannaCryptor outbreak

Attack attempts involving the exploit are in hundreds of thousands daily

It has been two years since EternalBlue opened the door to one of the nastiest ransomware outbreaks in history, known as WannaCryptor (or WannaCry). Since the now-infamous malware incident, attempts to use the exploit have only been growing in prevalence. Currently it is at the peak of its popularity, with users bombarded with hundreds of thousands of attacks every day.

The EternalBlue exploit was allegedly stolen from the National Security Agency (NSA) in 2016 and leaked online on April 14, 2017 by a group known as Shadow Brokers. The exploit targets a vulnerability in Microsoft’s implementation of the Server Message Block (SMB) protocol, via port 445. The flaw had been privately disclosed to and patched by Microsoft even before the WannaCryptor outbreak in 2017; yet, despite all efforts, vulnerable systems are widespread even to this day.

According to data from Shodan, there are currently almost a million machines in the wild using the obsolete SMB v1 protocol, exposing the port to the public internet. Most of these devices are in the United States, followed by Japan and the Russian Federation.

Poor security practices and lack of patching are likely reasons why malicious use of the EternalBlue exploit has been growing continuously since the beginning of 2017, when it was leaked online.
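Checking whether a Windows machine still has SMBv1 enabled, and turning it off, takes two PowerShell commands on Windows 8/Server 2012 and later. A sketch; confirm first that no legacy devices still depend on SMBv1:

# Check whether the insecure SMBv1 protocol is still enabled on the server
Get-SmbServerConfiguration | Select-Object EnableSMB1Protocol

# Disable SMBv1
Set-SmbServerConfiguration -EnableSMB1Protocol $false -Force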

Based on ESET telemetry, attack attempts involving EternalBlue are reaching historical peaks, with hundreds of thousands of instances being blocked every day, as seen in Figure 1.

A similar trend can be observed by looking at the number of unique ESET clients reporting thousands of attempts to use the exploit daily, as seen in Figure 2.


Besides malicious use, EternalBlue numbers might also be growing due to its use for internal security purposes. As one of the most prevalent malicious tools, this exploit can be used by company security departments as a means for vulnerability hunting within corporate networks.

EternalBlue has enabled many high-profile cyberattacks. Apart from WannaCryptor, it also powered the destructive Diskcoder.C (aka Petya, NotPetya and ExPetya) campaign and the BadRabbit ransomware campaign in 2017. Well-known cyberespionage actors such as Sednit (aka APT28, Fancy Bear and Sofacy) were also caught using it against hotel Wi-Fi networks.

EternalBlue was also recently seen spreading Trojans and cryptomining malware in China – a return to what the vulnerability was first seen used for, even before the WannaCryptor outbreak – and was advertised by black hats as the spreading mechanism for Yatron, a new Ransomware-as-a-Service offering.

This exploit and all the cyberattacks it has enabled so far highlight the importance of timely patching. Moreover, they emphasize the need for a reliable and multi-layered security solution that can do more than just stop the malicious payload, for example by protecting against the underlying exploitation mechanism.


This article was provided by our service partner : eset

cloud services

Cloud Services in the Crosshairs of Cybercrime

It’s a familiar story in tech: new technologies and shifting preferences raise new security challenges. One of the most pressing challenges today involves monitoring and securing all of the applications and data currently undergoing a mass migration to public and private cloud platforms.

Malicious actors are motivated to compromise and control cloud-hosted resources because they can gain access to significant computing power through this attack vector. These resources can then be exploited for a number of criminal money-making schemes, including cryptomining, DDoS extortion, ransomware and phishing campaigns, spam relay, and for issuing botnet command-and-control instructions. For these reasons—and because so much critical and sensitive data is migrating to cloud platforms—it’s essential that talented and well-resourced security teams focus their efforts on cloud security.

The cybersecurity risks associated with cloud infrastructure generally mirror the risks that have been facing businesses online for years: malware, phishing, etc. A common misconception is that compromised cloud services have a less severe impact than more traditional, on-premise compromises. That misunderstanding leads some administrators and operations teams to cut corners when it comes to the security of their cloud infrastructure. In other cases, there is a naïve belief that cloud hosting providers will provide the necessary security for their cloud-hosted services.

Although many of the leading cloud service providers are beginning to build more comprehensive and advanced security offerings into their platforms (often as extra-cost options), cloud-hosted services still require the same level of risk management, ongoing monitoring, upgrades, backups, and maintenance as traditional infrastructure. For example, in a cloud environment, egress filtering is often neglected. But when organizations invest in egress filtering, it can foil a number of attacks on its own, particularly when combined with a proven web classification and reputation service. The same is true of management access controls, two-factor authentication, patch management, backups, and SOC monitoring. Web application firewalls, backed by commercial-grade IP reputation services, are another often overlooked layer of protection for cloud services.
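As a simple illustration of the egress-filtering idea on a single Windows host, the default outbound action can be set to block, with explicit allow rules for legitimate traffic. This is only a sketch: the rule name, port, and profile are examples, and a real policy needs careful testing before production use:

# Allow outbound HTTPS before changing the default, so updates and API calls keep working
New-NetFirewallRule -DisplayName 'Allow outbound HTTPS' -Direction Outbound -Action Allow -Protocol TCP -RemotePort 443

# Block all other outbound traffic by default on the public profile
Set-NetFirewallProfile -Profile Public -DefaultOutboundAction Block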

Many midsize and large enterprises are starting to look to the cloud for new wide-area network (WAN) options. Again, here lies a great opportunity to enhance the security of your WAN, whilst also achieving the scalability, flexibility, and cost-saving outcomes that are often the primary goals of such projects.  When selecting these types of solutions, it’s important to look at the integrated security options offered by vendors.

Haste makes waste

Another danger of the cloud is the ease and speed of deployment. This can lead to rapidly prototyped solutions being brought into service without adequate oversight from security teams. It can also lead to complacency, as the knowledge that a compromised host can be replaced in seconds may lead some to invest less in upfront protection. But it’s critical that all infrastructure components are properly protected and maintained because attacks are now so highly automated that significant damage can be done in a very short period of time. This applies both to the target of the attack itself and in the form of collateral damage, as the compromised servers are used to stage further attacks.

Finally, the utilitarian value of the cloud is also what leads to its higher risk exposure, since users are focused on a particular outcome (e.g. storage) and processing of large volumes of data at high speeds. Their solutions-based focus may not accommodate a comprehensive end-to-end security strategy well. The dynamic pressures of business must be supported by newer and more dynamic approaches to security that ensure the speed of deployment for applications can be matched by automated SecOps deployments and engagements.

Time for action

If you haven’t recently had a review of how you are securing your resources in the cloud, perhaps now is a good time. Consider what’s allowed in and out of all your infrastructure and how you retake control. Ensure that the solutions you are considering have integrated, actionable threat intelligence for another layer of defense in this dynamic threat environment.


This article was provided by our service partner : webroot.com

Veeam’s Office 365 backup

It is no secret anymore: you need a backup for Microsoft Office 365! While Microsoft is responsible for the infrastructure and its availability, you are responsible for the data, as it is your data, and to fully protect it, you need a backup. It is the individual company’s responsibility to stay in control of its data and meet compliance and legal requirements. In addition to having an extra copy of your data in case of accidental deletion, there are plenty more reasons why you need a backup.


With that quick overview out of the way, let’s dive straight into the new features.

Increased backup speeds from minutes to seconds

With the release of Veeam Backup for Microsoft Office 365 v2, Veeam added support for protecting SharePoint and OneDrive for Business data. Now with v3, we are improving the speed of SharePoint Online and OneDrive for Business incremental backups by integrating with the native Change API for Microsoft Office 365. This speeds up backup times by up to 30 times, which is a huge game changer! The feedback we have seen so far is amazing, and we are convinced you will see the difference as well.

Improved security with multi-factor authentication support

Multi-factor authentication is an extra layer of security with multiple verification methods for an Office 365 user account. As multi-factor authentication is the baseline security policy for Azure Active Directory and Office 365, Veeam Backup for Microsoft Office 365 v3 adds support for it. This capability allows Veeam Backup for Microsoft Office 365 v3 to connect to Office 365 securely by leveraging a custom application in Azure Active Directory along with an MFA-enabled service account and its app password to create secure backups.


From a restore point of view, this will also allow you to perform secure restores to Office 365.


Veeam Backup for Microsoft Office 365 v3 will still support basic authentication, however, using multi-factor authentication is advised.

Enhanced visibility

By adding Office 365 data protection reports, Veeam Backup for Microsoft Office 365 allows you to identify unprotected Office 365 user mailboxes as well as manage license and storage usage. Three reports are available via the GUI (as well as PowerShell and the RESTful API).

The License Overview report gives insight into your license usage. It shows detailed information on the licenses used for each protected user within the organization. As a Service Provider, you will be able to identify the top five tenants by license usage and bring license consumption under control.

The Storage Consumption report shows how much storage is consumed by the repositories of the selected organization. It gives insight into the top-consuming repositories and assists you with tracking the daily change rate and growth of your Office 365 backup data per repository.


The Mailbox Protection report shows information on all protected and unprotected mailboxes, helping you maintain visibility of all your business-critical Office 365 mailboxes. As a Service Provider, you will especially benefit from the flexibility of generating this report either for all tenant organizations in scope or for a selected tenant organization only.


Simplified management for larger environments

Microsoft’s Extensible Storage Engine has a file size limit of 64 TB per database. The workaround for larger environments was to create multiple repositories. Starting with v3, this limitation and the manual workaround are eliminated! Veeam’s storage repositories are intelligent enough to know when you are about to hit the file size limit and automatically scale out the repository. The extra databases are easy to identify by their numerical order, should you need them.


Flexible retention options

Before v3, the only available retention policy was based on item age, meaning Veeam Backup for Microsoft Office 365 backed up and stored the Office 365 data (Exchange, OneDrive, and SharePoint list items) that was created or modified within the defined retention period.

Item-level retention works similarly to a classic document archive:

  • First run: We collect ALL items that are younger (attribute used is the change date) than the chosen retention (importantly, this could mean that not ALL items are taken).
  • Following runs: We collect ALL items that have been created or modified (again, attribute used is the change date) since the previous run.
  • Retention processing: Happens at the chosen time interval and removes all items where the change date became older than the chosen retention.

This retention type is particularly useful when you want to make sure you don’t store content for longer than the required retention time, which can be important for legal reasons.

Starting with Veeam Backup for Microsoft Office 365 v3, you can also leverage a “snapshot-based” retention type option. Within the repository settings, v3 offers two options to choose from: Item-level retention (existing retention approach) and Snapshot-based retention (new).

Snapshot-based retention works similarly to the image-level backups that many Veeam customers are used to:

  • First run: We collect ALL items no matter what the change date is. Thus, the first backup is an exact copy (snapshot) of an Exchange mailbox / OneDrive account / SharePoint site state as it looks at that point in time.
  • Following runs: We collect ALL new items that have been created or modified (attribute used here is the change date) since the previous run. Which means that the backup represents again an exact copy (snapshot) of the mailbox/site/folder state as it looks at that point in time.
  • Retention processing: During clean-up, we will remove all items belonging to snapshots of mailbox/site/folder that are older than the retention period.

Retention is a global setting per repository. Also note that once you set your retention option, you will not be able to change it.

Other enhancements

As Microsoft released new major versions of both Exchange and SharePoint, we have added support for Exchange 2019 and SharePoint 2019. We have also made a change to the interface and now support internet proxies. This was already possible in previous versions by changing the XML configuration; however, starting with Veeam Backup for Microsoft Office 365 v3, it is an option within the GUI. As an extra, you can even configure an internet proxy for any of your Veeam Backup for Microsoft Office 365 remote proxies. All of these new options are also available via PowerShell and the RESTful API for all the automation lovers out there.


On the point of license capabilities, we have added two new options as well:

  • Revoking an unneeded license is now available via PowerShell (see the sketch after this list)
  • Service Providers can gather license and repository information per tenant via PowerShell and the RESTful API and create custom reports
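A sketch of the first option using the product’s PowerShell module; treat the cmdlet names and parameters as assumptions and verify them against the documentation of your installed version (the tenant and user names are placeholders):

# Connect to the local Veeam Backup for Microsoft Office 365 server
Connect-VBOServer

# Find the licensed user in the organization and revoke the unneeded license
$org = Get-VBOOrganization -Name 'yourtenant.onmicrosoft.com'
$user = Get-VBOLicensedUser -Organization $org | Where-Object { $_.UserName -like 'departed.user@*' }
Remove-VBOLicensedUser -User $user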

To keep a clean view on the Veeam Backup for Microsoft Office 365 console, Service Providers can now give organizations a custom name.


Based upon feature requests, starting with Veeam Backup for Microsoft Office 365 v3, it is possible to exclude or include specific OneDrive for Business folders per job. This feature is available via PowerShell or RESTful API. Go to the What’s New page for a full list of all the new capabilities in Veeam Backup for Microsoft Office 365.


This article was supplied by our service partner : veeam.com

Why Simplified Security Awareness Training Matters for MSPs and SMBs

In a recent report by the firm 451 Research, 62 percent of SMBs reported having a security awareness training program in place for their employees, with half being “homegrown” training courses. The report also found that most complained their programs were difficult to implement, track, and manage.

Like those weights in the garage you’ve been meaning to lift or the foreign language textbook you’ve been meaning to study, even our most well-intentioned efforts flounder if we’re not willing to put to use the tools that can help us achieve our goals.

So it goes with cybersecurity training. If it’s cumbersome to deploy and manage, or isn’t able to clearly display its benefits, it will be cast aside like so many barbells and Spanish-language dictionaries. But unfortunately, until now, centralized management and streamlined workflows across client sites have eluded the security awareness training industry.

The Importance of Effective Security Awareness Training

The effectiveness of end user cybersecurity training in preventing data breaches and downtime has been demonstrated repeatedly. Webroot’s own research found security awareness training cut clicks on phishing links by 70 percent, when delivered with regularity. And according to the 2018 Data Breach Investigation Report by Verizon, 93 percent of all breaches were the result of social engineering attacks like phishing.

With the average cost of a breach at around $3.62 million, low-overhead and effective solutions should be in high demand. But while 76 percent of MSPs reported using some type of security awareness tool, many still rely on in-house solutions that are siloed from the rest of their cybersecurity monitoring and reporting.

“MSPs should consider security awareness training from vendors with cybersecurity focus and expertise, and who have deep visibility and insights into the changing threat landscape,” says 451 Research Senior Analyst Aaron Sherrill.

“Ideally, training should be integrated into the overall security services delivery platform to provide a unified and cohesive approach for greater efficacy.”

Simple Security Training is Effective Security Training

Security awareness training that integrates with other cybersecurity solutions—like DNS and endpoint protection—is a good first step in making sure the material isn’t brushed aside like other implements of our best intentions.

Global management of security awareness training—the ability to initiate, monitor, and report on the effectiveness of these programs from a single pane of glass across all of your customers—is the next.

When MSPs can roll out a simulated phishing campaign or training course to one, many, or all client sites across the globe with only a few clicks, they save time and money in management overhead and are more likely to offer it as a service to their clients. Everyone wins.

With a console that delivers intuitive monitoring of click-through rates for phishing campaigns or completion rates for courses like compliance training, across all client sites, management is simplified. And easily exportable phishing and campaign reports help drive home a client’s progress.

“Automation and orchestration are the force multipliers MSPs need to keep up with today’s threats and provide the best service possible to their clients,” says Webroot SVP of Product Strategy and Technology Alliances Chad Bacher.

So as a growing number of MSPs begin to offer security awareness training as a part of their bundled services, and more small and medium-sized businesses are convinced of its necessity, choosing a product that’s easy to implement and manage becomes key.

Otherwise, the tool that could save a business from a breach becomes just another cobwebbed weight bench waiting for its day.


This article was provided by our service partner : webroot.com

cybersecurity

7 Critical, and Often Overlooked, Ways to Improve Your Cybersecurity

What you don’t know can, and will, hurt you. Cybersecurity is now at the forefront of business IT needs. If you ignore it, it won’t go away; even worse, your customers will look elsewhere for the services you’re not providing. It’s time to face the music. I recently sat down to chat with Chris Loehr, Executive Vice President of Solis Security, who specializes in cybersecurity incident response.

Chris has experience conducting forensic work on cyberattacks. He works with MSPs day in and day out and sees first-hand the mistakes commonly made all the time. Here are the tips he shared with us on how to wise up about cybersecurity:

Know Your Power

Your tools, specifically your remote monitoring and management (RMM) tool, are extremely powerful. While an RMM tool can be used for its intended purpose, allowing you to work on multiple machines at the same time, it can also be used maliciously to attack several companies at once. This makes MSPs an ideal target, since attackers can gain access to an entire customer base in a relatively short amount of time versus attacking companies individually. And unfortunately, in some cases, businesses never recover. You need to ensure that your RMM is secure.

Don’t Blindly Trust Your Providers

You should hold yourself responsible and perform due diligence on your key vendors/service providers. Your customers trust you. The vendors you work with are an extension of you and the services you provide. Ensuring that your vendors are doing the right things makes it easier for you to also do right by your customers. You need to educate your customers on what threats could impact them, what you do or do not cover, and provide the appropriate solutions. In doing so, you can be the trusted service provider they believe you are. And in the long run, this level of earned trust translates directly to customer retention.

Invest the Time to Truly Know Your Customers

Disaster should not be the first time you learn about your customers and their operations. You need to know ahead of time which critical applications and files need to be backed up, and they might not be the obvious ones. Too often after disaster strikes, you find out you didn’t back up something essential to the customer’s business because you didn’t know about it or its importance. A business impact assessment (BIA) should be performed annually for each monthly recurring revenue (MRR) customer.

Give Your Best Customers Some Love

When disaster strikes, the best customers usually will be the most upset and most willing to pursue legal action. Even though everything appears to be going great, you don’t know what may be happening behind the scenes. Having crucial conversations with decision makers is key to your ongoing success. Ensure these conversations include topics around cybersecurity to help protect them, as well as yourself.

Don’t Be Cybersecurity Insurance Ignorant

Cybersecurity coverage is not the same as an auto insurance or health insurance policy. Filing a claim does not make your premiums go up. Be especially careful when deciding what coverages to waive. To get lower premiums, companies sometimes waive cyberextortion coverage. However, this type of coverage pays for a ransom, should you be in a situation to require one. Even though you might have enough money in the bank to pay it, keep in mind that you are still responsible for operational expenses as well (like payroll).

Doing a risk assessment is helpful to understand where you and your customers stand and in the future could also become a tool for the insurance industry to help underwrite policies.

Realize That Your Contracts Aren’t a Magic Shield

This is the biggest weakness of many MSPs. Anyone can sue you regardless of your contract. You need to know when certain scenarios will negate your liability limitations. Often, MSPs rely on only one attorney to assist in creating their contracts; it’s always best to get a second opinion. We highly advise having a litigation attorney look at your contracts. Also, take into consideration the laws of any other states you operate in and how they impact your contracts.

Prepare for a Disaster

As the saying goes, “If you fail to plan, you’re planning to fail.” Not planning for a disaster could quite literally put you out of business or set you back a couple of years. Your backup solution is the ultimate piece that will save your business, so it has to be more than rock solid. Test it and test it again. Backing up data is the first step, but being able to restore from the backup is the true measure of success. The worst-case scenario is having to tell your customer that you lost all the files that were previously backed up. A one-size-fits-all backup solution might not work for every customer.


This article was provided by our service partner : connectwise.com