How MSPs Can Reduce Their Security Risk

While technology improves our lives in so many ways, it certainly isn’t free from drawbacks. And one of the biggest drawbacks is the risk of cyberattacks—a risk that’s escalating every day.

To reduce the increasing risk of cyberattacks—to your customers and your MSP business—it’s essential to put protocols in place to strengthen your internal security (we often refer to this as ‘getting your house in order’) and protect your clients. The truth is, your customers automatically assume that security is integrated into the price of their contract. That means you need to educate them on the subject, or risk falling short of their (potentially unrealistic) expectations.

What’s more, this is a prime opportunity to offer additional services—and increase revenue.

“You don’t want to deliver security services and not have the client invest in those services,” explains George Mach, Founder and CEO of Apex IT Group. “It would impact your MSP in a negative way.”

In our Path to Success Security Spotlight, I sat down with George Mach to discuss how you can define, identify, and reduce your level of risk, and boost revenue as a result. Here are just a few of our tips.

Understand Your Risk

The first step to reducing risk and providing Security-as-a-Service is understanding the current state of your MSP’s security.

“If you don’t know your own gaps or have good security hygiene in your own MSP, it’s really hard to deliver world-class security services to your client,” Mach says.

As an MSP, you have access to a wealth of sensitive information about your clients, including their passwords, addresses, and names. As such, it’s crucial that your MSP is fully protected. Even the smallest data breach could cause your clients to lose trust in you—damaging your reputation and costing you their business.

Trust, Train & Protect Your House

To protect your MSP (and by extension, your clients), Mach recommends following three simple steps.

First, make sure that you only hire trustworthy people. Of course, it isn’t always easy to spot a wolf in sheep’s clothing, but there are a few measures you can take to safeguard your organization against bad actors. During the hiring process, this could include conducting a background check and verifying a candidate’s education and employment history. You can also consider creating new onboarding policies and asking employees to sign agreements that go on file, holding them accountable to specific standards.

Secondly, it’s important to train everyone at your organization on how to detect potential scammers—including staff in non-technical positions. As part of this training, you may also want to conduct a security skills assessment and record that it has taken place. That way, should the worst happen and a client decides to sue following a security breach, you can prove the measures your company took to try to prevent it—helping protect your reputation.

“The goal is to be in a defensible position if something were to happen,” Mach says.

Thirdly, it’s essential to enforce technical, physical, and administrative controls at your organization. Firewalls and endpoint protection are a must. Investing in swipe cards or biometric scanners can also help you strengthen your protection by helping you identify every person who enters your building. And to reduce your legal risk, don’t overlook the importance of nondisclosure agreements (NDAs) and business associate agreements (BAAs).

Follow the Framework

Once you’ve increased security at your MSP, you can start thinking about how to offer Security-as-a-Service. Following the National Institute of Standards and Technology (NIST) Cybersecurity Framework is a good place to start. Its five core functions are: identify, protect, detect, respond, and recover.
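As a rough aide-mémoire, the five functions can be laid out in a simple map. The example activities under each are illustrative MSP tasks, not taken from the framework itself:

```python
# The five NIST CSF functions, mapped to illustrative (not official)
# examples of what an MSP might do under each one.
NIST_CSF = {
    "identify": "inventory client assets and assess risk",
    "protect":  "patching, endpoint protection, access controls",
    "detect":   "log monitoring and alerting",
    "respond":  "incident response runbooks",
    "recover":  "tested backups and restore procedures",
}

# The functions are intended to be worked in this order, repeatedly.
assert list(NIST_CSF) == ["identify", "protect", "detect",
                          "respond", "recover"]
```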

By following this framework, your company can turn security into a competitive advantage. But that’s only possible if you communicate it properly to your clients.

Throughout conversations with your clients, it’s crucial to gain an understanding of their security priorities and the metrics they use to determine their success. Once you’ve identified these factors, you can establish risk thresholds that are closely aligned with your client’s risk tolerance.

Benchmarking your clients’ level of risk against industry standards and using a weighted scoring system to rank it from high to low can make it easier to communicate the value of your services to them—and the impact you’ll have on their business.
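To make that concrete, here is a minimal sketch of a weighted scoring system. The category names, weights, and thresholds are hypothetical, not an industry standard; you would tune them to your client’s risk tolerance:

```python
# Hypothetical weighting scheme: each risk is rated 1-5 per category,
# and categories contribute unequally to the final score.
RISK_WEIGHTS = {"likelihood": 0.4, "impact": 0.4, "exposure": 0.2}

def risk_score(ratings):
    """Combine 1-5 ratings into a weighted score, then bucket it."""
    score = sum(RISK_WEIGHTS[k] * ratings[k] for k in RISK_WEIGHTS)
    if score >= 4.0:
        return score, "high"
    if score >= 2.5:
        return score, "medium"
    return score, "low"

# 0.4*5 + 0.4*4 + 0.2*3 = 4.2, which lands in the "high" bucket.
score, band = risk_score({"likelihood": 5, "impact": 4, "exposure": 3})
```

A ranked list like this gives clients a clear before/after picture when you re-score after remediation.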

Measure Risk Reduction—Then Market It

You can use two approaches to measure risk reduction.

The quantitative approach, which is more technical, considers an asset’s value, its exposure factor (how much of that value a given incident would destroy, factoring in things like how often the server is left unattended and whether it sits in a protected environment), and the loss expectancy, which combines that single-incident loss with the expected rate of occurrence of various risks. Taking all these factors into account, you can price your services more accurately, and your clients can make a more informed decision about whether to live with a risk or do something to mitigate it.
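These are the standard quantitative risk formulas (single loss expectancy and annualized loss expectancy), which can be sketched as follows. The dollar figures and rates are hypothetical:

```python
def single_loss_expectancy(asset_value, exposure_factor):
    # Expected loss from a single incident affecting this asset.
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle, annual_rate_of_occurrence):
    # Expected loss per year across repeated incidents.
    return sle * annual_rate_of_occurrence

# Hypothetical server: worth $50,000; a breach destroys 30% of its
# value; this class of incident occurs roughly twice every five years.
sle = single_loss_expectancy(50_000, 0.3)       # 15000.0
ale = annualized_loss_expectancy(sle, 0.4)      # 6000.0
```

The annualized figure is useful because it gives the client a yearly number to weigh directly against the yearly cost of your mitigation service.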

The qualitative approach is less complex. It uses available data to calculate the likelihood of a risk. You can then suggest countermeasures to ensure protection.

Whichever approach you choose, explaining your findings and suggested solutions in layman’s terms and backing up your claims with evidence helps to build trust with your clients.

It’s this trust that will persuade clients to invest in your security service—and remain satisfied customers for years to come.


This article was provided by our service partner: ConnectWise.

DNS Security – Your New Secret Weapon in The Fight Against Cybercrime

It’s time to use the internet to your security advantage. Did you know more than 91% of malware uses DNS to gain command and control, exfiltrate data, or redirect web traffic?

When internet requests are resolved by a recursive DNS service, that resolution point is the perfect place to check for and block malicious or inappropriate domains and IPs. DNS is one of the most valuable sources of data within an organization; it should be mined regularly and cross-referenced against threat intelligence. It’s easier to do than you might think. Security teams that are not monitoring DNS for indications of compromise are missing an important opportunity.
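As a toy sketch of what “mining DNS against threat intelligence” can look like, consider the example below. The log format and blocklisted domains are invented for illustration; a real deployment would feed resolver logs into a SIEM and use a commercial or community threat-intel feed:

```python
# Hypothetical blocklist, standing in for a real threat-intel feed.
BLOCKLIST = {"malware-c2.example", "phish.example"}

def flag_suspicious(dns_log_lines):
    """Scan 'client domain' log lines for blocklisted lookups."""
    hits = []
    for line in dns_log_lines:
        client, _, domain = line.partition(" ")
        # Normalize: trailing dots and case don't matter in DNS names.
        if domain.strip().lower().rstrip(".") in BLOCKLIST:
            hits.append((client, domain.strip()))
    return hits

logs = [
    "10.0.0.5 intranet.example",
    "10.0.0.9 malware-c2.example",
]
print(flag_suspicious(logs))  # [('10.0.0.9', 'malware-c2.example')]
```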

Don’t believe us? New analysis shows widespread DNS protection could save organizations as much as $200 billion in losses every year. Check out the full report, “The Economic Value of DNS Security,” recently published by the Global Cyber Alliance (GCA). According to their findings, DNS firewalls could prevent between $19 billion and $37 billion in annual losses in the US, and between $150 billion and $200 billion globally. That’s a lot of bang for your buck. Translation: this simple addition to the security stack is an easy way to prevent roughly one-third of total losses due to cybercrime.

About Cisco Umbrella

Cisco Umbrella uses the internet’s infrastructure to stop threats over all ports and protocols before they reach your endpoints or network. Using statistical and machine learning models to uncover both known and emerging threats, Umbrella proactively blocks connections to malicious destinations at the DNS and IP layers. And because DNS is a protocol used by all devices that connect to the internet, you simply point your DNS to the Umbrella global network, and any device that joins your network is protected. So when your users roam, your network stays secure.


This article was provided by our service partner: Cisco Umbrella.

Offering Security Services: Should You Build, Buy, or Partner?

One size DOES NOT fit all.

Let’s consider the ‘build, buy, partner’ framework for security services, which offers three very different approaches you could take. There is no absolute right or wrong way, only what is best for your business. Explore the pros and cons of each so you can determine the right way for you.

Building Security

Utilizing this approach means you create/develop the solution with the resources you own, control, or contract to.

Strategy of Things gives us deeper insight into what is required to pull this off.

When to consider this approach:

  • You have the requisite skill sets and resources to do it
  • You can offer security faster, cheaper, and at lower risk
  • This is a strategic competence you own or want to own
  • There is strategic knowledge or critical intellectual property to protect
  • You are fully committed throughout the company

Pros

  • Most product control
  • Most profit opportunity

Cons

  • Longest time to market
  • High development cost

The Challenge: Hiring security resources to monitor 24/7 (emphasis on 24/7)

According to PayScale, the average salary for a cybersecurity analyst is $75,924. How much revenue would you need to earn to bring on just one analyst? Security talent is a hot commodity. Even if you can hire analysts, keeping them will be a challenge when you’re competing against bigger businesses, or firms that specialize in cybersecurity, that can pay more and offer better benefits.
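A quick back-of-the-envelope calculation makes the point. The salary is the PayScale figure above; the overhead multiplier and target service margin are illustrative assumptions, not figures from the article:

```python
# Revenue needed to fund one security analyst (rough sketch).
salary = 75_924                # PayScale average, from the text above
overhead_multiplier = 1.3      # benefits, taxes, tooling (assumed)
target_margin = 0.30           # desired service margin (assumed)

loaded_cost = salary * overhead_multiplier
revenue_needed = loaded_cost / (1 - target_margin)

print(f"Loaded cost:    ${loaded_cost:,.0f}")
print(f"Revenue needed: ${revenue_needed:,.0f}")
```

Under these assumptions, one analyst implies roughly $140K of new annual security revenue, and that is before you account for 24/7 coverage requiring more than one person.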

Buying Security

This approach could also be referred to as ‘acquiring,’ where you seek to acquire another company that specializes in a particular area (for example, cybersecurity or physical security) to bring the missing skill set you’re looking for under your umbrella.

Let’s take a look at the requirements needed for this approach courtesy of Strategy of Things.

When to consider this approach:

  • You don’t have the skills or resources to build, maintain, and support security
  • There is some or all of a solution in the marketplace and no need to reinvent the wheel
  • Someone can do it faster, better, and cheaper
  • You want to focus limited resources in other areas that make more sense
  • Time is critical, and you want to get to market faster
  • There is a solution in the marketplace that gives you mostly what you want

Pros

  • Shortened time to market
  • Acquiring skill sets

Cons

  • Can be costly to acquire
  • Integration takes time

The Challenge: The MSP M&A market is hot, AND it’s a seller’s market
Jim Schleckser, CEO of the Inc. CEO Project and author of Great CEOs Are Lazy, states in an article on Inc.com, “Many acquisitions fail to live up to their financial or performance expectations because the acquiring company hasn’t done its proper homework.” Take the time to do some serious research on how to take advantage of a seller’s market and find the expertise you need for M&A success. We have a couple of webinars to help you get started:

Bonus for ConnectWise partners: We’re fully invested in helping you throughout the M&A process every step of the way, including technical assistance post-acquisition from our M&A specialist.

Partnering for Security

Strategy of Things gives us insight into this approach. Cybersecurity is a specialized field that many vendors cannot address on their own; they must buy or license those capabilities for their solution.

The company allies itself with a complementary solution or service provider to integrate and offer a joint solution. This option lets both companies enter a market neither could enter alone, access specialized knowledge neither has, and get to market faster.

Companies consider this approach when neither party has the full offering to get to market on their own.

Pros

  • Shortest time to market
  • Each party brings specialized knowledge or capabilities, including technology, market access, and credibility
  • It lowers the cost, time, and risk to pursue new opportunities
  • Conserves resources
  • Opportunity to learn the skill set before building something of your own

Cons

  • Least control
  • Integration cost
  • Shared gross margins

Many vendors today offer much more flexibility, making partnering an easy choice. A great example is Perch Security threat detection and response.

No matter where you are in your security journey, Perch enables you to choose your level of involvement:

  • Fully managed by Perch SOC

If you’re more of a ‘hands-off, I trust you to do your thing’ type of person/company, then you have the freedom to sit back and relax while the Perch team does their thing. They’ll only involve you when absolutely necessary and equip you with the tools to look good in front of the customer while they do all the heavy lifting.

  • Mostly managed by Perch SOC, your team reviewing or jumping in on specific issues

If you want to be aware on a high level of what’s going on in the world of threat detection but not to the level of fully geeking out, then this level of involvement is right up your alley and 100% possible with the Perch team. Get updates on the things you care about without being inundated with the things you don’t.

  • Fully manage alerts yourself

If you want to geek out on threat reports side by side with the Perch flock, you’re more than welcome to. If you have a person on your team who’s interested in security but not able to dedicate 100% of their time to it, feel free to carve out a portion of their daily responsibilities to work hand-in-hand with the Perch team. Should things change along the way and you need more or less involvement, you’re free to leverage the Perch team as needed.

Conclusion

Security isn’t solved by one single tool. It’s an ongoing journey that requires continuous assessment and refinement. Everyone has to start somewhere, but keep in mind that the starting line for you might look different than the starting line for someone else, and that’s okay. Carefully review the options at your disposal and determine which path is best for you.

“The journey of a thousand miles begins with a single step.” – Lao Tzu


This article was provided by our service partner: ConnectWise.


Decoding the vCenter Server Lifecycle: Update and Versioning Explained

Have you ever wondered what the difference is between a vCenter Server update and a patch? Or between an upgrade and a migration? Why don’t some vCenter Server versions align? Keep reading for the answers!

Version Numbering

The first thing you should understand is vCenter Server versioning. When reviewing your vCenter Server versions, you may see many different references to versions or builds.

One of the first places you will notice a version identifier is in our release notes. Here you will see the product version listed as vCenter Server 6.7 Update 2a and the build number listed as 13643870.


Once you have upgraded or deployed your vCenter Server you will see version identifiers such as 6.7.0.31000 listed in the VMware Appliance Management Interface (VAMI). You will also see a build number, such as 13643870.

If you review the version information within your vSphere Client you will see the version listed as 6.7.0 and the build as 13639324.

The reason you see differing versions among these places is that each shows something different: the release notes show the vCenter Server build and full release name; the VAMI shows the vCenter Server Appliance version in addition to the build; and the vSphere Client shows the vCenter Server version alongside the build of the vSphere Client itself.

KB2143838 is a great resource that will explain the breakdown of versioning and builds for all vCenter Server versions.
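Pulling the example numbers above together, the relationships look like this:

```python
# The same vCenter Server release, as reported in three places
# (numbers taken from the 6.7 Update 2a example above).
release = {
    "release_notes":  {"name": "vCenter Server 6.7 Update 2a",
                       "build": 13643870},
    "vami":           {"version": "6.7.0.31000", "build": 13643870},
    "vsphere_client": {"version": "6.7.0", "build": 13639324},
}

# The server-side builds match; the vSphere Client reports its own,
# different build, which is why the numbers don't all line up.
assert release["release_notes"]["build"] == release["vami"]["build"]
assert release["vsphere_client"]["build"] != release["vami"]["build"]
```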

Now that we have explained the way versioning works, let’s jump into the different scenarios where VMware will increment a version.

vCenter Server Updates and Patches

What is a vCenter Server update, and how does it differ from a patch?

A vCenter Server Update is one that applies to the vCenter Server application. An update can include new features, bug fixes or updates for additional functionality. vCenter Server updates will have a dedicated set of release notes and will be hosted on the my.vmware.com download portal.

A vCenter Server patch is much more streamlined, as patches are associated with operating system and security-level updates. There are no application-related changes; patches can target Photon OS, the Postgres DB, Java versions, and any other supporting Linux libraries on the vCenter Server Appliance.

A vCenter Server patch also has no dedicated release notes, as these are part of the rolled-up VMware vCenter Server Appliance Photon OS Security Patches. Patches are also not stored on the my.vmware.com download portal but on the alternate VMware Patch Portal. It is also very important to note, as listed in the release notes, that these should not be used for any deployment or upgrade. The only reason the vCenter Server ISOs are hosted on the VMware Patch Portal is for restoring your vCenter Server Appliance if using the built-in File-Based Backup. Patches can also only be applied within one and the same update release. For example, if you are currently on 6.7 Update 1, you would not be able to patch directly to 6.7 Update 2b; you would first update to 6.7 Update 2a and then patch to 6.7 Update 2b.
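The patch-sequencing rule can be sketched as a simple check. The release naming here is simplified for illustration, not VMware’s official notation:

```python
# A patch release can only be applied on top of the update release it
# belongs to, never straight from an earlier update.
def can_patch(current, target, patch_of):
    """patch_of maps a patch release to the update release it requires."""
    return patch_of.get(target) == current

# Hypothetical mapping reflecting the 6.7 example above.
PATCH_OF = {"6.7U2b": "6.7U2a"}

assert not can_patch("6.7U1", "6.7U2b", PATCH_OF)  # must update first
assert can_patch("6.7U2a", "6.7U2b", PATCH_OF)     # now the patch applies
```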

Now that we have explained the differences between a vCenter Server update and patch we can review the differences between an upgrade and migration.

vCenter Server Upgrades and Migrations

In its simplest form a vCenter Server Upgrade is defined as doing a major version change between vCenter Server Appliance versions. If you are running the vCenter Server Appliance 6.5  in your environment and move to vCenter Server Appliance 6.7 this would be considered an upgrade.

A vCenter Server migration is defined as doing a major version change between vCenter Server for Windows and the vCenter Server Appliance. If you are running vCenter Server for Windows 6.5 and move to the vCenter Server Appliance 6.7 this would be considered a migration. It is not supported to do a migration between the same major version as it consists of both a change of platform and an upgrade together.

In vSphere 6.5 and 6.7 an upgrade or migration of the vCenter Server is not completed in place. During the upgrade process a brand new appliance of the newer version is deployed, and based on the settings defined the data is exported from the old version and imported into the new one retaining the same FQDN, IP, Certs and UUIDs.

A back-in-time upgrade restriction is when you are unable to upgrade from a particular 6.5 release to a particular 6.7 release. For example, upgrading from vSphere 6.5 Update 2d to vSphere 6.7 Update 1 is not supported due to the back-in-time nature of vSphere 6.7 Update 1: vSphere 6.5 Update 2d contains code and security fixes that are not in vSphere 6.7 Update 1 and might cause regression. When performing vCenter Server upgrades and migrations, it’s also very important to pay attention to unsupported upgrade paths, which are normally restricted because they would be back-in-time upgrades. Note, too, that just because two releases share a release date does not mean they are compatible. The best resource for supported upgrade paths is the vCenter Server release notes section titled Upgrade Notes for this Release.
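The back-in-time idea can be sketched as a comparison of when each release’s code line was cut. The lineage dates below are placeholders invented for illustration; always confirm real paths in the release notes:

```python
from datetime import date

# Hypothetical "code lineage" dates, for illustration only.
LINEAGE = {
    "6.5U2d": date(2019, 1, 15),
    "6.7U1":  date(2018, 10, 16),
    "6.7U2":  date(2019, 4, 11),
}

def is_back_in_time(source, target):
    # Upgrading to a target whose code line predates the source would
    # silently drop fixes the source already has.
    return LINEAGE[target] < LINEAGE[source]

assert is_back_in_time("6.5U2d", "6.7U1")      # unsupported path
assert not is_back_in_time("6.5U2d", "6.7U2")  # supported in this sketch
```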

Conclusion

Versioning of a complex product can be difficult, but hopefully you now have a better understanding of what these numbers mean. If you have any questions feel free to post a comment below or check out any of the resources linked.


This article was provided by our service partner: VMware.

How to create a Failover Cluster in Windows Server 2019

This article gives a short overview of how to create a Microsoft Windows Failover Cluster (WFC) with Windows Server 2019 or 2016. The result will be a two-node cluster with one shared disk and a cluster compute resource (computer object in Active Directory).


Preparation

It does not matter whether you use physical or virtual machines, just make sure your technology is suitable for Windows clusters. Before you start, make sure you meet the following prerequisites:

Two Windows Server 2019 machines with the latest updates installed. The machines should have at least two network interfaces: one for production traffic and one for cluster traffic. In my example, there are three network interfaces (an additional one for iSCSI traffic). I prefer static IP addresses, but you can also use DHCP.


Join both servers to your Microsoft Active Directory domain and make sure that both servers see the shared storage device available in disk management. Don’t bring the disk online yet.

The next step before we can really start is to add the “Failover clustering” feature (Server Manager > add roles and features).

Reboot your server if required. As an alternative, you can also use the following PowerShell command:

Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

After a successful installation, the Failover Cluster Manager appears in the start menu in the Windows Administrative Tools.

After you installed the Failover-Clustering feature, you can bring the shared disk online and format it on one of the servers. Don’t change anything on the second server. On the second server, the disk stays offline.

After a refresh of the disk management, you can see something similar to this:

Server 1 Disk Management (disk status online)


Server 2 Disk Management (disk status offline)

Failover Cluster readiness check

Before we create the cluster, we need to make sure that everything is set up properly. Start the Failover Cluster Manager from the start menu and scroll down to the management section and click Validate Configuration.

Select the two servers for validation.

Run all tests. There is also a description of which solutions Microsoft supports.

After you made sure that every applicable test passed with the status “successful,” you can create the cluster by using the checkbox Create the cluster now using the validated nodes, or you can do that later. If you have errors or warnings, you can use the detailed report by clicking on View Report.

Create the cluster

If you choose to create the cluster by clicking on Create Cluster in the Failover Cluster Manager, you will be prompted again to select the cluster nodes. If you use the Create the cluster now using the validated nodes checkbox from the cluster validation wizard, then you will skip that step. The next relevant step is to create the Access Point for Administering the Cluster. This will be the virtual object that clients will communicate with later. It is a computer object in Active Directory.

The wizard asks for the Cluster Name and IP address configuration.

As a last step, confirm everything and wait for the cluster to be created.

The wizard will add the shared disk to the cluster automatically by default. If you did not configure the disk beforehand, you can also add it afterwards.

As a result, you can see a new Active Directory computer object named WFC2019.

You can ping the new computer to check whether it is online (if you allow ping on the Windows firewall).

As an alternative, you can create the cluster also with PowerShell. The following command will also add all eligible storage automatically:

New-Cluster -Name WFC2019 -Node SRV2019-WFC1, SRV2019-WFC2 -StaticAddress 172.21.237.32

You can see the result in the Failover Cluster Manager in the Nodes and Storage > Disks sections.

The picture shows that the disk is currently used as a quorum. As we want to use that disk for data, we need to configure the quorum manually. From the cluster context menu, choose More Actions > Configure Cluster Quorum Settings.

Here, we want to select the quorum witness manually.

Currently, the cluster is using the disk configured earlier as a disk witness. Alternative options are the file share witness or an Azure storage account as witness. We will use the file share witness in this example. There is a step-by-step how-to on the Microsoft website for the cloud witness. I always recommend configuring a quorum witness for proper operations, so running without a witness is not really an option for production.

Just point to the path and finish the wizard.

After that, the shared disk is available for use for data.

Congratulations, you have set up a Microsoft failover cluster with one shared disk.

Next steps and backup

One of the next steps would be to add a role to the cluster, which is out of scope of this article. As soon as the cluster contains data, it is also time to think about backing up the cluster. Veeam Agent for Microsoft Windows can back up Windows failover clusters with shared disks. We also recommend “entire system” backups of the cluster, which capture the operating systems of the cluster members as well. This speeds up the restore of a failed cluster node, as you don’t need to hunt for drivers and the like during a restore.


This article was provided by our service partner: Veeam.


Is Being Secure and Being Compliant the Same Thing?

Oftentimes, when I ask a partner if they’re offering security to their SMB customers, their answer revolves around consulting on compliance. Verticals like healthcare, financial, government, and retail are low-hanging fruit for security revenue opportunities because compliance is a requirement of being in business.

However, being secure and being compliant are NOT the same. Did you know that you can be compliant without being fully secure? While being compliant increases data protection and keeps organizations from paying hefty fines, it’s simply not enough. If compliance is all you’re relying on to keep you and your customers safe, you’re sorely mistaken.

Being compliant is like following a strict nutritionist-approved diet to stay healthy.

While that’s a good practice, and it will certainly help, it’s also very important that you know your family’s medical history and how that could impact your health in the future (your risks) so you can make necessary, and maybe even lifesaving decisions. If you ignored your risks and only stuck to a good diet, you might be blindsided at a doctor’s appointment to learn that you have a certain hereditary disease.

“If we had only caught this sooner…”

Many MSPs are approaching security when an incident occurs, while others are being proactive to meet their customer’s compliance requirements. They’re not thinking of the broader picture of risk. You need to fully understand your risks to ensure that you and your customers are secure. Don’t wait until disaster strikes.

Let’s dive into the differences between the two phrases.

Being Compliant

What does it mean to be compliant? Is that enough?

Regulatory compliance means an organization is aware of, and takes steps to comply with, relevant laws, policies, and regulations, such as PCI, HIPAA, GDPR, and DFARS.

We’ve seen several companies make news headlines over security breaches. In such cases, the courts will determine whether there was negligence in adhering to regulations and taking the legally required steps to properly protect data. If the company is found not to be compliant, there are heavy financial consequences.

How much are we talking? Yahoo’s loss of 3 billion user accounts knocked an estimated $350 million off its sale price.

Needless to say, there’s a big incentive for companies to cover the basics when it comes to security. However, if you stop at just being compliant, you’re essentially only doing the bare minimum, whatever is legally required.

It’s a starting point.

Being Secure

The next step is to ensure security. Go above and beyond.

According to Cisco, “Cybersecurity is the practice of protecting systems, networks, and programs from digital attacks. These cyberattacks are usually aimed at accessing, changing, or destroying sensitive information; extorting money from users; or interrupting normal business processes.”

When hackers attack your business, it’s not just your business that’s at stake. By getting access to your database, hackers gain access to all your customers. So, we could consider ensuring cybersecurity as a social responsibility (not just a legal one).

We believe in doing business this way, going above and beyond, and have adopted the NIST Cybersecurity Framework. It consists of standards, guidelines, and best practices to manage cybersecurity-related risks as an ongoing practice.

As leaders in the IT industry, we’re all constantly looking to others who are doing things well and subscribe to best practices in several other areas of business. Cybersecurity is no different.

The framework encourages identifying your risks proactively, so you can take the necessary steps in reducing and managing your risks.

How to Assess Risks

We know what you’re thinking, “Easier said than done, though, right? Just another thing to add to my to-do list.”

This process doesn’t have to be overwhelming. Knowing where to start is half the battle. Smart security offerings start with a risk assessment that allows you to proactively identify security risks across your entire business as well as your customers, not just on their network. The result is an easy-to-understand, customized risk report showing your customer their most critical risks and recommendations for how to remediate those risks.

Next Steps

The bottom line: be compliant AND secure. Start by understanding your legal compliance responsibilities to protect yourself and your customers during a disaster. Then, take it a step further—assess and fully understand your security risks and develop a plan to reduce your risks.


This article was provided by our service partner: ConnectWise.


10 Things To Know About vSphere Certificate Management

With security and compliance on the minds of IT staff everywhere, vSphere certificate management is a huge topic. Decisions made can seriously affect the effort it takes to support a vSphere deployment, and often create vigorous discussions between CISO and information security staff, virtualization admins, and enterprise PKI/certificate authority admins. Here are ten things that organizations should consider when choosing a path forward.

1. Certificates are about encryption and trust

Certificates are based on public key cryptography, a technique developed by mathematicians in the 1970s, both in the USA and Britain. These techniques allow someone to create two mathematical “keys,” public and private. You can share the public key with another person, who can then use it to encrypt a message that can only be read by the person with the private key.

When we think about certificates we often think of the little padlock icon in our browser, next to the URL. Green and locked means safe, and red with an ‘X’ and a big “your connection is not private” warning means we’re not safe, right? Unfortunately, it’s a lot more complicated than that. A lot of things need to be true for that icon to turn green.

When we’re using HTTPS the communications between our web browser and a server are sent across a connection protected with Transport Layer Security (TLS). TLS is the successor to Secure Sockets Layer, or SSL, but we often refer to them interchangeably. TLS has four versions now:

  • Version 1.0 has vulnerabilities, is insecure, and shouldn’t be used anymore.
  • Version 1.1 doesn’t have the vulnerabilities of 1.0, but it relies on the MD5 and SHA-1 algorithms, which are both insecure.
  • Version 1.2 adds AES cryptographic ciphers that are faster, removes some insecure ciphers, and switches to SHA-256. It is the current standard.
  • Version 1.3 removes weak ciphers and adds features that increase the speed of connections. It is the upcoming standard.
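If you control the client code, the advice above (refuse TLS 1.0 and 1.1) can be enforced directly. Here is a sketch using Python’s ssl module; most TLS libraries expose a similar setting:

```python
import ssl

# A default context verifies certificates against trusted CAs, which
# is the "trust" half of the story discussed in this section.
ctx = ssl.create_default_context()

# Refuse the insecure protocol versions outright.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

assert ctx.verify_mode == ssl.CERT_REQUIRED
```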

Using TLS means that your connection is encrypted, even if the certificates are self-signed and generate warnings. Signing a certificate means that someone vouches for that certificate, in much the same way as a trusted friend would introduce someone new to you. A self-signed certificate simply means that it’s vouching for itself, not unlike a random person on the street approaching you and telling you that they are trustworthy. Are they? Maybe, but maybe not. You don’t know without additional information.

To get the green lock icon you need to trust the certificate by trusting who signed it. This is where a Certificate Authority (CA) comes in. Certificate Authorities are usually specially selected and subject to rigorous security protocols, because they’re trusted implicitly by web browsers. Having a CA sign a certificate means you inherit the trust from the CA. The browser lock turns green and everything seems secure.

Having a third-party CA sign certificates can be expensive and time-consuming, especially if you need a lot of them (and nowadays you do). As a result, many enterprises create their own CAs, often using the Microsoft Active Directory Certificate Services, and teach their web browsers and computers to trust certificates signed by that CA by importing the “root CA certificates” into the operating systems.
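
The mechanics are the same whether the CA is Microsoft’s or a minimal OpenSSL one. As a sketch (hypothetical names, no real-world hardening): a private root CA signs a server’s certificate signing request, and anyone who trusts ca.crt then trusts server.crt.

```shell
# 1. Create a private root CA (self-signed, as all roots are).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout ca.key -out ca.crt -subj "/CN=Example Root CA"

# 2. Create a server key and a certificate signing request (CSR).
openssl req -newkey rsa:2048 -nodes \
  -keyout server.key -out server.csr -subj "/CN=vcenter.example.com"

# 3. The CA signs the CSR, producing a certificate that chains to the CA.
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out server.crt -days 365

# 4. Anyone who trusts ca.crt can now verify server.crt.
openssl verify -CAfile ca.crt server.crt
```

Importing ca.crt into an operating system’s trust store is exactly the “import the root CA certificates” step described above.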

2. vSphere uses certificates extensively

All communications inside vSphere are protected with TLS. These are mainly:

  • ESXi certificates, issued to the management interfaces on all the hosts.
  • “Machine” SSL certificates used to protect the human-facing components, like the web-based vSphere Client and the SSO login pages on the Platform Service Controllers (PSCs).
  • “Solution” user certificates used to protect the communications of other products, like vRealize Operations Manager, vSphere Replication, and so on.

The vSphere documentation has a full list. The important point here is that in a fully-deployed cluster the number of certificates can easily reach into the hundreds.

3. vSphere has a built-in certificate authority

Managing hundreds of certificates can be quite a daunting task, so VMware created the VMware Certificate Authority (VMCA). It is a supported and trusted component of vSphere that runs on a PSC or on the vCenter VCSA in embedded mode. Its job is to automate the management of certificates used inside a vSphere deployment. For example, when a new host is attached to vCenter, it asks you to verify the thumbprint of the host’s ESXi certificate; once you confirm it’s correct, the VMCA automatically replaces that certificate with one it issues itself. A similar thing happens when additional software, like vRealize Operations Manager or VMware AppDefense, is installed.

The VMCA is part of the vCenter infrastructure and is trusted in the same way vCenter is. It’s patched when you patch your PSCs and VCSAs. It is sometimes criticized as not being a full-fledged CA but it is just-enough-CA, purpose-built to serve a vSphere deployment securely, safely, and in an automated way to make it easy to be secure.

4. There are four main ways to manage certificates

  1. First, you can just use a self-signed CA certificate. The VMCA is fully-functional once vCenter is installed and automatically creates root certificates to use for signing ESXi, machine, and solution certificates. You can download the root certificates from the main vCenter web page and import them into your operating systems to establish trust and turn the browser lock icon green for both vCenter and ESXi. This is the easiest solution but it requires you to accept a self-signed CA root certificate. Remember, though, that we trust vCenter, so we trust the VMCA.
  2. Second, you can make the VMCA an intermediate, or subordinate, CA. We do not recommend this (see below).
  3. Third, you can disable the VMCA and use custom certificates for everything. To do this you can ask the certificate-manager tool to generate Certificate Signing Requests (CSRs) for everything. You take those to a third-party CA, have them signed, and then install them all manually. This is time-consuming and error-prone.
  4. Fourth, you can use “hybrid” mode to replace the machine certificates (the human-facing certificates for vCenter) with custom certificates, and let the VMCA manage everything else with its self-signed CA root certificates. All users of vCenter will then see valid, trusted certificates. If the virtualization admin team desires, they can import the CA root certificates on just their workstations; then they’ll have green lock icons for ESXi, too, as well as warnings if a certificate is untrusted. This is the recommended solution for nearly all customers because it balances the desire for vCenter security with the realities of automation and management.

5. Enterprise CAs are self-signed, too

“But wait,” you might be thinking, “we are trying to get rid of self-signed certificates, and you’re advocating their use.” True, but think about it this way: enterprise CAs are self-signed, too, and you have decided to trust them. Now you simply have two CAs, and while that might seem like a problem it really means that a separation exists between the operators of the enterprise CA and the virtualization admin team, for security, organizational politics, staff workload management, and even troubleshooting. Because we trust vCenter, as the core of our virtualization management, we also implicitly trust the VMCA.

6. Don’t create an intermediate CA

You can create an intermediate CA, also known as a subordinate CA, by issuing the VMCA a root CA certificate capable of signing certificates on behalf of the enterprise CA and using the Certificate Manager to import it. While this has applications, it is generally regarded as unsafe because anybody with access to that CA root key pair can now issue certificates as the enterprise CA. We recommend maintaining the security & trust separation between the enterprise CA and the VMCA and not using the intermediate CA functionality.

7. You can change the information on the self-signed CA root certificate

Using the Certificate Manager utility you can generate new VMCA root CA certificates with your own organizational information in them, and the tool will automate the reissue and replacement of all the certificates. This is a popular option with the Hybrid mode, as it makes the self-signed certificates customized and easy to identify. You can also change the expiration dates if you dislike the defaults.

8. Test, test, test!

The only way to truly be comfortable with these types of changes is to test them first. The best way to test is with a nested vSphere environment, where you install a test VCSA as well as ESXi inside a VM. This is an incredible way to test vSphere, especially if you shut it down and take a snapshot of it. Then, no matter what you do, you can restore the test environment to a known good state. See the links at the end for more information on nested ESXi.

Another interesting option is using the VMware Hands-on Labs to experiment with this. Not only are the labs a great way to learn about VMware products year-round, they’re also great for trying unscripted things out in a low-risk way. Try the new vSphere 6.7 Lightning Lab!

9. Make backups

When the time comes to do this for real, make sure you have a good file-based backup of your vCenter and PSCs, taken through the VAMI interface. Additionally, the Certificate Manager utility backs up the old certificates so you can restore them if things go wrong (it keeps only one set, though, so think that through). And if things do not go as planned or tested, know that these operations are fully supported by VMware Global Support Services, who can walk you through resolving any problem you might encounter.

10. Know why you’re doing this

In the end the choice of how you manage vSphere certificates depends on what your goals are.

  • Do you want that green lock icon?
  • Does everybody need the green lock icon for ESXi, or just the virtualization admin team?
  • Do you want to get rid of self-signed certificates, or are you more interested in establishing trust?
  • Why do you trust vCenter as the core of your infrastructure but not a subcomponent of vCenter?
  • What is the difference in trust between the enterprise self-signed CA root and the VMCA self-signed CA root?
  • Is this about compliance, and does the compliance framework truly require custom CA certificates?
  • What is the cost, in staff time and opportunity cost, of ignoring the automated certificate solution in favor of manual replacements?
  • Does the solution decrease or increase risk, and why?

Whatever you decide know that thousands of organizations across the world have asked the same questions, and out of the discussions have come good understandings of certificates & trust as well as better relations between security and virtualization admin teams.


This article was provided by our service partner: VMware


Veeam : Set up vSphere RBAC for self-service backup portal

Wouldn’t it be great to empower VMware vSphere users to take control of their backups and restores with a self-service portal? The good news is you can as of Veeam Backup & Replication 9.5 Update 4. This feature is great because it eliminates operational overhead and allows users to get exactly what they want when they want it. It is a perfect augmentation for any development team taking advantage of VMware vSphere virtual machines.

Introducing vSphere role-based access control (RBAC) for self-service

vSphere RBAC allows backup administrators to provide granular access to vSphere users using the vSphere permissions already in place. If a user does not have permissions to virtual machines in vCenter, they will not be able to access them via the Self-Service Backup Portal.

Additionally, to make things even simpler for vSphere users, they can create backup jobs for their VMs based on pre-created job templates. They will not have to deal with advanced settings they are not familiar with (this is a really big deal, by the way). vSphere users can then monitor and control the backup jobs they have created using the Enterprise Manager UI, and restore their backups as needed.

Setting up vSphere RBAC for self-service

Setting up vSphere RBAC for self-service could not be easier. In the Enterprise Manager configuration screen, a Veeam administrator simply navigates to “Configuration – Self-service,” then adds the vSphere user’s account, specifies a backup repository, sets a quota, and selects the delegation method. These permissions can also be applied at the group level for easier administration.

Besides VMware vCenter roles, vSphere privileges or vSphere tags can be used as the delegation method. vSphere tags are one of my favorite methods, since tags can be applied to grant either a very broad or a very granular set of permissions. The ability to use vSphere tags is especially helpful for new VMware vSphere deployments, since it provides quick, easy, and secure access for virtual machine users.

For example, I could set vSphere tags at a vSphere cluster level if I had a development cluster, or I could set vSphere tags on a subset of virtual machines using a tag such as “KryptonSOAR Development” to only provide access to development virtual machines.

After setting the Delegation Mode, the user account can be edited to select the vSphere tag, vCenter server role, or VM privilege. From the Edit screen, the repository and quota can also be changed at any time if required.

Using RBAC for VMware vSphere

After this very simple configuration, vSphere users simply need to log into the Self-Service Backup Portal to begin protecting and recovering their virtual machines. The URL can be shared across the entire organization: https://<EnterpriseManagerServer>:9443/backup, thus giving everyone a very convenient way of managing their workloads. Job creation and viewing in the Self-Service Backup Portal is extremely user friendly, even for those who have never backed up a virtual machine before! When creating a new backup job, users will only see the virtual machines they have access to, which makes the solution more secure and less confusing.

There is even a helpful dashboard, so users can monitor their backup jobs and the amount of backup storage they are consuming.

Enabling vSphere users to back up and restore virtual machines empowers them in new ways, especially when it comes to DevOps and rapid development cycles. Best of all, Veeam’s self-service implementation leverages the VMware vSphere permissions framework organizations already have in place, reducing operational complexity for everyone involved.

When it comes to VM recovery, there are also many self-service options available. Users can independently navigate to the “VMs” tab to perform full VM restores. Again, the process is very easy: the user decides whether to preserve the original VM (if Veeam detects it) or overwrite its data, selects the desired restore point, and specifies whether the VM should be powered on afterward. Three simple actions and the data is on its way.

In addition to that, the portal makes file- and application-level recovery very convenient too. There are quite a few scenarios available, and what’s really great is that users can navigate the file system tree via the file explorer. They can use a search engine with advanced filters for both indexed and non-indexed guest OS file systems. Under the hood, Veeam decides exactly how the operation should be handled, and the user never has to think about it, so the sought-after document won’t slip through. The cherry on top is that Veeam provides recovery of application-aware SQL and Oracle backups, making your DBAs happy without giving them too many rights to the virtual environment.


This article was provided by our service partner: Veeam


Managed Services 101: Where MSPs Are Now, and Where They’re Going

Managed services are becoming an increasingly integral part of the business IT ecosystem. With technology advancing at a rapid pace, many companies find it cheaper and more effective to outsource some or all of their IT processes and functions to an expert provider, known as a managed service provider (MSP).

Unlike traditional on-demand IT outsourcing, MSPs proactively support a company’s IT needs. And with the IT demands of businesses becoming ever more complex, reliance on MSPs is likely to increase exponentially over the next few years.

What Is a Managed Service Provider?

An MSP manages a company’s IT infrastructure on a subscription-based model. MSPs offer continual support that can include the setup, installation, and configuration of a company’s IT assets.

Managed services can supplement a company’s internal IT department and provide services that may not be available in-house. And since the MSP is continuously supporting the company’s IT infrastructure and systems, rather than simply stepping in from time to time to put out a fire, these services can provide a level of peace of mind that other models just can’t match.

What’s the Difference Between Managed Services and the Break/Fix Model?

Unlike on-demand outsourced IT services, managed services play an ongoing and harmonious role in the running of an organization.

Due to the rapidly changing nature of the digital landscape, it’s no longer sustainable to fix problems after the damage is done. Yet the break/fix model is still a common way of dealing with IT-related problems. It’s like waiting to repair a minor leak until after the pipe has burst.

On-demand providers are usually brought in to perform a specific service (like fixing a broken server), and they bill the customer for the time and materials it takes to provide that service. MSPs, on the other hand, charge a recurring fee to provide an ongoing service. This service is defined in the service-level agreement (SLA), a contract drawn up between the MSP and the customer that defines both the type and standards of services the MSP will be expected to provide. This monthly recurring revenue (MRR) can provide a lucrative and reliable revenue stream.

What Services Can an MSP Provide?

MSPs provide systems management solutions, centrally managing a company’s IT assets. This encompasses everything from software support and maintenance to cloud computing and data storage. These solutions can be especially valuable for small- and medium-sized businesses (SMBs) that may not have robust internal IT departments, especially when it comes to hard-to-find skills.

Network Monitoring and Maintenance

From slow loading times to outages, inefficient and faulty systems can cost companies a fortune in lost productivity. MSPs reduce the likelihood of such delays by keeping an eye on the network for slow or failing elements. By using a remote monitoring and management (RMM) tool, the MSP will automatically be notified the moment an issue arises, allowing them to identify and fix the problem as quickly as possible. That means shorter downtime, so the customer’s tech—and the business needs it supports—can get up and running again in no time.

Software Support and Maintenance

MSPs provide software support and maintenance to ensure the smooth running of all business applications that a customer needs on a daily basis. This includes ensuring that the programs used to maintain the network are fully functional. Overall, the goal is to provide an uninterrupted experience so that work can carry on as normal.

Data Backup and Recovery

Data loss can be catastrophic, so companies need a system in place to back up and recover their data, should the worst happen. MSPs can handle the backup process, protecting companies against accidental deletion and file corruption as well as more malicious threats (like cyberattacks). They can also support a company’s overall disaster recovery plan, ensuring the business can always recover its data in the event of an emergency.

Data Storage

MSPs can also help their clients optimally store their data. While hard data storage was once standard, new forms of remote data storage are growing in popularity, including cloud computing. MSPs can enable seamless data migration if the client decides to switch storage options.

Cloud Computing

Cloud computing encompasses more than just remote data storage options. Various IT applications and resources can be accessed via online cloud service platforms, with providers charging a pay-as-you-go fee for access. Whether the client relies on a public, private, or hybrid cloud platform, MSPs can help them navigate the cloud successfully, streamlining their workflows, storing data successfully, and more.

Challenges Facing MSPs

While there are numerous benefits to the managed services model, including the recurring revenue and the ability to build long-lasting relationships with clients, this model isn’t without its challenges.

Shifts in Sales and Marketing

Until recently, many MSPs have grown organically through referrals and word of mouth. But increasingly, companies are seeing the value of the ‘master MSP’ model, which offers valuable infrastructure to other MSPs in areas where their own expertise may be lacking. As a result, we see a trend toward inorganic growth.

In this market, MSPs can stand out from the crowd by investing their efforts in product management. Prioritizing the needs of the customer is a simple way to create value around your services. This goes beyond the basic standards outlined in the service-level agreement—it’s about showing you go above and beyond.

Keeping Existing Customers

With new differentiators emerging, MSPs have to adjust their approach to keep customers happy.

One way they can set themselves apart is by having business conversations very early on in the relationship. By gaining a clear understanding of the outcomes the client wants to achieve and working with them to come to an agreement surrounding expectations, MSPs can establish themselves as a partner rather than simply a provider. This will allow you to adjust your approach to match their needs—like driving for profit rather than acting as a cost center.

Best-in-class MSPs also rarely find themselves arguing with customers over whether something is covered. That’s because they’re fully aligned on what the MSP is responsible for. Whatever the SLA covers, it’s the MSP’s job to ensure their client understands. This requires regular conversations to confirm everyone is on the same page and satisfied. Documenting these conversations also allows MSPs to streamline any disagreements by showing what has been discussed and agreed upon. The goal is to become a trusted advisor that they turn to for guidance.

A next-level approach to proactivity is also a plus. This includes setting up alerts to rapidly identify issues and putting new measures in place to ensure mistakes don’t repeat themselves.

Transitioning toward a more risk-based approach, bolstered by a security-first mindset, will go a long way, opening doors for both more recurring and non-recurring revenue streams as clients seek out your consultation. The best MSPs are experts at assessing their customers’ environment and developing a tailored plan that covers governance, compliance, and ongoing risk management. What’s more, they adjust their approach regularly to reflect the ever-changing security needs of their clients—offering more opportunities to showcase their value and up their revenue stream.

The Impact of Cloud Computing

While MSP revenue is rising, profit margins are actually shrinking. Part of the problem is that MSPs are expanding their portfolio of services while still relying on their former pricing structures. But many MSPs are making the problem worse by choosing the wrong cloud service vendor to partner with, which can significantly impact an MSP’s already-shrinking profit margins.

Some cloud service vendors are simply not priced to support an MSP. And with the pace at which cloud technology is evolving, a process that was cutting-edge when an MSP implemented it can become inefficient within a matter of weeks. It’s vital that MSPs be open to change if a vendor becomes unsustainable, or risk their own services becoming unsustainable as a result.

You should also be ready to address any cloud-related questions and concerns that clients raise. Cloud technology is still relatively new, and it can be confusing, so overcoming any uncertainties will play a key role in an MSP’s ability to act as a valuable advisor to its clients.

How MSPs Use Software

Just as they bring value to their customers by streamlining workflows and protecting networks, MSPs need internal frameworks that increase efficiency.

Professional services automation (PSA) tools allow MSPs to streamline and automate repetitive administrative tasks. This saves time and cuts costs, all while enabling greater scalability.

MSPs can also utilize remote monitoring and management (RMM) tools. These automate the patching process and allow you to reduce time spent on resolving tickets, essentially doing more with less. Not only does this enable a more proactive approach, but it puts time back into the support team’s day to focus on other things.

Needless to say, MSPs should be easily accessible to their clients via technology. Remote desktop support makes that possible. With remote control over a client’s systems, MSPs can rapidly solve issues from wherever they are—without interfering with the end user’s access. This reduces customer downtime, allowing repairs and IT support to happen quietly in the background.

What the Future Holds for MSPs

The role of MSPs is changing. Keeping an eye on these emerging trends can help you anticipate shifting client expectations—and stay ahead of the curve.

Arguably the largest area of opportunity for MSPs is cybersecurity—and that service is only going to grow more valuable. Even as awareness increases and data privacy regulations tighten, the number and complexity of cyberattacks continue to rise. Between 2017 and 2018, the annual cost of combating cybercrime rose by 12%—from $11.7 million to a record high of $13 million—so establishing yourself as a cybersecurity expert now will put you in good stead for the future.

The Internet of Things (IoT) is also going to have a major impact on MSPs. Keeping up with the sheer volume of devices being used on a day-to-day basis requires a dynamic approach to systems management. This includes being proactive about establishing best practices and security guidelines around new technology, such as the use of voice assistants.

Business intelligence offerings are also likely to grow in demand. With the use of IT in business at an all-time high, the amount of data being generated is enormous. But data is only numbers without someone to effectively consolidate and analyze it to extract actionable insights. Providing easy access to reports and KPIs that clearly demonstrate areas for improvement will allow MSPs to not only stay relevant in this data-driven market but become leaders in their field.


This article was provided by our service partner: connectwise.com


Cloud Services in the Crosshairs of Cybercrime

It’s a familiar story in tech: new technologies and shifting preferences raise new security challenges. One of the most pressing challenges today involves monitoring and securing all of the applications and data currently undergoing a mass migration to public and private cloud platforms.

Malicious actors are motivated to compromise and control cloud-hosted resources because they can gain access to significant computing power through this attack vector. These resources can then be exploited for a number of criminal money-making schemes, including cryptomining, DDoS extortion, ransomware and phishing campaigns, spam relay, and for issuing botnet command-and-control instructions. For these reasons—and because so much critical and sensitive data is migrating to cloud platforms—it’s essential that talented and well-resourced security teams focus their efforts on cloud security.

The cybersecurity risks associated with cloud infrastructure generally mirror the risks that have been facing businesses online for years: malware, phishing, etc. A common misconception is that compromised cloud services have a less severe impact than more traditional, on-premises compromises. That misunderstanding leads some administrators and operations teams to cut corners when it comes to the security of their cloud infrastructure. In other cases, there is a naïve belief that cloud hosting providers will provide the necessary security for their cloud-hosted services.

Although many of the leading cloud service providers are beginning to build more comprehensive and advanced security offerings into their platforms (often as extra-cost options), cloud-hosted services still require the same level of risk management, ongoing monitoring, upgrades, backups, and maintenance as traditional infrastructure. For example, in a cloud environment, egress filtering is often neglected. But when it is implemented, egress filtering can foil a number of attacks on its own, particularly when combined with a proven web classification and reputation service. The same is true of management access controls, two-factor authentication, patch management, backups, and SOC monitoring. Web application firewalls, backed by commercial-grade IP reputation services, are another often-overlooked layer of protection for cloud services.
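
As an illustration, a default-deny outbound policy on a Linux cloud host might look like the following iptables sketch. The allowed ports and services are assumptions; tailor the rules to what the workload legitimately needs, and note that running these requires root privileges:

```shell
# Default-deny egress: drop all outbound traffic unless explicitly allowed.
iptables -P OUTPUT DROP

# Allow replies on connections the host already has open.
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Allow only DNS lookups and outbound HTTPS (e.g., patching, APIs).
iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
iptables -A OUTPUT -p tcp --dport 443 -j ACCEPT
```

With rules like these in place, a compromised host that tries to open an outbound spam relay or command-and-control channel on an unexpected port is simply blocked.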

Many midsize and large enterprises are starting to look to the cloud for new wide-area network (WAN) options. Here again lies a great opportunity to enhance the security of your WAN while also achieving the scalability, flexibility, and cost-saving outcomes that are often the primary goals of such projects. When selecting these types of solutions, it’s important to look at the integrated security options offered by vendors.

Haste makes waste

Another danger of the cloud is the ease and speed of deployment. This can lead to rapidly prototyped solutions being brought into service without adequate oversight from security teams. It can also lead to complacency, as the knowledge that a compromised host can be replaced in seconds may lead some to invest less in upfront protection. But it’s critical that all infrastructure components are properly protected and maintained because attacks are now so highly automated that significant damage can be done in a very short period of time. This applies both to the target of the attack itself and in the form of collateral damage, as the compromised servers are used to stage further attacks.

Finally, the utilitarian value of the cloud is also what leads to its higher risk exposure, since users are focused on a particular outcome (e.g., storing and processing large volumes of data at high speed). That solutions-based focus may not accommodate a comprehensive end-to-end security strategy. The dynamic pressures of business must be supported by newer and more dynamic approaches to security, ensuring that the speed of application deployment can be matched by automated SecOps deployments and engagements.

Time for action

If you haven’t recently had a review of how you are securing your resources in the cloud, perhaps now is a good time. Consider what’s allowed in and out of all your infrastructure and how you retake control. Ensure that the solutions you are considering have integrated, actionable threat intelligence for another layer of defense in this dynamic threat environment.


This article was provided by our service partner: webroot.com