
Decoding the vCenter Server Lifecycle: Update and Versioning Explained

Have you ever wondered what the difference is between a vCenter Server update and a patch? Or between an upgrade and a migration? Why don’t some vCenter Server versions align? Keep reading for the answers!

Version Numbering

The first thing you should understand is vCenter Server versioning. When reviewing your vCenter Server version, you may see many different references to versions or builds.

One of the first places you will notice a version identifier is in our release notes. Here you will see the product version listed as vCenter Server 6.7 Update 2a and the build number listed as 13643870.


Once you have upgraded or deployed your vCenter Server you will see version identifiers such as 6.7.0.31000 listed in the VMware Appliance Management Interface (VAMI). You will also see a build number, such as 13643870.

If you review the version information within your vSphere Client you will see the version listed as 6.7.0 and the build as 13639324.

You will see differing versions in these places because the release notes show the full release name and the vCenter Server build, the VAMI shows the vCenter Server Appliance version in addition to the build, and the vSphere Client shows the vCenter Server version along with the build of the vSphere Client itself.

KB2143838 is a great resource that will explain the breakdown of versioning and builds for all vCenter Server versions.
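If you want to check these values programmatically, PowerCLI exposes the vCenter Server version and build on the connection object. Below is a minimal sketch, assuming the VMware.PowerCLI module is installed; the FQDN and credentials are placeholders:

# Read the vCenter Server version and build via PowerCLI (FQDN and credentials are placeholders)
Import-Module VMware.PowerCLI
$vi = Connect-VIServer -Server 'vcenter.example.com' -Credential (Get-Credential)
# Returns the vCenter Server name, version (for example 6.7.0), and build number
$vi | Select-Object Name, Version, Build
Disconnect-VIServer -Server $vi -Confirm:$false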

Now that we have explained the way versioning works, let’s jump into the different scenarios where VMware will increment a version.

vCenter Server Updates and Patches

What is a vCenter Server update, and how does it differ from a patch?

A vCenter Server update applies to the vCenter Server application itself. An update can include new features, bug fixes, or additional functionality. vCenter Server updates have a dedicated set of release notes and are hosted on the my.vmware.com download portal.

A vCenter Server patch is much more streamlined, as patches are associated with operating system and security-level updates. There are no application-related changes; patches can target Photon OS, the Postgres database, Java versions, and any other supporting Linux libraries on the vCenter Server Appliance.

A vCenter Server patch also has no dedicated release notes, as these are part of the rolled-up VMware vCenter Server Appliance Photon OS Security Patches. Patches are also not hosted on the my.vmware.com download portal but on the separate VMware Patch Portal. It is also very important to note that, as stated in the release notes, these should not be used for any deployment or upgrade. The only reason the vCenter Server ISOs are hosted on the VMware Patch Portal is for restoring your vCenter Server Appliance when using the built-in file-based backup. Patches can also only be applied within one and the same update release. For example, if you are currently on 6.7 Update 1 you cannot patch directly to 6.7 Update 2b; you would first update to 6.7 Update 2a and then patch to 6.7 Update 2b.

Now that we have explained the differences between a vCenter Server update and patch we can review the differences between an upgrade and migration.

vCenter Server Upgrades and Migrations

In its simplest form, a vCenter Server upgrade is defined as a major version change between vCenter Server Appliance versions. If you are running the vCenter Server Appliance 6.5 in your environment and move to the vCenter Server Appliance 6.7, this is considered an upgrade.

A vCenter Server migration is defined as a major version change between vCenter Server for Windows and the vCenter Server Appliance. If you are running vCenter Server for Windows 6.5 and move to the vCenter Server Appliance 6.7, this is considered a migration. A migration within the same major version is not supported, because a migration consists of both a change of platform and an upgrade.

In vSphere 6.5 and 6.7, an upgrade or migration of the vCenter Server is not completed in place. During the upgrade process, a brand-new appliance of the newer version is deployed and, based on the settings defined, the data is exported from the old version and imported into the new one, retaining the same FQDN, IP, certificates, and UUIDs.

A back-in-time upgrade restriction is when you are unable to upgrade from a given 6.5 release to a given 6.7 release. For example, upgrading from vSphere 6.5 Update 2d to vSphere 6.7 Update 1 is not supported due to the back-in-time nature of vSphere 6.7 Update 1: vSphere 6.5 Update 2d contains code and security fixes that are not in vSphere 6.7 Update 1 and might cause regression. When performing vCenter Server upgrades and migrations, it is very important to pay attention to unsupported upgrade paths, which are normally restricted because they are back-in-time upgrades. It is also important to note that just because two releases might share the same release date does not mean they will be compatible. The best resource for supported upgrade paths is the vCenter Server release notes section titled Upgrade Notes for this Release.

Conclusion

Versioning of a complex product can be difficult, but hopefully you now have a better understanding of what these numbers mean. If you have any questions feel free to post a comment below or check out any of the resources linked.


This article was provided by our service partner: VMware

How to create a Failover Cluster in Windows Server 2019

This article gives a short overview of how to create a Microsoft Windows Failover Cluster (WFC) with Windows Server 2019 or 2016. The result will be a two-node cluster with one shared disk and a cluster compute resource (computer object in Active Directory).


Preparation

It does not matter whether you use physical or virtual machines, just make sure your technology is suitable for Windows clusters. Before you start, make sure you meet the following prerequisites:

Two Windows 2019 machines with the latest updates installed. The machines have at least two network interfaces: one for production traffic, one for cluster traffic. In my example, there are three network interfaces (one additional for iSCSI traffic). I prefer static IP addresses, but you can also use DHCP.
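If you prefer to set the static addresses with PowerShell rather than the network control panel, here is a minimal sketch; the interface aliases and IP addresses are placeholders:

# Assign a static IP to the cluster-traffic NIC (interface alias and addresses are placeholders)
New-NetIPAddress -InterfaceAlias 'Cluster' -IPAddress 10.0.10.11 -PrefixLength 24
# DNS servers are normally only needed on the production NIC
Set-DnsClientServerAddress -InterfaceAlias 'Production' -ServerAddresses 172.21.237.10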


Join both servers to your Microsoft Active Directory domain and make sure that both servers see the shared storage device available in disk management. Don’t bring the disk online yet.
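The domain join itself can also be done from Windows PowerShell; the domain name below is a placeholder:

# Join the node to the Active Directory domain and reboot (domain name is a placeholder)
Add-Computer -DomainName 'corp.example.com' -Credential (Get-Credential) -Restart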

The next step before we can really start is to add the “Failover clustering” feature (Server Manager > add roles and features).

Reboot your server if required. As an alternative, you can also use the following PowerShell command:

Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

After a successful installation, the Failover Cluster Manager appears in the start menu in the Windows Administrative Tools.

After you have installed the Failover Clustering feature, you can bring the shared disk online and format it on one of the servers. Don’t change anything on the second server. On the second server, the disk stays offline.
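If you prefer PowerShell for this step, the sketch below brings the disk online, initializes it, and formats it. Run it on the first node only; the disk number and volume label are placeholders:

# Bring the shared disk online and format it (first node only; disk number and label are placeholders)
Set-Disk -Number 1 -IsOffline $false
Set-Disk -Number 1 -IsReadOnly $false
Initialize-Disk -Number 1 -PartitionStyle GPT   # skip if the disk is already initialized
New-Partition -DiskNumber 1 -UseMaximumSize -AssignDriveLetter | Format-Volume -FileSystem NTFS -NewFileSystemLabel 'ClusterData'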

After a refresh of the disk management, you can see something similar to this:

Server 1 Disk Management (disk status online)


Server 2 Disk Management (disk status offline)

Failover Cluster readiness check

Before we create the cluster, we need to make sure that everything is set up properly. Start the Failover Cluster Manager from the start menu, scroll down to the Management section, and click Validate Configuration.

Select the two servers for validation.

Run all tests. There is also a description of which solutions Microsoft supports.

After you have made sure that every applicable test passed with the status “successful,” you can create the cluster right away using the Create the cluster now using the validated nodes checkbox, or you can do that later. If you have errors or warnings, you can use the detailed report by clicking View Report.
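The same readiness check can be run from PowerShell. A minimal sketch, using the node names from the New-Cluster example later in this article:

# Validate the nodes and shared storage from PowerShell (a validation report is written to an .htm file)
Test-Cluster -Node SRV2019-WFC1, SRV2019-WFC2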

Create the cluster

If you choose to create the cluster by clicking on Create Cluster in the Failover Cluster Manager, you will be prompted again to select the cluster nodes. If you use the Create the cluster now using the validated nodes checkbox from the cluster validation wizard, then you will skip that step. The next relevant step is to create the Access Point for Administering the Cluster. This will be the virtual object that clients will communicate with later. It is a computer object in Active Directory.

The wizard asks for the Cluster Name and IP address configuration.

As a last step, confirm everything and wait for the cluster to be created.

By default, the wizard will add the shared disk to the cluster automatically. If you have not configured it yet, you can also add it afterwards.

As a result, you can see a new Active Directory computer object named WFC2019.

You can ping the new computer to check whether it is online (if you allow ping on the Windows firewall).

As an alternative, you can create the cluster also with PowerShell. The following command will also add all eligible storage automatically:

New-Cluster -Name WFC2019 -Node SRV2019-WFC1, SRV2019-WFC2 -StaticAddress 172.21.237.32

You can see the result in the Failover Cluster Manager in the Nodes and Storage > Disks sections.

The picture shows that the disk is currently used as a quorum. As we want to use that disk for data, we need to configure the quorum manually. From the cluster context menu, choose More Actions > Configure Cluster Quorum Settings.

Here, we want to select the quorum witness manually.

Currently, the cluster is using the disk configured earlier as a disk witness. Alternative options are a file share witness or an Azure storage account as a cloud witness. We will use the file share witness in this example. There is a step-by-step how-to on the Microsoft website for the cloud witness. I always recommend configuring a quorum witness for proper operations, so the last option (no witness at all) is not really an option for production.

Just point to the path and finish the wizard.
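The same quorum change can be made with PowerShell when the FailoverClusters module is available; the UNC path below is a placeholder:

# Switch the cluster quorum to a file share witness (UNC path is a placeholder)
Set-ClusterQuorum -FileShareWitness '\\fileserver\WFC2019-witness'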

After that, the shared disk is available for use for data.

Congratulations, you have set up a Microsoft failover cluster with one shared disk.

Next steps and backup

One of the next steps would be to add a role to the cluster, which is out of scope for this article. As soon as the cluster contains data, it is also time to think about backing up the cluster. Veeam Agent for Microsoft Windows can back up Windows failover clusters with shared disks. We also recommend doing backups of the “entire system” of the cluster. This also backs up the operating systems of the cluster members and helps to speed up the restore of a failed cluster node, as you don’t need to search for drivers, etc. during a restore.


This article was provided by our service partner: Veeam


Is Being Secure and Being Compliant the Same Thing?

Oftentimes, when I ask a partner if they’re offering security to their SMB customers, their answer revolves around consulting on compliance. Verticals like healthcare, finance, government, and retail are low-hanging fruit for security revenue opportunities because compliance is a requirement of being in business.

However, being secure and being compliant are NOT the same. Did you know that you can be compliant without being fully secure? While being compliant increases data protection and keeps organizations from paying hefty fines, it’s simply not enough. If that’s what you’re relying on to keep you and your customers safe, you’d be sorely mistaken.

Being compliant is like following a strict nutritionist-approved diet to stay healthy.

While that’s a good practice, and it will certainly help, it’s also very important that you know your family’s medical history and how that could impact your health in the future (your risks) so you can make necessary, and maybe even lifesaving decisions. If you ignored your risks and only stuck to a good diet, you might be blindsided at a doctor’s appointment to learn that you have a certain hereditary disease.

“If we had only caught this sooner…”

Many MSPs approach security only when an incident occurs, while others are proactive just to meet their customers’ compliance requirements. They’re not thinking of the broader picture of risk. You need to fully understand your risks to ensure that you and your customers are secure. Don’t wait until disaster strikes.

Let’s dive into the differences between the two phrases.

Being Compliant

What does it mean to be compliant? Is that enough?

Regulatory compliance describes the goal that organizations aspire to achieve in their efforts to ensure they are aware of and take steps to comply with relevant laws, policies, and regulations, such as PCI, HIPAA, GDPR, and DFARS.

We’ve heard of several companies making news headlines over security breaches. The courts will determine whether there was negligence in adhering to regulations and taking the legally required steps to properly protect their data. If a company is found not to be compliant, there are heavy financial consequences.

How much are we talking? Yahoo’s loss of 3 billion user accounts cost them an estimated $350 million off their sales price.

Needless to say, there’s a big incentive for companies to cover the basics when it comes to security. However, if you stop at just being compliant, you’re essentially only doing the bare minimum, whatever is legally required.

It’s a starting point.

Being Secure

The next step is to ensure security. Go above and beyond.

According to Cisco, “Cybersecurity is the practice of protecting systems, networks, and programs from digital attacks. These cyberattacks are usually aimed at accessing, changing, or destroying sensitive information; extorting money from users; or interrupting normal business processes.”

When hackers attack your business, it’s not just your business that’s at stake. By getting access to your database, hackers gain access to all your customers. So, we could consider ensuring cybersecurity as a social responsibility (not just a legal one).

We believe in doing business this way, going above and beyond, and have adopted the NIST Cybersecurity Framework. It consists of standards, guidelines, and best practices to manage cybersecurity-related risks as an ongoing practice.

As leaders in the IT industry, we’re all constantly looking to others who are doing things well and subscribe to best practices in several other areas of business. Cybersecurity is no different.

The framework encourages identifying your risks proactively, so you can take the necessary steps in reducing and managing your risks.

How to Assess Risks

We know what you’re thinking, “Easier said than done, though, right? Just another thing to add to my to-do list.”

This process doesn’t have to be overwhelming. Knowing where to start is half the battle. Smart security offerings start with a risk assessment that allows you to proactively identify security risks across your entire business as well as your customers’, not just on their network. The result is an easy-to-understand, customized risk report showing your customer their most critical risks and recommendations for how to remediate those risks.

Next Steps

The bottom line: be compliant AND secure. Start by understanding your legal compliance responsibilities to protect yourself and your customers during a disaster. Then, take it a step further—assess and fully understand your security risks and develop a plan to reduce your risks.


This article was provided by our service partner: ConnectWise


10 Things To Know About vSphere Certificate Management

With security and compliance on the minds of IT staff everywhere, vSphere certificate management is a huge topic. The decisions made here can seriously affect the effort it takes to support a vSphere deployment, and they often spark vigorous discussions among CISOs and information security staff, virtualization admins, and enterprise PKI/certificate authority admins. Here are ten things organizations should consider when choosing a path forward.

1. Certificates are about encryption and trust

Certificates are based on public key cryptography, a technique developed by mathematicians in the 1970s, both in the USA and Britain. These techniques allow someone to create two mathematical “keys,” public and private. You can share the public key with another person, who can then use it to encrypt a message that can only be read by the person with the private key.

When we think about certificates we often think of the little padlock icon in our browser, next to the URL. Green and locked means safe, and red with an ‘X’ and a big “your connection is not private” warning means we’re not safe, right? Unfortunately, it’s a lot more complicated than that. A lot of things need to be true for that icon to turn green.

When we’re using HTTPS the communications between our web browser and a server are sent across a connection protected with Transport Layer Security (TLS). TLS is the successor to Secure Sockets Layer, or SSL, but we often refer to them interchangeably. TLS has four versions now:

  • Version 1.0 has vulnerabilities, is insecure, and shouldn’t be used anymore.
  • Version 1.1 doesn’t have the vulnerabilities of 1.0, but it uses the MD5 and SHA-1 algorithms, which are both insecure.
  • Version 1.2 adds AES cryptographic ciphers that are faster, removes some insecure ciphers, and switches to SHA-256. It is the current standard.
  • Version 1.3 removes weak ciphers and adds features that increase the speed of connections. It is the upcoming standard.

Using TLS means that your connection is encrypted, even if the certificates are self-signed and generate warnings. Signing a certificate means that someone vouches for that certificate, in much the same way as a trusted friend would introduce someone new to you. A self-signed certificate simply means that it’s vouching for itself, not unlike a random person on the street approaching you and telling you that they are trustworthy. Are they? Maybe, but maybe not. You don’t know without additional information.

To get the green lock icon you need to trust the certificate by trusting who signed it. This is where a Certificate Authority (CA) comes in. Certificate Authorities are usually specially selected and subject to rigorous security protocols, because they’re trusted implicitly by web browsers. Having a CA sign a certificate means you inherit the trust from the CA. The browser lock turns green and everything seems secure.

Having a third-party CA sign certificates can be expensive and time-consuming, especially if you need a lot of them (and nowadays you do). As a result, many enterprises create their own CAs, often using the Microsoft Active Directory Certificate Services, and teach their web browsers and computers to trust certificates signed by that CA by importing the “root CA certificates” into the operating systems.
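If you want to see who is vouching for a particular endpoint, you can read the certificate’s subject and issuer directly. The following Windows PowerShell sketch is an illustration only; the FQDN is a placeholder, and the validation callback accepts any certificate so the handshake succeeds even when the certificate is self-signed:

# Inspect the TLS certificate presented by a vCenter Server (FQDN is a placeholder)
$vc  = 'vcenter.example.com'
$tcp = New-Object System.Net.Sockets.TcpClient($vc, 443)
$ssl = New-Object System.Net.Security.SslStream($tcp.GetStream(), $false, { param($s, $cert, $chain, $errors) $true })
$ssl.AuthenticateAsClient($vc)
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2($ssl.RemoteCertificate)
$cert | Format-List Subject, Issuer, NotAfter, Thumbprint   # Issuer shows which CA signed the certificate
$ssl.Dispose(); $tcp.Close()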

2. vSphere uses certificates extensively

All communications inside vSphere are protected with TLS. These are mainly:

  • ESXi certificates, issued to the management interfaces on all the hosts.
  • “Machine” SSL certificates used to protect the human-facing components, like the web-based vSphere Client and the SSO login pages on the Platform Services Controllers (PSCs).
  • “Solution” user certificates used to protect the communications of other products, like vRealize Operations Manager, vSphere Replication, and so on.

The vSphere documentation has a full list. The important point here is that in a fully-deployed cluster the number of certificates can easily reach into the hundreds.

3. vSphere has a built-in certificate authority

Managing hundreds of certificates can be quite a daunting task, so VMware created the VMware Certificate Authority (VMCA). It is a supported and trusted component of vSphere that runs on a PSC or on the vCenter VCSA in embedded mode. Its job is to automate the management of certificates that are used inside a vSphere deployment. For example, when a new host is attached to vCenter it asks you to verify the thumbprint of the host ESXi certificate, and once you confirm it’s correct the VMCA will automatically replace the certificates with ones issued by the VMCA itself. A similar thing happens when additional software, like vRealize Operations Manager or VMware AppDefense is installed.

The VMCA is part of the vCenter infrastructure and is trusted in the same way vCenter is. It’s patched when you patch your PSCs and VCSAs. It is sometimes criticized as not being a full-fledged CA but it is just-enough-CA, purpose-built to serve a vSphere deployment securely, safely, and in an automated way to make it easy to be secure.

4. There are four main ways to manage certificates

  1. First, you can just use a self-signed CA certificate. The VMCA is fully-functional once vCenter is installed and automatically creates root certificates to use for signing ESXi, machine, and solution certificates. You can download the root certificates from the main vCenter web page and import them into your operating systems to establish trust and turn the browser lock icon green for both vCenter and ESXi (a download-and-import sketch follows this list). This is the easiest solution but it requires you to accept a self-signed CA root certificate. Remember, though, that we trust vCenter, so we trust the VMCA.
  2. Second, you can make the VMCA an intermediate, or subordinate, CA. We do not recommend this (see below).
  3. Third, you can disable the VMCA and use custom certificates for everything. To do this you can ask the certificate-manager tool to generate Certificate Signing Requests (CSRs) for everything. You take those to a third-party CA, have them signed, and then install them all manually. This is time-consuming and error-prone.
  4. Fourth, you can use “hybrid” mode to replace the machine certificates (the human-facing certificates for vCenter) with custom certificates, and let the VMCA manage everything else with its self-signed CA root certificates. All users of vCenter would then see valid, trusted certificates. If the virtualization infrastructure admin team desires they can import the CA root certificates to just their workstations and then they’ll have green lock icons for ESXi, too, as well as warnings if there is an untrusted certificate. This is the recommended solution for nearly all customers because it balances the desire for vCenter security with the realities of automation and management.
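For the first and fourth options, the VMCA root certificates need to be imported into the client operating systems. Below is a minimal Windows PowerShell sketch; it assumes the certificate download link on the vCenter welcome page (/certs/download.zip), that the extracted archive contains .crt files, and an elevated prompt, with the FQDN as a placeholder:

# Download the VMCA root certificates from vCenter and add them to the Windows trusted root store
# (FQDN is a placeholder; if the certificate is not yet trusted, download the zip with a browser
# or use -SkipCertificateCheck on PowerShell 7)
$vc = 'vcenter.example.com'
Invoke-WebRequest -Uri "https://$vc/certs/download.zip" -OutFile "$env:TEMP\vc-roots.zip"
Expand-Archive -Path "$env:TEMP\vc-roots.zip" -DestinationPath "$env:TEMP\vc-roots" -Force
Get-ChildItem "$env:TEMP\vc-roots" -Recurse -Include *.crt | ForEach-Object { Import-Certificate -FilePath $_.FullName -CertStoreLocation Cert:\LocalMachine\Root }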

5. Enterprise CAs are self-signed, too

“But wait,” you might be thinking, “we are trying to get rid of self-signed certificates, and you’re advocating their use.” True, but think about it this way: enterprise CAs are self-signed, too, and you have decided to trust them. Now you simply have two CAs, and while that might seem like a problem it really means that a separation exists between the operators of the enterprise CA and the virtualization admin team, for security, organizational politics, staff workload management, and even troubleshooting. Because we trust vCenter, as the core of our virtualization management, we also implicitly trust the VMCA.

6. Don’t create an intermediate CA

You can create an intermediate CA, also known as a subordinate CA, by issuing the VMCA a root CA certificate capable of signing certificates on behalf of the enterprise CA and using the Certificate Manager to import it. While this has applications, it is generally regarded as unsafe because anybody with access to that CA root key pair can now issue certificates as the enterprise CA. We recommend maintaining the security & trust separation between the enterprise CA and the VMCA and not using the intermediate CA functionality.

7. You can change the information on the self-signed CA root certificate

Using the Certificate Manager utility you can generate new VMCA root CA certificates with your own organizational information in them, and the tool will automate the reissue and replacement of all the certificates. This is a popular option with the Hybrid mode, as it makes the self-signed certificates customized and easy to identify. You can also change the expiration dates if you dislike the defaults.

8. Test, test, test!

The only way to truly be comfortable with these types of changes is to test them first. The best way to test is with a nested vSphere environment, where you install a test VCSA as well as ESXi inside a VM. This is an incredible way to test vSphere, especially if you shut it down and take a snapshot of it. Then, no matter what you do, you can restore the test environment to a known good state. See the links at the end for more information on nested ESXi.

Another interesting option is using the VMware Hands-on Labs to experiment with this. Not only are the labs a great way to learn about VMware products year-round, they’re also great for trying unscripted things out in a low-risk way. Try the new vSphere 6.7 Lightning Lab!

9. Make backups

When the time comes to do this for real, make sure you have a good file-based backup of your vCenter and PSCs using the VAMI interface. Additionally, the Certificate Manager utility backs up the old certificates so you can restore them if things go wrong (it keeps only one set, though, so think that through). If things do not go as planned or tested, know that these operations are fully supported by VMware Global Support Services, who can walk you through resolving any problem you might encounter.

10. Know why you’re doing this

In the end the choice of how you manage vSphere certificates depends on what your goals are.

  • Do you want that green lock icon?
  • Does everybody need the green lock icon for ESXi, or just the virtualization admin team?
  • Do you want to get rid of self-signed certificates, or are you more interested in establishing trust?
  • Why do you trust vCenter as the core of your infrastructure but not a subcomponent of vCenter?
  • What is the difference in trust between the enterprise self-signed CA root and the VMCA self-signed CA root?
  • Is this about compliance, and does the compliance framework truly require custom CA certificates?
  • What is the cost, in staff time and opportunity cost, of ignoring the automated certificate solution in favor of manual replacements?
  • Does the solution decrease or increase risk, and why?

Whatever you decide, know that thousands of organizations across the world have asked the same questions, and out of those discussions have come good understandings of certificates & trust as well as better relations between security and virtualization admin teams.


This article was provided by our service partner: VMware


Veeam : Set up vSphere RBAC for self-service backup portal

Wouldn’t it be great to empower VMware vSphere users to take control of their backups and restores with a self-service portal? The good news is you can as of Veeam Backup & Replication 9.5 Update 4. This feature is great because it eliminates operational overhead and allows users to get exactly what they want when they want it. It is a perfect augmentation for any development team taking advantage of VMware vSphere virtual machines.

Introducing vSphere role-based access control (RBAC) for self-service

vSphere RBAC allows backup administrators to provide granular access to vSphere users using the vSphere permissions already in place. If a user does not have permissions to virtual machines in vCenter, they will not be able to access them via the Self-Service Backup Portal.

Additionally, to make things even simpler for vSphere users, they can create backup jobs for their VMs based on pre-created job templates. They will not have to deal with advanced settings they are not familiar with (this is a really big deal, by the way). vSphere users can then monitor and control the backup jobs they have created using the Enterprise Manager UI, and restore their backups as needed.

Setting up vSphere RBAC for self-service

Setting up vSphere RBAC for self-service could not be easier. In the Enterprise Manager configuration screen, a Veeam administrator simply navigates to “Configuration – Self-service,” then adds the vSphere user’s account, specifies a backup repository, sets a quota, and selects the delegation method. These permissions can also be applied at the group level for enhanced ease of administration.

Besides VMware vCenter roles, vSphere privileges or vSphere tags can be used as the delegation method. vSphere tags are one of my favorite methods, since tags can be applied to reach either a very broad or a very granular set of permissions. The ability to use vSphere tags is especially helpful for new VMware vSphere deployments, since it provides quick, easy, and secure access for virtual machine users.

For example, I could set vSphere tags at a vSphere cluster level if I had a development cluster, or I could set vSphere tags on a subset of virtual machines using a tag such as “KryptonSOAR Development” to only provide access to development virtual machines.
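If you manage tags with PowerCLI, a tag like the one above can be created and assigned in a few lines. A minimal sketch, assuming the VMware.PowerCLI module and an existing Connect-VIServer session; the category name and VM name filter are placeholders:

# Create a tag category and the tag from the example above, then assign it to development VMs
$cat = New-TagCategory -Name 'BackupSelfService' -Cardinality Single -EntityType VirtualMachine
$tag = New-Tag -Name 'KryptonSOAR Development' -Category $cat
Get-VM -Name 'dev-*' | ForEach-Object { New-TagAssignment -Tag $tag -Entity $_ }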

After setting the Delegation Mode, the user account can be edited to select the vSphere tag, vCenter server role, or VM privilege. From the Edit screen, the repository and quota can also be changed at any time if required.

Using RBAC for VMware vSphere

After this very simple configuration, vSphere users simply need to log into the Self-Service Backup Portal to begin protecting and recovering their virtual machines. The URL can be shared across the entire organization: https://<EnterpriseManagerServer>:9443/backup, thus giving everyone a very convenient way of managing their workloads. Job creation and viewing in the Self-Service Backup Portal is extremely user friendly, even for those who have never backed up a virtual machine before! When creating a new backup job, users will only see the virtual machines they have access to, which makes the solution more secure and less confusing.

There is even a helpful dashboard, so users can monitor their backup jobs and the amount of backup storage they are consuming.

Enabling vSphere users to back up and restore virtual machines empowers them in new ways, especially when it comes to DevOps and rapid development cycles. Best of all, Veeam’s self-service implementation leverages the VMware vSphere permissions framework organizations already have in place, reducing operational complexity for everyone involved.

When it comes to VM recovery, there are also many self-service options available. Users can independently navigate to the “VMs” tab to perform full VM restores. Again, the process is very easy: the user decides whether to preserve the original VM (if Veeam detects it) or overwrite its data, selects the desired restore point, and specifies whether the VM should be powered on after the restore. Three simple actions and the data is on its way.

In addition to that, the portal makes file- and application-level recovery very convenient, too. There are quite a few scenarios available, and what’s really great is that users can navigate the file system tree via the file explorer. They can also use a search engine with advanced filters for both indexed and non-indexed guest OS file systems. Under the hood, Veeam decides exactly how the operation should be handled, but the user won’t even need to know about it. There is no chance the sought-for document will slip through here. The cherry on top is that Veeam provides recovery of application-aware SQL and Oracle backups, making your DBAs happy without giving them too many rights in the virtual environment.


This article was provided by our service partner: Veeam