With security and compliance on the minds of IT staff everywhere, vSphere certificate management is a huge topic. Decisions made can seriously affect the effort it takes to support a vSphere deployment, and often create vigorous discussions between CISO and information security staff, virtualization admins, and enterprise PKI/certificate authority admins. Here are ten things that organizations should consider when choosing a path forward.
1. Certificates are about encryption and trust
Certificates are based on public key cryptography, a technique developed by mathematicians in the 1970s, both in the USA and Britain. These techniques allow someone to create two mathematical “keys,” public and private. You can share the public key with another person, who can then use it to encrypt a message that can only be read by the person with the private key.
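To make the idea concrete, here is a toy sketch of the RSA scheme using deliberately tiny numbers (real keys are thousands of bits long, and production code should always use a vetted cryptography library, never hand-rolled math):

```python
# Toy RSA key generation with tiny primes -- for illustration only.
p, q = 61, 53                # two secret primes
n = p * q                    # the modulus, shared in the public key
phi = (p - 1) * (q - 1)      # Euler's totient of n
e = 17                       # public exponent, coprime with phi
d = pow(e, -1, phi)          # private exponent: modular inverse of e mod phi

# Anyone holding the public key (n, e) can encrypt...
message = 42                 # a message encoded as a number smaller than n
ciphertext = pow(message, e, n)

# ...but only the holder of the private key (n, d) can decrypt.
decrypted = pow(ciphertext, d, n)
assert decrypted == message
```

The asymmetry is the whole point: publishing (n, e) lets anyone send you secrets, while recovering d from the public key is computationally infeasible at real key sizes.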
When we think about certificates we often think of the little padlock icon in our browser, next to the URL. Green and locked means safe, and red with an ‘X’ and a big “your connection is not private” warning means we’re not safe, right? Unfortunately, it’s a lot more complicated than that. A lot of things need to be true for that icon to turn green.
When we’re using HTTPS the communications between our web browser and a server are sent across a connection protected with Transport Layer Security (TLS). TLS is the successor to Secure Sockets Layer, or SSL, but we often refer to them interchangeably. TLS has four versions now:
Version 1.0 has vulnerabilities, is insecure, and shouldn’t be used anymore.
Version 1.1 doesn’t have the vulnerabilities of 1.0, but it relies on the MD5 and SHA-1 algorithms, both of which are insecure.
Version 1.2 adds AES cryptographic ciphers that are faster, removes some insecure ciphers, and switches to SHA-256. It is the current standard.
Version 1.3 removes weak ciphers and adds features that increase the speed of connections. It is the upcoming standard.
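On the client side, refusing the insecure versions is usually a one-line configuration. As a sketch using Python’s standard ssl module (assuming Python 3.7 or later):

```python
import ssl

# Build a client context with secure defaults: certificate verification
# and hostname checking are both enabled.
context = ssl.create_default_context()

# Refuse anything older than TLS 1.2; a handshake with a server that
# only offers TLS 1.0 or 1.1 will now fail outright.
context.minimum_version = ssl.TLSVersion.TLSv1_2
```

A socket wrapped with this context (`context.wrap_socket(...)`) inherits the restriction automatically.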
Using TLS means that your connection is encrypted, even if the certificates are self-signed and generate warnings. Signing a certificate means that someone vouches for that certificate, in much the same way as a trusted friend would introduce someone new to you. A self-signed certificate simply means that it’s vouching for itself, not unlike a random person on the street approaching you and telling you that they are trustworthy. Are they? Maybe, but maybe not. You don’t know without additional information.
To get the green lock icon you need to trust the certificate by trusting who signed it. This is where a Certificate Authority (CA) comes in. Certificate Authorities are usually specially selected and subject to rigorous security protocols, because they’re trusted implicitly by web browsers. Having a CA sign a certificate means you inherit the trust from the CA. The browser lock turns green and everything seems secure.
Having a third-party CA sign certificates can be expensive and time-consuming, especially if you need a lot of them (and nowadays you do). As a result, many enterprises create their own CAs, often using the Microsoft Active Directory Certificate Services, and teach their web browsers and computers to trust certificates signed by that CA by importing the “root CA certificates” into the operating systems.
2. vSphere uses certificates extensively
All communications inside vSphere are protected with TLS. These are mainly:
ESXi certificates, issued to the management interfaces on all the hosts.
“Machine” SSL certificates used to protect the human-facing components, like the web-based vSphere Client and the SSO login pages on the Platform Service Controllers (PSCs).
“Solution” user certificates used to protect the communications of other products, like vRealize Operations Manager, vSphere Replication, and so on.
The vSphere documentation has a full list. The important point here is that in a fully-deployed cluster the number of certificates can easily reach into the hundreds.
3. vSphere has a built-in certificate authority
Managing hundreds of certificates can be quite a daunting task, so VMware created the VMware Certificate Authority (VMCA). It is a supported and trusted component of vSphere that runs on a PSC or on the vCenter VCSA in embedded mode. Its job is to automate the management of certificates that are used inside a vSphere deployment. For example, when a new host is attached to vCenter it asks you to verify the thumbprint of the host ESXi certificate, and once you confirm it’s correct the VMCA will automatically replace the certificates with ones issued by the VMCA itself. A similar thing happens when additional software, like vRealize Operations Manager or VMware AppDefense is installed.
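The thumbprint you are asked to verify is simply a hash of the certificate’s DER encoding. As an illustrative sketch (the input bytes below are a placeholder, not a real certificate):

```python
import hashlib

def thumbprint(der_bytes):
    """SHA-1 fingerprint in the colon-separated form vSphere displays."""
    digest = hashlib.sha1(der_bytes).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# Stand-in for real DER bytes, e.g. the output of `openssl x509 -outform der`.
fp = thumbprint(b"placeholder certificate bytes")
print(fp)  # 20 colon-separated hex pairs
```

Comparing that value against what the host’s console reports is what protects you from silently attaching to an impostor.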
The VMCA is part of the vCenter infrastructure and is trusted in the same way vCenter is. It’s patched when you patch your PSCs and VCSAs. It is sometimes criticized as not being a full-fledged CA but it is just-enough-CA, purpose-built to serve a vSphere deployment securely, safely, and in an automated way to make it easy to be secure.
4. There are four options for managing vSphere certificates

First, you can just use a self-signed CA certificate. The VMCA is fully-functional once vCenter is installed and automatically creates root certificates to use for signing ESXi, machine, and solution certificates. You can download the root certificates from the main vCenter web page and import them into your operating systems to establish trust and turn the browser lock icon green for both vCenter and ESXi. This is the easiest solution but it requires you to accept a self-signed CA root certificate. Remember, though, that we trust vCenter, so we trust the VMCA.
Second, you can make the VMCA an intermediate, or subordinate, CA. We do not recommend this (see below).
Third, you can disable the VMCA and use custom certificates for everything. To do this you can ask the certificate-manager tool to generate Certificate Signing Requests (CSRs) for everything. You take those to a third-party CA, have them signed, and then install them all manually. This is time-consuming and error-prone.
Fourth, you can use “hybrid” mode to replace the machine certificates (the human-facing certificates for vCenter) with custom certificates, and let the VMCA manage everything else with its self-signed CA root certificates. All users of vCenter would then see valid, trusted certificates. If the virtualization infrastructure admin team desires, they can import the CA root certificates to just their workstations, giving them green lock icons for ESXi, too, as well as warnings if there is an untrusted certificate. This is the recommended solution for nearly all customers because it balances the desire for vCenter security with the realities of automation and management.
5. Enterprise CAs are self-signed, too
“But wait,” you might be thinking, “we are trying to get rid of self-signed certificates, and you’re advocating their use.” True, but think about it this way: enterprise CAs are self-signed, too, and you have decided to trust them. Now you simply have two CAs, and while that might seem like a problem it really means that a separation exists between the operators of the enterprise CA and the virtualization admin team, for security, organizational politics, staff workload management, and even troubleshooting. Because we trust vCenter, as the core of our virtualization management, we also implicitly trust the VMCA.
6. Don’t create an intermediate CA
You can create an intermediate CA, also known as a subordinate CA, by issuing the VMCA a root CA certificate capable of signing certificates on behalf of the enterprise CA and using the Certificate Manager to import it. While this has applications, it is generally regarded as unsafe because anybody with access to that CA root key pair can now issue certificates as the enterprise CA. We recommend maintaining the security & trust separation between the enterprise CA and the VMCA and not using the intermediate CA functionality.
7. You can change the information on the self-signed CA root certificate
Using the Certificate Manager utility you can generate new VMCA root CA certificates with your own organizational information in them, and the tool will automate the reissue and replacement of all the certificates. This is a popular option with the Hybrid mode, as it makes the self-signed certificates customized and easy to identify. You can also change the expiration dates if you dislike the defaults.
8. Test, test, test!
The only way to truly be comfortable with these types of changes is to test them first. The best way to test is with a nested vSphere environment, where you install a test VCSA as well as ESXi inside a VM. This is an incredible way to test vSphere, especially if you shut it down and take a snapshot of it. Then, no matter what you do, you can restore the test environment to a known good state. See the links at the end for more information on nested ESXi.
Another interesting option is using the VMware Hands-on Labs to experiment with this. Not only are the labs a great way to learn about VMware products year-round, they’re also great for trying unscripted things out in a low-risk way. Try the new vSphere 6.7 Lightning Lab!
9. Make backups
When the time comes to do this for real, make sure you have a good file-based backup of your vCenter and PSCs, taken through the VAMI interface. Additionally, the Certificate Manager utility backs up the old certificates so you can restore them if things go wrong (it keeps only one set, though, so think that through). If things do not go as planned or tested, know that these operations are fully supported by VMware Global Support Services, who can walk you through resolving any problem you might encounter.
10. Know why you’re doing this
In the end the choice of how you manage vSphere certificates depends on what your goals are.
Do you want that green lock icon?
Does everybody need the green lock icon for ESXi, or just the virtualization admin team?
Do you want to get rid of self-signed certificates, or are you more interested in establishing trust?
Why do you trust vCenter as the core of your infrastructure but not a subcomponent of vCenter?
What is the difference in trust between the enterprise self-signed CA root and the VMCA self-signed CA root?
Is this about compliance, and does the compliance framework truly require custom CA certificates?
What is the cost, in staff time and opportunity cost, of ignoring the automated certificate solution in favor of manual replacements?
Does the solution decrease or increase risk, and why?
Whatever you decide know that thousands of organizations across the world have asked the same questions, and out of the discussions have come good understandings of certificates & trust as well as better relations between security and virtualization admin teams.
This article was provided by our service partner: VMware
Wouldn’t it be great to empower VMware vSphere users to take control of their backups and restores with a self-service portal? The good news is you can as of Veeam Backup & Replication 9.5 Update 4. This feature is great because it eliminates operational overhead and allows users to get exactly what they want when they want it. It is a perfect augmentation for any development team taking advantage of VMware vSphere virtual machines.
Introducing vSphere role-based access control (RBAC) for self-service
vSphere RBAC allows backup administrators to provide granular access to vSphere users using the vSphere permissions already in place. If a user does not have permissions to virtual machines in vCenter, they will not be able to access them via the Self-Service Backup Portal.
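The effect of that check can be modeled in a few lines of Python (an illustrative sketch with made-up names, not Veeam’s actual implementation):

```python
# Hypothetical vCenter permission map: user -> VMs they may manage.
VCENTER_PERMISSIONS = {
    "dev-alice": {"dev-web-01", "dev-db-01"},
    "ops-bob": {"prod-web-01", "prod-db-01"},
}

def visible_vms(user, inventory):
    """Return only the VMs the Self-Service Backup Portal should expose."""
    allowed = VCENTER_PERMISSIONS.get(user, set())
    return [vm for vm in inventory if vm in allowed]

inventory = ["dev-web-01", "dev-db-01", "prod-web-01", "prod-db-01"]
assert visible_vms("dev-alice", inventory) == ["dev-web-01", "dev-db-01"]
assert visible_vms("no-such-user", inventory) == []   # no permissions, no VMs
```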
Additionally, to make things even simpler for vSphere users, they can create backup jobs for their VMs based on pre-created job templates. They will not have to deal with advanced settings they are not familiar with (this is a really big deal, by the way). vSphere users can then monitor and control the backup jobs they have created using the Enterprise Manager UI, and restore their backups as needed.
Setting up vSphere RBAC for self-service
Setting up vSphere RBAC for self-service could not be easier. In the Enterprise Manager configuration screen, a Veeam administrator simply has to navigate to “Configuration – Self-service,” then add the vSphere user’s account, specify a backup repository, set a quota, and select the delegation method. These permissions can also be applied at the group level for even easier administration.
Besides VMware vCenter Roles, vSphere privileges or vSphere tags can be used as the delegation method. vSphere tags are one of my favorite methods, since tags can be applied to grant either a very broad or a very granular set of permissions. The ability to use vSphere tags is especially helpful for new VMware vSphere deployments, since it provides quick, easy, and secure access for virtual machine users.
For example, I could set vSphere tags at a vSphere cluster level if I had a development cluster, or I could set vSphere tags on a subset of virtual machines using a tag such as “KryptonSOAR Development” to only provide access to development virtual machines.
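That tag-driven delegation can be sketched the same way (again with hypothetical names; “KryptonSOAR Development” is just the example tag from above):

```python
# Hypothetical tag assignments: VM -> set of vSphere tags.
VM_TAGS = {
    "dev-web-01": {"KryptonSOAR Development"},
    "dev-db-01": {"KryptonSOAR Development"},
    "prod-web-01": {"Production"},
}

def vms_with_tag(tag):
    """VMs a user delegated by this tag would see in the portal."""
    return sorted(vm for vm, tags in VM_TAGS.items() if tag in tags)

assert vms_with_tag("KryptonSOAR Development") == ["dev-db-01", "dev-web-01"]
assert vms_with_tag("Production") == ["prod-web-01"]
```

Retagging a VM in vCenter is then all it takes to add it to, or remove it from, a user’s self-service scope.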
After setting the Delegation Mode, the user account can be edited to select the vSphere tag, vCenter server role, or VM privilege. From the Edit screen, the repository and quota can also be changed at any time if required.
Using RBAC for VMware vSphere
After this very simple configuration, vSphere users simply need to log into the Self-Service Backup Portal to begin protecting and recovering their virtual machines. The URL can be shared across the entire organization: https://<EnterpriseManagerServer>:9443/backup, thus giving everyone a very convenient way of managing their workloads. Job creation and viewing in the Self-Service Backup Portal is extremely user friendly, even for those who have never backed up a virtual machine before! When creating a new backup job, users will only see the virtual machines they have access to, which makes the solution more secure and less confusing.
There is even a helpful dashboard, so users can monitor their backup jobs and the amount of backup storage they are consuming.
Enabling vSphere users to back up and restore virtual machines empowers them in new ways, especially when it comes to DevOps and rapid development cycles. Best of all, Veeam’s self-service implementation leverages the VMware vSphere permissions framework organizations already have in place, reducing operational complexity for everyone involved.
When it comes to VM recovery, there are also many self-service options available. Users can independently navigate to the “VMs” tab to perform full VM restores. Again, the process is very easy: the user decides whether to preserve the original VM (if Veeam detects it) or overwrite its data, selects the desired restore point, and specifies whether the VM should be powered on after the restore. Three simple actions and the data is on its way.
In addition, the portal makes file- and application-level recovery very convenient too. There are quite a few scenarios available, and users can navigate the file system tree via the file explorer or use a search engine with advanced filters for both indexed and non-indexed guest OS file systems. Under the hood, Veeam decides how exactly the operation should be handled, without the user ever needing to know, so there is no chance the sought-after document slips through. The cherry on top is that Veeam provides recovery of application-aware SQL and Oracle backups, making your DBAs happy without giving them too many rights to the virtual environments.
This article was provided by our service partner: Veeam
Managed services are becoming an increasingly integral part of the business IT ecosystem. With technology advancing at a rapid pace, many companies find it cheaper and more effective to outsource some or all of their IT processes and functions to an expert provider, known as a managed service provider (MSP).
Unlike traditional on-demand IT outsourcing, MSPs proactively support a company’s IT needs. And with the IT demands of businesses becoming ever more complex, reliance on MSPs is likely to increase exponentially over the next few years.
What Is a Managed Service Provider?
An MSP manages a company’s IT infrastructure on a subscription-based model. MSPs offer continual support that can include the setup, installation, and configuration of a company’s IT assets.
Managed services can supplement a company’s internal IT department and provide services that may not be available in-house. And since the MSP is continuously supporting the company’s IT infrastructure and systems, rather than simply stepping in from time to time to put out a fire, these services can provide a level of peace of mind that other models just can’t match.
What’s the Difference Between Managed Services and the Break/Fix Model?
Unlike on-demand outsourced IT services, managed services play an ongoing and harmonious role in the running of an organization.
Due to the rapidly changing nature of the digital landscape, it’s no longer sustainable to fix problems after the damage is done. Yet the break/fix model is still a common way of dealing with IT-related problems. It’s like waiting to repair a minor leak until after the pipe has burst.
On-demand providers are usually brought in to perform a specific service (like fixing a broken server), and they bill the customer for the time and materials it takes to provide that service. MSPs, on the other hand, charge a recurring fee to provide an ongoing service. This service is defined in the service-level agreement (SLA), a contract drawn up between the MSP and the customer that defines both the type and standards of services the MSP will be expected to provide. This monthly recurring revenue (MRR) can provide a lucrative and reliable revenue stream.
What Services Can an MSP Provide?
MSPs provide systems management solutions, centrally managing a company’s IT assets. This encompasses everything from software support and maintenance to cloud computing and data storage. These solutions can be especially valuable for small- and medium-sized businesses (SMBs) that may not have robust internal IT departments, especially when it comes to hard-to-find skills.
Network Monitoring and Maintenance
From slow loading times to outages, inefficient and faulty systems can cost companies a fortune in lost productivity. MSPs reduce the likelihood of such delays by keeping an eye on the network for slow or failing elements. By using a remote monitoring and management (RMM) tool, the MSP will automatically be notified the moment an issue arises, allowing them to identify and fix the problem as quickly as possible. That means shorter downtime, so the customer’s tech—and the business needs it supports—can get up and running again in no time.
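The core of that monitoring loop is easy to picture. Here is a minimal sketch of threshold-based alerting (the metric names and limits are invented, and a real RMM tool does far more):

```python
# Illustrative per-metric alert thresholds.
THRESHOLDS = {"cpu_percent": 90, "disk_used_percent": 85}

def evaluate(metrics):
    """Return an alert message for every metric over its threshold."""
    alerts = []
    for name, limit in THRESHOLDS.items():
        value = metrics.get(name, 0)
        if value > limit:
            alerts.append(f"{name}={value} exceeds threshold {limit}")
    return alerts

# An RMM agent would poll fresh metrics on an interval and raise a
# ticket in the MSP's system whenever evaluate() returns anything.
assert evaluate({"cpu_percent": 95, "disk_used_percent": 40}) == [
    "cpu_percent=95 exceeds threshold 90"
]
assert evaluate({"cpu_percent": 20, "disk_used_percent": 30}) == []
```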
Software Support and Maintenance
MSPs provide software support and maintenance to ensure the smooth running of all business applications that a customer needs on a daily basis. This includes ensuring that the programs used to maintain the network are fully functional. Overall, the goal is to provide an uninterrupted experience so that work can carry on as normal.
Data Backup and Recovery
Data loss can be catastrophic, so companies need to have a system in place to back it up and recover it, should the worst happen. MSPs can handle the backup process, protecting companies against both accidental deletion and file corruption, or more malicious intent (like cyberattacks). They can also support a company’s overall disaster recovery plan, ensuring the business can always recover its data in the event of an emergency.
MSPs can also help their clients optimally store their data. While hard data storage was once standard, new forms of remote data storage are growing in popularity, including cloud computing. MSPs can enable seamless data migration if the client decides to switch storage options.
Cloud computing encompasses more than just remote data storage options. Various IT applications and resources can be accessed via online cloud service platforms, with providers charging a pay-as-you-go fee for access. Whether the client relies on a public, private, or hybrid cloud platform, MSPs can help them navigate the cloud successfully, streamlining their workflows, storing data successfully, and more.
Challenges Facing MSPs
While there are numerous benefits to the managed services model, including the recurring revenue and the ability to build long-lasting relationships with clients, this model isn’t without its challenges.
Shifts in Sales and Marketing
Until recently, many MSPs have grown organically through referrals and word of mouth. But increasingly, companies are seeing the value of the ‘master MSP’ model, which offers valuable infrastructure to other MSPs in areas where their own expertise may be lacking. As a result, we see a trend toward inorganic growth.
In this market, MSPs can stand out from the crowd by investing their efforts in product management. Prioritizing the needs of the customer is a simple way to create value around your services. This goes beyond the basic standards outlined in the service level agreement—it’s about showing you go above and beyond.
Keeping Existing Customers
With new differentiators emerging, MSPs have to adjust their approach to keep customers happy.
One way they can set themselves apart is by having business conversations very early on in the relationship. By gaining a clear understanding of the outcomes the client wants to achieve and working with them to come to an agreement on expectations, MSPs can establish themselves as a partner rather than simply a provider. This also lets the MSP adjust its approach to match the client’s needs—like driving for profit rather than acting as a cost center.
Best-in-class MSPs also rarely find themselves arguing with customers over whether something is covered. That’s because they’re fully aligned on what the MSP is responsible for. Whatever the SLA covers, it’s the MSP’s job to ensure their client understands. This requires regular conversations to confirm everyone is on the same page and satisfied. Documenting these conversations also allows MSPs to streamline any disagreements by showing what has been discussed and agreed upon. The goal is to become a trusted advisor that they turn to for guidance.
A next-level approach to proactivity is also a plus. This includes setting up alerts to rapidly identify issues and putting new measures in place to ensure mistakes don’t repeat themselves.
Transitioning toward a more risk-based approach, bolstered by a security-first mindset, will go a long way, opening doors for both more recurring and non-recurring revenue streams as clients seek out your consultation. The best MSPs are experts at assessing their customers’ environment and developing a tailored plan that covers governance, compliance, and ongoing risk management. What’s more, they adjust their approach regularly to reflect the ever-changing security needs of their clients—offering more opportunities to showcase their value and up their revenue stream.
The Impact of Cloud Computing
While MSP revenue is rising, profit margins are actually shrinking. Part of the problem is that MSPs are expanding their portfolios of services while still relying on their former pricing structures. But many MSPs make the problem worse by choosing the wrong cloud service vendor to partner with, which can significantly erode an MSP’s already-shrinking profit margins.
Some cloud service vendors are simply not priced to support an MSP. And with the pace at which cloud technology is evolving, a process that was cutting-edge when an MSP implemented it could become inefficient within a matter of weeks. It’s vital that MSPs be open to change if a vendor becomes unsustainable, lest their own services become unsustainable as a result.
You should also be ready to address any cloud-related questions and concerns that clients raise. Cloud technology is still relatively new, and it can be confusing, so overcoming any uncertainties will play a key role in an MSP’s ability to act as a valuable advisor to its clients.
How MSPs Use Software
Just as they bring value to their customers by streamlining workflows and protecting networks, MSPs need internal frameworks that increase efficiency.
Professional services automation (PSA) tools allow MSPs to streamline and automate repetitive administrative tasks. This saves time and cuts costs, all while enabling greater scalability.
MSPs can also utilize remote monitoring and management (RMM) tools. These automate the patching process and allow you to reduce time spent on resolving tickets, essentially doing more with less. Not only does this enable a more proactive approach, but it puts time back into the support team’s day to focus on other things.
Needless to say, MSPs should be easily accessible to their clients via technology. Remote desktop support makes that possible. With remote control over a client’s systems, MSPs can rapidly solve issues from wherever they are—without interfering with the end user’s access. This reduces customer downtime, allowing repairs and IT support to happen quietly in the background.
What the Future Holds for MSPs
The role of MSPs is changing. Keeping an eye on these emerging trends can help you anticipate shifting client expectations—and stay ahead of the curve.
Arguably the largest area of opportunity for MSPs is cybersecurity—and that service is only going to grow more valuable. Even as awareness increases and data privacy regulations tighten, the number and complexity of cyberattacks continue to rise. Between 2017 and 2018, the annual cost of combating cybercrime rose by 12%—from $11.7 million to a record high of $13 million—so establishing yourself as a cybersecurity expert now will put you in good stead for the future.
The Internet of Things (IoT) is also going to have a major impact on MSPs. Keeping up with the sheer volume of devices being used on a day-to-day basis requires a dynamic approach to systems management. This includes being proactive about establishing best practices and security guidelines around new technology, such as the use of voice assistants.
Business intelligence offerings are also likely to grow in demand. With the use of IT in business at an all-time high, the amount of data being generated is enormous. But data is only numbers without someone to effectively consolidate and analyze it to extract actionable insights. Providing easy access to reports and KPIs that clearly demonstrate areas for improvement will allow MSPs to not only stay relevant in this data-driven market but become leaders in their field.
It’s a familiar story in tech: new technologies and shifting preferences raise new security challenges. One of the most pressing challenges today involves monitoring and securing all of the applications and data currently undergoing a mass migration to public and private cloud platforms.
Malicious actors are motivated to compromise and control cloud-hosted resources because they can gain access to significant computing power through this attack vector. These resources can then be exploited for a number of criminal money-making schemes, including cryptomining, DDoS extortion, ransomware and phishing campaigns, spam relay, and for issuing botnet command-and-control instructions. For these reasons—and because so much critical and sensitive data is migrating to cloud platforms—it’s essential that talented and well-resourced security teams focus their efforts on cloud security.
The cybersecurity risks associated with cloud infrastructure generally mirror the risks that have been facing businesses online for years: malware, phishing, etc. A common misconception is that compromised cloud services have a less severe impact than more traditional, on-premises compromises. That misunderstanding leads some administrators and operations teams to cut corners when it comes to the security of their cloud infrastructure. In other cases, there is a naïve belief that cloud hosting providers will provide the necessary security for their cloud-hosted services.
Although many of the leading cloud service providers are beginning to build more comprehensive and advanced security offerings into their platforms (often as extra-cost options), cloud-hosted services still require the same level of risk management, ongoing monitoring, upgrades, backups, and maintenance as traditional infrastructure. For example, in a cloud environment, egress filtering is often neglected. But, when egress filtering is invested in, it can foil a number of attacks on its own, particularly when combined with a proven web classification and reputation service. The same is true of management access controls, two-factor authentication, patch management, backups, and SOC monitoring. Web application firewalls, backed by commercial-grade IP reputation services, are another often overlooked layer of protection for cloud services.
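The idea behind egress filtering is simple to sketch: instead of letting any outbound connection through, only explicitly approved destinations are permitted. A conceptual model in Python (the hostnames and ports here are invented for illustration):

```python
# Conceptual egress allowlist: destination (host, port) pairs that
# traffic is permitted to leave the environment for.
EGRESS_ALLOWLIST = {
    ("updates.example.com", 443),
    ("ntp.example.com", 123),
}

def egress_permitted(host, port):
    """Default-deny: anything not on the allowlist is blocked."""
    return (host, port) in EGRESS_ALLOWLIST

assert egress_permitted("updates.example.com", 443)
# A compromised host calling out to an attacker's server is blocked,
# cutting off cryptominers, spam relays, and botnet check-ins.
assert not egress_permitted("evil-c2.example.net", 8443)
```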
Many midsize and large enterprises are starting to look to the cloud for new wide-area network (WAN) options. Again, here lies a great opportunity to enhance the security of your WAN, whilst also achieving the scalability, flexibility, and cost-saving outcomes that are often the primary goals of such projects. When selecting these types of solutions, it’s important to look at the integrated security options offered by vendors.
Haste makes waste
Another danger of the cloud is the ease and speed of deployment. This can lead to rapidly prototyped solutions being brought into service without adequate oversight from security teams. It can also lead to complacency, as the knowledge that a compromised host can be replaced in seconds may lead some to invest less in upfront protection. But it’s critical that all infrastructure components are properly protected and maintained because attacks are now so highly automated that significant damage can be done in a very short period of time. This applies both to the target of the attack itself and in the form of collateral damage, as the compromised servers are used to stage further attacks.
Finally, the utilitarian value of the cloud also leads to its higher risk exposure: users are focused on particular outcomes, such as storing and processing large volumes of data at high speed. That solutions-based focus may not accommodate a comprehensive end-to-end security strategy. The dynamic pressures of business must be supported by newer and more dynamic approaches to security that ensure the speed of application deployment can be matched by automated SecOps deployments and engagements.
Time for action
If you haven’t recently had a review of how you are securing your resources in the cloud, perhaps now is a good time. Consider what’s allowed in and out of all your infrastructure and how you retake control. Ensure that the solutions you are considering have integrated, actionable threat intelligence for another layer of defense in this dynamic threat environment.
This article was provided by our service partner: webroot.com
It is no secret anymore: you need a backup for Microsoft Office 365! While Microsoft is responsible for the infrastructure and its availability, you are responsible for the data, because it is your data. To fully protect it, you need a backup. It is each company's responsibility to stay in control of its data and to meet compliance and legal requirements. In addition to having an extra copy of your data in case of accidental deletion, here are five more reasons why you need a backup.
With that quick overview out of the way, let’s dive straight into the new features.
Increased backup speeds from minutes to seconds
With the release of Veeam Backup for Microsoft Office 365 v2, Veeam added support for protecting SharePoint and OneDrive for Business data. Now, v3 improves the speed of SharePoint Online and OneDrive for Business incremental backups by integrating with the native Change API for Microsoft Office 365. This speeds up backup times by up to 30 times, which is a huge game changer! The feedback we have seen so far is amazing, and we are convinced you will see the difference as well.
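To see why reading a change feed beats rescanning every item, here is a minimal Python simulation. All class and function names are invented for illustration; this is not the Microsoft Change API or Veeam's code, just the idea behind it:

```python
class SharePointSiteSim:
    """Toy stand-in for a SharePoint site; not the real Office 365 API."""

    def __init__(self):
        self.items = {}       # item_id -> (content, change_stamp)
        self.change_log = []  # append-only log of (change_stamp, item_id)
        self.clock = 0        # monotonically increasing change stamp

    def write(self, item_id, content):
        self.clock += 1
        self.items[item_id] = (content, self.clock)
        self.change_log.append((self.clock, item_id))


def incremental_via_full_scan(site, since):
    """Old approach: touch every item and compare its change stamp."""
    touched = len(site.items)
    changed = [i for i, (_, stamp) in site.items.items() if stamp > since]
    return changed, touched


def incremental_via_change_api(site, since):
    """Change-feed approach: read only the tail of the change log."""
    entries = [i for stamp, i in site.change_log if stamp > since]
    changed = list(dict.fromkeys(entries))  # dedupe, keep order
    return changed, len(entries)
```

With 1,000 items and only two changed since the last run, the full scan inspects all 1,000 while the change-feed path touches just 2 log entries, which is where the large incremental speed-up comes from.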
Improved security with multi-factor authentication support
Multi-factor authentication adds an extra layer of security by requiring multiple verification methods for an Office 365 user account. Because multi-factor authentication is the baseline security policy for Azure Active Directory and Office 365, Veeam Backup for Microsoft Office 365 v3 adds support for it. This allows Veeam Backup for Microsoft Office 365 v3 to connect to Office 365 securely, leveraging a custom application in Azure Active Directory together with an MFA-enabled service account and its app password to create secure backups.
From a restore point of view, this will also allow you to perform secure restores to Office 365.
Veeam Backup for Microsoft Office 365 v3 still supports basic authentication; however, using multi-factor authentication is advised.
New data protection reports
By adding Office 365 data protection reports, Veeam Backup for Microsoft Office 365 allows you to identify unprotected Office 365 user mailboxes as well as manage license and storage usage. Three reports are available via the GUI (as well as PowerShell and the RESTful API).
The License Overview report gives insight into your license usage. It shows detailed information on the licenses used by each protected user within the organization. As a Service Provider, you will be able to identify the top five tenants by license usage and keep license consumption under control.
The Storage Consumption report shows how much storage is consumed by the repositories of the selected organization. It gives insight into the top-consuming repositories and helps you track the daily change rate and growth of your Office 365 backup data per repository.
The Mailbox Protection report shows information on all protected and unprotected mailboxes, helping you maintain visibility of all your business-critical Office 365 mailboxes. As a Service Provider, you will especially benefit from the flexibility of generating this report either for all tenant organizations in scope or for a selected tenant organization only.
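As a toy illustration of what a mailbox-protection report boils down to, here is a minimal sketch. The function and field names are made up for this example; the real report is generated by the product's GUI, PowerShell, or RESTful API:

```python
def mailbox_protection_report(mailboxes, protected_ids):
    """Split mailboxes into protected and unprotected lists.

    mailboxes:     list of {"id": ..., "name": ...} dicts (hypothetical shape)
    protected_ids: set of mailbox ids that have at least one backup
    """
    return {
        "protected": [m["name"] for m in mailboxes if m["id"] in protected_ids],
        "unprotected": [m["name"] for m in mailboxes if m["id"] not in protected_ids],
    }
```

Running this over an organization's mailbox list immediately surfaces any business-critical mailbox that has no backup, which is exactly the visibility the report is meant to provide.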
Simplified management for larger environments
Microsoft's Extensible Storage Engine has a file size limit of 64 TB per database. For larger environments, the workaround was to create multiple repositories manually. Starting with v3, this limitation and the manual workaround are eliminated! Veeam's storage repositories are intelligent enough to know when you are about to hit the file size limit, and they automatically scale out the repository, eliminating the file size issue. The extra databases are easy to identify by their numerical order, should you need them.
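The scale-out behavior can be sketched in a few lines of Python. The size limit, unit, and naming scheme below are illustrative stand-ins, not Veeam's actual on-disk format:

```python
class ScaleOutRepository:
    """Sketch: start a new numbered database when the active one would
    exceed the size limit (pretend the limit is 64 units, not 64 TB)."""

    def __init__(self, limit=64):
        self.limit = limit
        self.databases = [[]]  # each database is a list of stored item sizes

    def store(self, size):
        current = self.databases[-1]
        if sum(current) + size > self.limit:
            # Scale out: open the next database in numerical order.
            self.databases.append([])
            current = self.databases[-1]
        current.append(size)

    def database_names(self):
        # Extra databases are identified by their numerical order.
        return [f"repository_{n}" for n in range(1, len(self.databases) + 1)]
```

Storing three items of sizes 40, 40, and 30 against a limit of 64 spills into three numbered databases, mirroring how the repository transparently grows past the single-file limit.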
Flexible retention options
Before v3, the only available retention policy was based on item age: Veeam Backup for Microsoft Office 365 backed up and stored the Office 365 data (Exchange, OneDrive and SharePoint items) that was created or modified within the defined retention period.
Item-level retention works similarly to a classic document archive:
First run: we collect ALL items that are younger (the attribute used is the change date) than the chosen retention period (importantly, this means that not all items may be taken).
Following runs: we collect ALL items that have been created or modified (again, the attribute used is the change date) since the previous run.
Retention processing: happens at the chosen time interval and removes all items whose change date has become older than the chosen retention period.
This retention type is particularly useful when you want to make sure you don’t store content for longer than the required retention time, which can be important for legal reasons.
Starting with Veeam Backup for Microsoft Office 365 v3, you can also leverage a “snapshot-based” retention type option. Within the repository settings, v3 offers two options to choose from: Item-level retention (existing retention approach) and Snapshot-based retention (new).
Snapshot-based retention works similarly to the image-level backups that many Veeam customers are used to:
First run: we collect ALL items, no matter what the change date is. Thus, the first backup is an exact copy (snapshot) of an Exchange mailbox / OneDrive account / SharePoint site as it exists at that point in time.
Following runs: we collect ALL items that have been created or modified (the attribute used here is the change date) since the previous run, which means the backup again represents an exact copy (snapshot) of the mailbox/site/folder state at that point in time.
Retention processing: during clean-up, we remove all items belonging to snapshots of a mailbox/site/folder that are older than the retention period.
Retention is a global setting per repository. Also note that once you set your retention option, you will not be able to change it.
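The difference between the two retention types described above can be shown with a small simulation. Dates are plain integers (days) here, and the helpers mirror the stated rules only; this is not Veeam's implementation:

```python
RETENTION_DAYS = 30  # illustrative retention window

def item_level_first_run(items, today):
    """Item-level: only items changed within the retention window are taken."""
    return [i for i in items if today - i["changed"] <= RETENTION_DAYS]

def snapshot_first_run(items, today):
    """Snapshot-based: every item is taken, regardless of change date."""
    return list(items)

def item_level_cleanup(backup, today):
    """Item-level cleanup: drop items whose change date has aged out."""
    return [i for i in backup if today - i["changed"] <= RETENTION_DAYS]
```

Given one item changed 10 days ago and one changed 95 days ago, the item-level first run takes only the recent item, while the snapshot-based first run takes both, which is exactly why item-level retention suits "never keep longer than required" policies and snapshot-based retention suits point-in-time copies.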
As Microsoft has released new major versions of both Exchange and SharePoint, we have added support for Exchange 2019 and SharePoint 2019. We have also made a change to the interface and now support internet proxies. This was already possible in previous versions by editing the XML configuration; starting with Veeam Backup for Microsoft Office 365 v3, it is an option within the GUI. As an extra, you can even configure an internet proxy for any of your Veeam Backup for Microsoft Office 365 remote proxies. All of these new options are also available via PowerShell and the RESTful API for all the automation lovers out there.
On the point of license capabilities, we have added two new options as well:
Revoking an unneeded license is now available via PowerShell
Service Providers can gather license and repository information per tenant via PowerShell and the RESTful API and create custom reports
To keep a clean view on the Veeam Backup for Microsoft Office 365 console, Service Providers can now give organizations a custom name.
Based upon feature requests, starting with Veeam Backup for Microsoft Office 365 v3, it is possible to exclude or include specific OneDrive for Business folders per job. This feature is available via PowerShell or the RESTful API. Go to the What's New page for a full list of all the new capabilities in Veeam Backup for Microsoft Office 365.
This article was supplied by our service partner: veeam.com