Security risk

Why You Shouldn’t Share Security Risk

There are some things in life that would be unfathomable to share. Your toothbrush, for example. We need to adopt the same clear distinction with cybersecurity risk ownership as we do with our toothbrushes.

Sharing is usually considered a virtue. However, even if you live with other people, everyone in your household still has their own toothbrush. It’s very clear which toothbrush is yours and which belongs to your partner, spouse, or children.

At some point in our lives, we were taught that toothbrushes should not be shared, and we pass that knowledge down to our children and dependents and make sure they also know. The same type of education about not sharing cybersecurity risks needs to happen. By not defining risk ownership, you’re sharing it with your customers.

Why Risk Should Never Be Shared

There should be no such thing as shared risk. It is binary: either the customer owns it, or you own it. Setting the correct expectation of an MSP’s cybersecurity and risk responsibility is critical to maintaining a long-term business relationship.

When a breach occurs is not the time to be wondering which side is at fault. Notice I said ‘when,’ not ‘if.’ Nearly 70% of SMBs have already experienced a cyberattack, with 58% experiencing an attack within the past year—costing these companies an average of $400,000. The last thing you need is to be on the hook for a potentially business-crippling event. You need to limit your liability.

What Are Your Cybersecurity Risk Management Options?

1. Accept the Risk

When an organization accepts the risk, they have identified and logged the risk, but don’t take any action to remediate it. This is an appropriate action when the risk aligns with the organization’s risk tolerance, meaning they are willing to leave the risk unaddressed as a part of their normal business operations.

There is no set severity to the risk that an organization is willing to accept. Depending on the situation, organizations can accept risk that is low, moderate, or high.

Here are two examples:

An organization has data centers located in the northeastern United States and accepts the risk of earthquakes. It knows an earthquake is possible but decides not to spend money addressing the risk, given how infrequently earthquakes occur in that area.

On the other end of the risk spectrum, a federal agency might share classified information with first responders who don’t typically have access to that information to stop an impending attack.

Many factors go into an organization accepting risk, including the organization’s overall mission, business needs, and potential impact on individuals, other organizations, and the Nation.1

2. Transfer the Risk

Transferring risk means just that: an organization passes the identified risk on to another entity. This action is appropriate when the organization has both the desire and the means to transfer the risk. As an MSP, when you make a recommendation to a customer and they ask you to act on it, they’ve transferred the risk to you in exchange for payment for your products and services.

Transferring risk does not reduce the likelihood of an attack or incident occurring or the consequences associated with the risk.2

3. Mitigate the Risk

When mitigating risk, measures are put in place to address the risk. It’s appropriate when the risk cannot be accepted, avoided, or transferred. Mitigating risk depends on the risk management tier, the scope of the response, and the organization’s risk management strategy.

Organizations can approach risk mitigation in a variety of ways across three tiers:

  • Tier 1 can include common security controls
  • Tier 2 can introduce process re-engineering
  • Tier 3 can be a combination of new or enhanced management, operational, or technical safeguards

An organization could put this into practice by, for example, prohibiting the use or transport of mobile devices to certain parts of the world.3

4. Avoid the Risk (Not Recommended)

Risk avoidance is the opposite of risk acceptance because it’s an all-or-nothing kind of stance. For example, cutting down a tree limb hanging over your driveway, rather than waiting for it to fall, would be risk avoidance. You would be avoiding the risk of the tree limb falling on your car, your house, or on a passerby. Most insurance companies, in this example, would accept the risk and wait for the limb to fall, knowing that they can likely avoid incurring that cost. However, the point is that risk avoidance means taking steps so that the risk is completely addressed and cannot occur.

In business continuity and disaster recovery plans, risk avoidance is the action that avoids any exposure to the risk whatsoever. If you want to avoid data loss, you have a fully redundant data center in another geographical location that is completely capable of running your entire organization from that location. That would be complete avoidance of any local disaster such as an earthquake or hurricane.

While risk avoidance reduces the cost of downtime and recovery and may seem like a safer bet, it is usually the most expensive of all risk mitigation strategies. Not to mention it’s simply no longer feasible to rely on risk avoidance in today’s society with increasingly sophisticated cyberattacks.4

By using a risk assessment report to identify risk, you can establish a new baseline of the services you are and are not covering. This will put the responsibility onto your customers to either accept or refuse your recommendations to address the risk.
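The decision framework above can be sketched in code. This is a hypothetical illustration, not a real tool: the `Risk`, `Owner`, and `Treatment` names are made up, and the entries are invented examples. The point it demonstrates is the article’s central claim: every risk gets exactly one owner and exactly one treatment, with no “shared” state possible.

```python
# Hypothetical sketch of a minimal risk register. Every risk must have
# exactly one owner (MSP or customer -- never shared) and exactly one
# of the four treatments discussed above.
from dataclasses import dataclass
from enum import Enum

class Treatment(Enum):
    ACCEPT = "accept"
    TRANSFER = "transfer"
    MITIGATE = "mitigate"
    AVOID = "avoid"

class Owner(Enum):
    MSP = "msp"
    CUSTOMER = "customer"

@dataclass
class Risk:
    name: str
    severity: str          # "low" | "moderate" | "high"
    owner: Owner           # exactly one owner -- shared ownership cannot be expressed
    treatment: Treatment

register = [
    Risk("Earthquake at NE data center", "low", Owner.CUSTOMER, Treatment.ACCEPT),
    Risk("Unpatched mail server", "high", Owner.MSP, Treatment.MITIGATE),
]

# Because Owner is an enum with only two values, an unowned or
# jointly owned risk simply cannot be recorded.
assert all(r.owner in (Owner.MSP, Owner.CUSTOMER) for r in register)
```

Modeling ownership as a required field, rather than free text, is the design choice that enforces the “no shared risk” rule at data-entry time.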

Summary

There are many different options when it comes to dealing with risks to your business. The important thing is to know what risks you have, how you are going to manage those risks, and who owns those risks. Candid discussions with your customers, once you know and understand the risks, is the only true way for each of you to know who owns the risks and what risk management option is going to be put in place for those risks. Don’t be afraid to have these conversations. In the long run, it will lead to outcomes which will be best for both you and your customers.


This article was provided by our service partner : Connectwise


Healthcare backup vs record retention

Healthcare overspends on long term backup retention

There is a dramatic range of perspectives on how long hospitals should keep their backups: some keep theirs for 30 days, while others keep their backups forever. Many assume the long retention is due to regulatory requirements, but that is not actually the case. Retention times longer than needed have significant cost implications and lead to capital spending 50-70% higher than necessary. At a time when hospitals are concerned with optimization and cost reduction across the board, this is a topic that merits further exploration.

Based on research to date and a review of all relevant regulations, we find:

  • There is no additional value in backups older than 90 days.
  • Significant savings can be achieved through reduced backup retention of 60-90 days.
  • Longer backup retention times impose unnecessary capital costs by as much as 70% and hinder migration to more cost-effective architectures.
  • Email retention can be greatly shortened to reduce liability and cost through set policy.

Let’s explore these points in more detail.

What are the relevant regulations?

HIPAA mandates that Covered Entities and Business Associates have backup and recovery procedures for Patient Health Information (PHI) to avoid loss of data. Nothing regarding duration is specified (45 CFR 164.306, 45 CFR 164.308). State regulations govern how long PHI must be retained, usually ranging from six to 25 years, sometimes longer.

The retention regulations refer to the PHI records themselves, not the backups thereof. This is an important distinction and a source of confusion and debate. In the absence of deeper understanding, hospitals often opt for long term backup retention, which has significant cost implications without commensurate value.

How do we translate applicable regulations into policy?

There are actually two policies at play: PHI retention and Backup retention. PHI retention should be the responsibility of data governance and/or application data owners. Backup retention is IT policy that governs the recoverability of systems and data.

I have yet to encounter a hospital that actively purges PHI when permitted by regulations. There’s good reason not to: older records still have value as part of analytics datasets but only if they are present in live systems. If PHI is never purged, records in backups from one year ago will also be present in backups from last night. So, what value exists in the backups from one year ago, or even six months ago?

Keeping backups long term increases the capital requirements, complexity of data protection systems, and limits hospitals’ abilities to transition to new data protection architectures that offer a lower TCO, all without mitigating additional risk or adding additional value.
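A back-of-the-envelope model makes the cost argument concrete. The numbers below are illustrative assumptions (a single 100 TB full backup, no deduplication or incremental chains), not figures from any hospital; real savings depend heavily on the backup architecture. The sketch compares a 90-day policy against a long-term policy that also keeps seven years of monthly fulls:

```python
# Illustrative storage-footprint comparison (assumed numbers; ignores
# deduplication, compression, and incremental backup chains).
FULL_TB = 100  # assumed size of one full backup, in TB

def footprint_90_day():
    # ~13 weekly fulls retained for roughly 90 days
    return 13 * FULL_TB

def footprint_long_term():
    # the same 13 weeklies plus 84 monthly fulls kept for 7 years
    return (13 + 84) * FULL_TB

short, long_ = footprint_90_day(), footprint_long_term()
reduction = 1 - short / long_
print(f"90-day: {short} TB  long-term: {long_} TB  reduction: {reduction:.0%}")
```

Even with crude assumptions, trimming retention to 90 days removes the large tail of aged restore points that, as argued above, add capital cost without mitigating additional risk.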

What is the right backup retention period for hospital systems?

Most agree that the right answer is 60-90 days. Thirty days may expose some risk from undesirable system changes that require going further back at the system (if not the data) level; examples given include changes that later caused a boot error. Beyond 90 days, it’s very difficult to identify scenarios where the data or systems would be valuable.

What about legacy applications?

Most hospitals have a list of legacy applications that contain older PHI that was not imported into the current primary EMR system or other replacement application. The applications exist purely for reference purposes, and they often have other challenges such as legacy operating systems and lack of support, which increases risk.

For PHI that only exists in legacy systems, we have only two choices: keep those aging apps in service or migrate those records to a more modern platform that replicates the interfaces and data structures. Hospitals that have pursued this path have been very successful reducing risk by decommissioning legacy applications, using solutions from vendors such as Harmony, MediQuant, CITI, and Legacy Data Access.

What about email?

Hospitals have a great deal of freedom to define their email policies. Most agree that PHI should not be in email and actively prevent it by policy and process. Without PHI in email, each hospital can define whatever email retention policy they wish.

Most hospitals do not restrict how long emails can be retained, though many do restrict the ultimate size of user mailboxes. There is a trend, however, often led by legal, to reduce email history. It is often phased in gradually: one year they cut off email history at ten years, then move to eight, then six, and so on.
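The phased cutoff described above can be sketched as a simple date filter. Everything here is hypothetical: the mailbox dates are invented, and a real deployment would enforce the cutoff through the mail platform’s retention policies rather than application code.

```python
# Hypothetical sketch: given a retention horizon in years, flag
# mailbox items older than the cutoff for removal.
from datetime import datetime, timedelta

def items_to_purge(item_dates, retention_years, now):
    cutoff = now - timedelta(days=365 * retention_years)
    return [d for d in item_dates if d < cutoff]

mailbox = [datetime(2008, 1, 5), datetime(2013, 6, 1), datetime(2019, 3, 9)]
as_of = datetime(2019, 12, 31)

# Phase the horizon down over successive years: 10 -> 8 -> 6.
for years in (10, 8, 6):
    doomed = items_to_purge(mailbox, years, as_of)
    print(f"{years}-year horizon: {len(doomed)} item(s) to purge")
```

Each step of the phase-down catches another band of old mail, which is what makes the gradual approach politically easier than a single deep cut.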

It takes a great deal of collaboration and unity among senior leaders to effect such changes, but the objectives align the interests of legal, finance, and IT. Legal reduces discoverable information; finance reduces cost and risk; and IT reduces the complexity and weight of infrastructure.

The shortest email history I have encountered is two years at a Detroit health system: once an item in a user mailbox reaches two years old, it is actively removed from the system by policy. They also only keep their backups for 30 days. They are the leanest healthcare data protection architecture I have yet encountered.

Closing thoughts

It is fascinating that hospitals serving the same customer needs, bound by broadly similar regulatory requirements, come to such different conclusions about backup retention. That should be a signal that there is real optimization potential, both with PHI and email.


This article was provided by our service partner : veeam.com

Understand the Language of Cybersecurity

When you get started working around cybersecurity, it can sound like people are speaking a foreign language. Like most of the IT industry, cybersecurity has a language of its own. We’ve all become familiar with basic security terms from protecting our own data and information, but the deeper you go down the rabbit hole, the more technical things get.

Let’s go over some commonly used terms you’ll hear so you can talk the talk when it comes to cybersecurity.

Antivirus / Anti-malware

A program that monitors a computer or network to detect or identify major types of malicious code and to prevent or contain malware incidents, sometimes by removing or neutralizing the malicious code.1

Chief Information Security Officer (CISO)

A senior-level executive who’s responsible for developing and implementing an information security program which includes procedures and policies designed to protect enterprise communications, systems, and assets from both internal and external threats. The CISO may also work alongside the Chief Information Officer (CIO) to procure cybersecurity products and services and to manage disaster recovery and business continuity plans.

The CISO may also be referred to as the chief security architect, the security manager, the corporate security officer, or the information security manager, depending on the company’s structure and existing titles. While the CISO is also responsible for the overall corporate security of the company, which includes its employees and facilities, he or she may simply be called the Chief Security Officer (CSO).2

Continuous Monitoring

A risk management approach to cybersecurity that maintains an accurate picture of an agency’s security risk posture, provides visibility into assets, and leverages use of automated data feeds to quantify risk, ensure effectiveness of security controls, and implement prioritized remedies.

Controls

Safeguards or countermeasures to avoid, detect, counteract, or minimize security risks to physical property, information, computer systems, or other assets.

Cybersecurity

The activity or process, ability or capability, or state whereby information and communications systems and the information contained therein are protected from and/or defended against damage, unauthorized use or modification, or exploitation.

Cybersecurity Framework

An IT security framework is a series of documented processes used to define policies and procedures around the implementation and ongoing management of information security controls. These frameworks are basically a blueprint for building an information security program to manage risk and reduce vulnerabilities. Information security pros can utilize these frameworks to define and prioritize the tasks required to build security into an organization.

Data Breach

The unauthorized movement or disclosure of sensitive information to a party, usually outside the organization, that is not authorized to have or see the information.5

Data Exfiltration

The unauthorized transfer of data from a computer, attached device, or network. Such a transfer may be manual and carried out by someone with physical access to a computer, or it may be automated and carried out through malicious programming over a network.

Data Loss Prevention

A set of procedures and mechanisms to stop sensitive data from leaving a security boundary.6

Data Protection/Insider Threat

Data protection places emphasis on data as an asset that has a value assigned. Think about intellectual property, trade secrets, personally identifiable information (PII), personal health information (PHI), credit card, or financial information as an example. This IS the last layer of defense. Activities include data classification, data loss prevention (DLP), data masking, or de-identification.

Endpoint Protection

Relates to all manners of protection regarding the operating systems, applications, connections, and behavior of an endpoint such as a laptop, desktop, mobile device, or server. This is one of the last layers of defense. Activities include antivirus, anti-malware, operating system/application hardening, configuration management, email/web filtering, access control, patching, and monitoring.

Exposure

The condition of being unprotected, thereby allowing access to information or access to capabilities that an attacker can use to enter a system or network.7

Firewall

A capability to limit network traffic between networks and/or information systems.

Extended Definition: A hardware/software device or a software program that limits network traffic according to a set of rules of what access is and is not allowed or authorized.8

Governance

An umbrella approach referring to a company’s posture towards governance, risk, and compliance. This includes the rules of the road and guidance that the company follows. These activities are foundational and provide meaning and direction to the following items: security policies and procedures, training and awareness, risk and vulnerability assessment, and penetration testing along with providing metrics as to where a company is on a risk and maturity scale as well as trends showing progress.

Incident

An occurrence that actually or potentially results in adverse consequences, adverse effects on or poses a threat to an information system or the information that the system processes, stores, or transmits and that may require a response action to mitigate the consequences.

Extended Definition: An occurrence that constitutes a violation or imminent threat of violation of security policies, security procedures, or acceptable use policies.9

Incident Response

Activities related to how an organization prepares, trains, and coordinates response to assumed or confirmed security incidents that have a material impact on the corporate business strategy, as well as impacts to employees or business partners. Incident response in action includes the following activities: monitoring, incident identification and triage, remediation, restore, and recovery activities (designed to restore the company to normal operations). In the SMB space, this may include Business Continuity and Disaster Recovery.

Log Collection

Log collection is the heart and soul of a SIEM. The more log sources that send logs to the SIEM, the more can be accomplished with the SIEM.10

Log Management

The National Institute for Standards and Technology (NIST) defines log management in Special Publication SP800-92 as: “the process for generating, transmitting, storing, analyzing, and disposing of computer security log data.”

Log management is defining what you need to log, how it’s logged, and how long to retain the information. This ultimately translates into requirements for hardware, software, and of course, policies.11

Malware

Software that compromises the operation of a system by performing an unauthorized function or process.12

Synonym(s): malicious code, malicious applet, malicious logic

Multi-Factor Authentication (MFA)

A security system that requires more than one method of authentication from independent categories of credentials to verify the user’s identity for a login or other transaction.13

The National Institute of Standards and Technology (NIST)

The National Institute of Standards and Technology (NIST) was founded in 1901 and is now part of the U.S. Department of Commerce. NIST is one of the nation’s oldest physical science laboratories. The organization’s mission is to promote U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life.

Phishing

A digital form of social engineering to deceive individuals into providing sensitive information.14

Risk Assessment

The product or process which collects information and assigns values to risks for the purpose of informing priorities, developing or comparing courses of action, and informing decision making.

Extended Definition: The appraisal of the risks facing an entity, asset, system, or network, organizational operations, individuals, geographic area, other organizations, or society, and includes determining the extent to which adverse circumstances or events could result in harmful consequences.15

Risk Management

The process of identifying, analyzing, assessing, and communicating risk and accepting, avoiding, transferring, or controlling it to an acceptable level considering associated costs and benefits of any actions taken.

Extended Definition: Includes 1) conducting a risk assessment; 2) implementing strategies to mitigate risks; 3) continuous monitoring of risk over time; and 4) documenting the overall risk management program.16

Security Information and Event Management (SIEM)

SIEM became the generalized term for managing information generated from security controls and infrastructure. It is essentially a management layer above your existing systems and security controls. SIEM connects and unifies information from disparate systems, allowing them to be analyzed and cross-referenced from a single interface.17

Security Operations Center (SOC)

A security operations center (SOC) is a facility that houses an information security team responsible for monitoring and analyzing an organization’s security posture on an ongoing basis. The SOC team’s goal is to detect, analyze, and respond to cybersecurity incidents using a combination of technology solutions and a strong set of processes.18

Single Sign-On (SSO)

Single sign-on (SSO) is a session and user authentication service that permits an end user to enter one set of login credentials (such as a name and password) and be able to access multiple applications.19

Weakness

A shortcoming or imperfection in software code, design, architecture, or deployment that, under proper conditions, could become a vulnerability or contribute to the introduction of vulnerabilities.20

Now that you have a better understanding of cybersecurity terms and phrases you’ll hear around the industry, share them with your customers so you’ll start speaking a common language.

References

1, 3, 5, 6, 7, 8, 9, 12, 14, 15, 16, 20 Explore Terms: A Glossary of Common Cybersecurity Terminology
Retrieved from https://niccs.us-cert.gov/about-niccs/glossary

2 Rouse, M (December 2016) CISO (Chief Information Security Officer) 
Retrieved from https://searchsecurity.techtarget.com/definition/CISO-chief-information-security-officer

4 Granneman, J (May 2019) Top 7 IT Security Frameworks and Standards Explained
Retrieved from https://searchsecurity.techtarget.com/tip/IT-security-frameworks-and-standards-Choosing-the-right-one

10 Constantine, C (December 2018) Standards and Best Practices for SIEM Logging 
Retrieved from https://www.alienvault.com/blogs/security-essentials/what-kind-of-logs-for-effective-siem-implementation

11 Torre, D (October 2010) What Is Log Management and How to Choose the Right Tools 
Retrieved from https://www.csoonline.com/article/2126060/network-security-what-is-log-management-and-how-to-choose-the-right-tools.html

13 Rouse, M (March 2015) Multifactor Authentication (MFA) 
Retrieved from https://searchsecurity.techtarget.com/definition/multifactor-authentication-MFA

17 Constantine, C (March 2014) SIEM and Log Management—Everything You Need to Know but Were Afraid to Ask, Part 1 
Retrieved from https://www.alienvault.com/blogs/security-essentials/everything-you-wanted-to-know-about-siem-and-log-management-but-were-afraid

18 Lord, N (July 2015) What Is a Security Operations Center (SOC)? 
Retrieved from https://digitalguardian.com/blog/what-security-operations-center-soc

19 Rouse, M (June 2019) Single Sign-On (SSO) 
Retrieved from https://searchsecurity.techtarget.com/definition/single-sign-on


How MSPs Can Reduce Their Security Risk

While technology improves our lives in so many ways, it certainly isn’t free from drawbacks. And one of the biggest drawbacks is the risk of cyberattacks—a risk that’s escalating every day.

To reduce the increasing risk of cyberattacks—to your customers and your MSP business—it’s essential to put protocols in place to strengthen your internal security (we often refer to this as ‘getting your house in order’) and protect your clients. The truth is, your customers automatically assume that security is integrated into the price of their contract. That means you need to educate them on the subject, or risk falling short of their (potentially unrealistic) expectations.

What’s more, this is a prime opportunity to offer additional services—and increase revenue.

“You don’t want to deliver security services and not have the client invest in those services,” explains George Mach, Founder and CEO of Apex IT Group. “It would impact your MSP in a negative way.”

In our Path to Success Security Spotlight, I sat down with George Mach to discuss how you can define, identify, and reduce your level of risk, and boost revenue as a result. Here are just a few of our tips.

Understand Your Risk

The first step to reducing risk and providing Security-as-a-Service is understanding the current state of your MSP’s security.

“If you don’t know your own gaps or have good security hygiene in your own MSP, it’s really hard to deliver world-class security services to your client,” Mach says.

As an MSP, you have access to a wealth of sensitive information about your clients, including their passwords, addresses, and names. As such, it’s crucial that your MSP is fully protected. Even the smallest data breach could cause your clients to lose trust in you—damaging your reputation and costing you their business.

Trust, Train & Protect Your House

To protect your MSP (and by extension, your clients), Mach recommends following three simple steps.

First, make sure that you only hire trustworthy people. Of course, it isn’t always easy to spot a wolf in sheep’s clothing, but there are a few measures you can take to safeguard your organization against harmful presences. During the hiring process, this could include conducting a background check and verifying a candidate’s education and employment history. You can also consider creating new onboarding policies and asking employees to sign agreements that go on file, holding them accountable to specific standards.

Secondly, it’s important to train everyone at your organization about how to detect potential scammers—including staff in non-technical positions. As part of this training, you may also want to conduct a security skills assessment and record that it has taken place. That way, should the worst happen and a client decides to sue following a security breach, you can prove the measures your company took to try and prevent it—helping protect your reputation.

“The goal is to be in a defensible position if something were to happen,” Mach says.

Thirdly, it’s essential to enforce technical, physical, and administrative controls at your organization. Firewalls and endpoint protection are a must. Investing in swipe cards or biometric scanners can also help you strengthen your protection by helping you identify every person who enters your building. And to reduce your legal risk, don’t overlook the importance of nondisclosure agreements (NDAs) and business associate agreements (BAAs).

Follow the Framework

Once you’ve increased security at your MSP, you can start thinking about how to offer Security-as-a-Service. Following the protocols outlined in the National Institute of Standards and Technology’s (NIST) Cybersecurity Framework is a good place to start. These protocols are: identify, protect, detect, respond, and recover.

By following these protocols, your company can turn secure protection into a competitive advantage. But that’s only possible if you communicate it properly to your clients.

Throughout conversations with your clients, it’s crucial to gain an understanding of their security priorities and the metrics they use to determine their success. Once you’ve identified these factors, you can establish risk thresholds that are closely aligned with your client’s risk tolerance.

Benchmarking your clients’ level of risk against industry standards and using a weighted scoring system to rank it from high to low can make it easier to communicate the value of your services to them—and the impact you’ll have on their business.
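A weighted scoring system like the one mentioned above might look like the following sketch. The factors, weights, and banding thresholds are assumptions for illustration, not an industry standard; each MSP would calibrate them to its own clients and benchmarks.

```python
# Illustrative weighted risk-scoring model: each factor gets a 1-5
# rating and a weight; the weighted total maps to a high/medium/low
# band for client conversations. Weights and bands are assumed.
WEIGHTS = {"likelihood": 0.4, "impact": 0.4, "exposure": 0.2}

def score(ratings):
    # ratings: factor name -> rating from 1 (low) to 5 (high)
    return sum(WEIGHTS[f] * r for f, r in ratings.items())

def band(s):
    return "high" if s >= 4 else "medium" if s >= 2.5 else "low"

client_risk = {"likelihood": 4, "impact": 5, "exposure": 3}
s = score(client_risk)    # 0.4*4 + 0.4*5 + 0.2*3 = 4.2
print(band(s))
```

A single banded score is easier to put in front of a non-technical stakeholder than raw findings, which is exactly the communication value the paragraph above describes.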

Measure Risk Reduction—Then Market It

You can use two approaches to measure risk reduction.

The quantitative approach, which is more technical, considers a server’s asset value, its exposure factor (which takes into account how often the server is left unattended and whether that server is in a protected environment), and the loss expectancy, which is related to the rate of occurrence of various risks. Taking all these factors into account, you can more accurately price your services—and your clients can make a more informed decision about whether to live with the risk or do something to mitigate it.
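The quantitative factors in the paragraph above correspond to the standard single loss expectancy (SLE) and annualized loss expectancy (ALE) formulas. The dollar figures below are invented for illustration:

```python
# Standard quantitative risk formulas; all dollar amounts are
# assumed example values, not real pricing data.
def single_loss_expectancy(asset_value, exposure_factor):
    # SLE: expected loss from a single occurrence of the risk
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle, annual_rate_of_occurrence):
    # ALE: expected loss per year
    return sle * annual_rate_of_occurrence

sle = single_loss_expectancy(asset_value=200_000, exposure_factor=0.3)  # 60,000
ale = annualized_loss_expectancy(sle, annual_rate_of_occurrence=0.5)    # 30,000

# If a $10k/yr control cuts the occurrence rate to 0.1, the residual
# annual cost (control + remaining ALE) is well below the original ALE.
residual = annualized_loss_expectancy(sle, 0.1) + 10_000                # 16,000
```

Framing a proposal this way lets the client compare the cost of your service directly against the expected annual loss of living with the risk.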

The qualitative approach is less complex. It uses available data to calculate the likelihood of a risk. You can then suggest countermeasures to ensure protection.

Whichever approach you choose, explaining your findings and suggested solutions in layman’s terms and backing up your claims with evidence helps to build trust with your clients.

It’s this trust that will persuade clients to invest in your security service—and remain satisfied customers for years to come.


This article was provided by our service partner : Connectwise

DNS Security – Your New Secret Weapon in The Fight Against Cybercrime

It’s time to use the internet to your security advantage. Did you know more than 91% of malware uses DNS to gain command and control, exfiltrate data, or redirect web traffic?

But when internet requests are resolved by a recursive DNS service, that resolution step becomes the perfect place to check for and block malicious or inappropriate domains and IPs. DNS is one of the most valuable sources of data within an organization. It should be mined regularly and cross-referenced against threat intelligence. It’s easier to do than you might think. Security teams that are not monitoring DNS for indications of compromise are missing an important opportunity.
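Conceptually, the check happens before resolution. The sketch below is a toy: the blocklist entries are invented, and real deployments enforce this inside a recursive resolver (for example via response policy zones), not in application code. It does show one detail that matters in practice: a lookup must be blocked if the domain or any parent zone is listed.

```python
# Toy sketch of DNS-layer filtering: consult a threat-intel blocklist
# before allowing a domain to resolve. Blocklist contents are assumed.
BLOCKLIST = {"malicious.example", "c2.example"}

def resolve_policy(domain):
    # Normalize, then check the domain and each of its parent zones.
    labels = domain.lower().rstrip(".").split(".")
    for i in range(len(labels) - 1):
        if ".".join(labels[i:]) in BLOCKLIST:
            return "BLOCK"
    return "ALLOW"

print(resolve_policy("evil.c2.example"))  # blocked via listed parent zone
print(resolve_policy("example.com"))
```

Because every device speaks DNS before it speaks anything else, a policy check at this layer covers traffic that never touches a web proxy or endpoint agent.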

Don’t believe us? New analysis shows widespread DNS protection could save organizations as much as $200 billion in losses every year. Check out the full report, “The Economic Value of DNS Security,” recently published by the Global Cyber Alliance (GCA). According to its findings, DNS firewalls could prevent between $19 billion and $37 billion in annual losses in the US and between $150 billion and $200 billion globally. That’s a lot of bang for your buck. If organizations around the globe made this simple addition to their security stack, the savings could add up to billions of dollars. Translation: an easy way to prevent one-third of total losses due to cybercrime.

About Cisco Umbrella

Cisco Umbrella uses the internet’s infrastructure to stop threats over all ports and protocols before they reach your endpoints or network. Using statistical and machine learning models to uncover both known and emerging threats, Umbrella proactively blocks connections to malicious destinations at the DNS and IP layers. And because DNS is a protocol used by all devices that connect to the internet, you simply point your DNS to the Umbrella global network, and any device that joins your network is protected. So when your users roam, your network stays secure.


This article was provided by our service partner: Cisco Umbrella

Offering Security Services: Should You Build, Buy, or Partner?

One size DOES NOT fit all.

Let’s consider the ‘build, buy, partner’ framework for security services, which offers three very different approaches you could take. There is no absolute right or wrong way, only what is best for your business. Explore the pros and cons of each so you can determine the right way for you.

Building Security

With this approach, you create and develop the solution using resources you own, control, or contract.

Strategy of Things gives us deeper insight into what is required to pull this off.

When to consider this approach:

  • You have the requisite skill sets and resources to do it
  • You can offer security faster, cheaper, and at lower risk
  • This is a strategic competence you own or want to own
  • There is strategic knowledge or critical intellectual property to protect
  • You are fully committed throughout the company

Pros

  • Most product control
  • Most profit opportunity

Cons

  • Longest time to market
  • High development cost

The Challenge: Hiring security resources to monitor 24/7 (emphasis on 24/7)

According to PayScale, the average salary for a cybersecurity analyst is $75,924. How much revenue would you need to earn to bring on just one analyst? Security talent is a hot commodity. Even if you can hire analysts, keeping them on will be a challenge when you’re competing with bigger businesses, or with firms that specialize in cybersecurity, which can pay more and offer more benefits.
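As a rough worked example of that revenue question: the salary figure comes from the PayScale number above, but the overhead multiplier and net profit margin below are assumptions you should replace with your own.

```python
# Rough worked example: revenue needed to fund one security analyst.
# The salary is from PayScale (per the article); the overhead multiplier
# and net margin are hypothetical assumptions, so substitute your own.
salary = 75924        # average cybersecurity analyst salary
overhead = 1.3        # assumed fully-loaded cost multiplier (benefits, taxes)
net_margin = 0.10     # assumed net profit margin of the MSP

loaded_cost = salary * overhead
revenue_needed = loaded_cost / net_margin
print(f"~${revenue_needed:,.0f} in annual revenue to fund one analyst")
```

Under these assumptions, one analyst requires close to a million dollars in annual revenue, which is why 24/7 in-house coverage is so hard for smaller MSPs.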

Buying Security

This approach could also be referred to as ‘acquiring,’ where you seek to acquire another company that specializes in a particular area (for example, cybersecurity or physical security) to bring the missing skill set under your umbrella.

Let’s take a look at the requirements needed for this approach courtesy of Strategy of Things.

When to consider this approach:

  • You don’t have the skills or resources to build, maintain, and support security
  • There is some or all of a solution in the marketplace and no need to reinvent the wheel
  • Someone can do it faster, better, and cheaper
  • You want to focus limited resources in other areas that make more sense
  • Time is critical, and you want to get to market faster
  • There is a solution in the marketplace that gives you mostly what you want

Pros

  • Shortened time to market
  • Acquiring skill sets

Cons

  • Can be costly to acquire
  • Integration takes time

The Challenge: The MSP M&A market is hot, AND it’s a seller’s market
Jim Schleckser, CEO of the Inc. CEO Project and author of Great CEOs Are Lazy, states in an article on Inc.com, “Many acquisitions fail to live up to their financial or performance expectations because the acquiring company hasn’t done its proper homework.” Take the time to do some serious research on how to take advantage of a seller’s market and find the expertise you need for M&A success. We have a couple of webinars to help you get started.

Bonus for ConnectWise partners: We’re fully invested in helping you throughout the M&A process every step of the way, including technical assistance post-acquisition from our M&A specialist.

Partnering for Security

Strategy of Things gives us insight into this approach. Cybersecurity is a specialized field that many vendors cannot address on their own and must buy or license capabilities for their solution.

The company allies itself with a complementary solution or service provider to integrate and offer a joint solution. This option enables both companies to enter a market neither could enter alone, gain access to specialized knowledge neither has, and reach the market faster.

Companies consider this approach when neither party has the full offering to get to market on their own.

Pros

  • Shortest time to market
  • Each party brings specialized knowledge or capabilities, including technology, market access, and credibility
  • It lowers the cost, time, and risk to pursue new opportunities
  • Conserves resources
  • Opportunity to learn the skill set before building something of your own

Cons

  • Least control
  • Integration cost
  • Shared gross margins

Many vendors today offer far more flexibility, making partnering an easy choice. A great example is Perch Security threat detection and response.

No matter where you are in your security journey, Perch enables you to choose your level of involvement:

  • Fully managed by Perch SOC

If you’re more of a ‘hands-off, I trust you to do your thing’ type of person/company, then you have the freedom to sit back and relax while the Perch team does their thing. They’ll only involve you when absolutely necessary and equip you with the tools to look good in front of the customer while they do all the heavy lifting.

  • Mostly managed by Perch SOC, your team reviewing or jumping in on specific issues

If you want to be aware on a high level of what’s going on in the world of threat detection but not to the level of fully geeking out, then this level of involvement is right up your alley and 100% possible with the Perch team. Get updates on the things you care about without being inundated with the things you don’t.

  • Fully manage alerts yourself

If you want to geek out on threat reports side by side with the Perch flock, you’re more than welcome to. If you have a person on your team who’s interested in security but unable to dedicate 100% of their time to it, feel free to carve out a portion of their daily responsibilities for working hand-in-hand with the Perch team. Should things change along the way and you need more or less involvement, you’re free to leverage the Perch team as needed.

Conclusion

Security isn’t solved by one single tool. It’s an ongoing journey that requires continuous assessment and refinement. Everyone has to start somewhere, but keep in mind that the starting line for you might look different than the starting line for someone else, and that’s okay. Carefully review the options at your disposal and determine which path is best for you.

“The journey of a thousand miles begins with a single step.” Lao Tzu


This article was provided by our service partner: ConnectWise

vCenter Server

Decoding the vCenter Server Lifecycle: Update and Versioning Explained

Have you ever wondered what the difference is between a vCenter Server update and a patch? Or between an upgrade and a migration? Why don’t some vCenter Server versions align? Keep reading for the answers!

Version Numbering

The first thing you should understand is vCenter Server versioning. When reviewing your vCenter Server versions, you may see many different references to versions or builds.

One of the first places you will notice a version identifier is in our release notes. Here you will see the product version listed as vCenter Server 6.7 Update 2a and the build number listed as 13643870.


Once you have upgraded or deployed your vCenter Server you will see version identifiers such as 6.7.0.31000 listed in the VMware Appliance Management Interface (VAMI). You will also see a build number, such as 13643870.

If you review the version information within your vSphere Client you will see the version listed as 6.7.0 and the build as 13639324.

You see differing versions in these places because each surface reports something different: the release notes show the full release name and the vCenter Server build; the VAMI shows the vCenter Server Appliance version in addition to the build; and the vSphere Client shows the vCenter Server version alongside the build of the vSphere Client itself.

KB2143838 is a great resource that will explain the breakdown of versioning and builds for all vCenter Server versions.
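As an illustration, the build-to-release mapping documented in KB2143838 can be thought of as a simple lookup table. The sketch below includes only the two builds mentioned in this article; the helper function is hypothetical, and a real table would cover every release.

```python
# Sketch: mapping vCenter Server build numbers to releases, in the
# spirit of VMware KB2143838. Only the two builds from this article
# are included; a real table would cover every release.
BUILD_TO_RELEASE = {
    13643870: "vCenter Server 6.7 Update 2a (appliance version 6.7.0.31000)",
    13639324: "vSphere Client shipped with 6.7 Update 2a (version 6.7.0)",
}

def identify(build):
    """Return a human-readable description for a build number."""
    return BUILD_TO_RELEASE.get(build, f"build {build}: unknown (check KB2143838)")
```

This kind of lookup is handy when correlating build numbers from the VAMI, the vSphere Client, and release notes.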

Now that we have explained the way versioning works, let’s jump into the different scenarios where VMware will increment a version.

vCenter Server Updates and Patches

What is a vCenter Server update, and how does it differ from a patch?

A vCenter Server Update is one that applies to the vCenter Server application. An update can include new features, bug fixes or updates for additional functionality. vCenter Server updates will have a dedicated set of release notes and will be hosted on the my.vmware.com download portal.

A vCenter Server patch is much more streamlined, as patches are associated with operating system and security-level updates. There are no application-related changes, and patches can target Photon OS, the Postgres DB, Java versions, and other supporting Linux libraries on the vCenter Server Appliance.

A vCenter Server patch also has no dedicated release notes, as patches are part of the rolled-up VMware vCenter Server Appliance Photon OS Security Patches. Patches are also not stored on the my.vmware.com download portal but on the alternate VMware Patch Portal. It is very important to note, as listed in the release notes, that these should not be used for any deployment or upgrade. The only reason vCenter Server ISOs are hosted on the VMware Patch Portal is for restoring your vCenter Server Appliance if you use the built-in File-Based Backup. Patches can also only be applied within one and the same update release. For example, if you are currently on 6.7 Update 1, you cannot patch directly to 6.7 Update 2b; you would first update to 6.7 Update 2a and then patch to 6.7 Update 2b.
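The “patch only within the same update release” rule can be sketched in a few lines. Modeling releases as (major, update, patch) tuples is purely an illustration, not an official VMware versioning scheme.

```python
# Sketch of the "patch only within one and the same update release" rule.
# Modeling releases as (major, update, patch) tuples is an illustration,
# not an official VMware scheme.
def can_patch(current, target):
    """A direct patch is allowed only when the major version and
    update level of both releases match."""
    return current[:2] == target[:2]

u1  = ("6.7", 1, "")    # 6.7 Update 1
u2a = ("6.7", 2, "a")   # 6.7 Update 2a
u2b = ("6.7", 2, "b")   # 6.7 Update 2b
```

So `can_patch(u1, u2b)` is false (you must update to 6.7 Update 2a first), while `can_patch(u2a, u2b)` is allowed.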

Now that we have explained the differences between a vCenter Server update and patch we can review the differences between an upgrade and migration.

vCenter Server Upgrades and Migrations

In its simplest form, a vCenter Server upgrade is defined as a major version change between vCenter Server Appliance versions. If you are running the vCenter Server Appliance 6.5 in your environment and move to vCenter Server Appliance 6.7, this is considered an upgrade.

A vCenter Server migration is defined as a major version change between vCenter Server for Windows and the vCenter Server Appliance. If you are running vCenter Server for Windows 6.5 and move to the vCenter Server Appliance 6.7, this is considered a migration. A migration within the same major version is not supported, as a migration consists of both a change of platform and an upgrade together.

In vSphere 6.5 and 6.7 an upgrade or migration of the vCenter Server is not completed in place. During the upgrade process a brand new appliance of the newer version is deployed, and based on the settings defined the data is exported from the old version and imported into the new one retaining the same FQDN, IP, Certs and UUIDs.

A back-in-time upgrade restriction is when you are unable to upgrade from one 6.5 release to a particular 6.7 release. For example, an upgrade from vSphere 6.5 Update 2d to vSphere 6.7 Update 1 is not supported due to the back-in-time nature of vSphere 6.7 Update 1: vSphere 6.5 Update 2d contains code and security fixes that are not in vSphere 6.7 Update 1 and might cause regression. When performing vCenter Server upgrades and migrations, it’s also very important to pay attention to unsupported upgrade paths, which are normally restricted because they would be back-in-time upgrades. Note, too, that just because two releases share a release date does not mean they are compatible. The best resource for supported upgrade paths is the vCenter Server release notes section titled Upgrade Notes for this Release.
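Conceptually, a pre-flight check for back-in-time paths is just a deny-list lookup. In the sketch below, only the 6.5 Update 2d to 6.7 Update 1 entry comes from this article; a real check should rely on the release notes rather than a hard-coded table.

```python
# Sketch: rejecting known back-in-time upgrade paths via a deny-list.
# Only the first entry comes from this article; always consult the
# "Upgrade Notes for this Release" section instead of a hard-coded table.
BACK_IN_TIME_PATHS = {
    ("6.5 Update 2d", "6.7 Update 1"),  # 6.7 U1 lacks fixes shipped in 6.5 U2d
}

def is_supported_upgrade(source, target):
    """Return False for upgrade paths known to go back in time."""
    return (source, target) not in BACK_IN_TIME_PATHS
```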

Conclusion

Versioning of a complex product can be difficult, but hopefully you now have a better understanding of what these numbers mean. If you have any questions feel free to post a comment below or check out any of the resources linked.


This article was provided by our service partner: VMware

How to create a Failover Cluster in Windows Server 2019

This article gives a short overview of how to create a Microsoft Windows Failover Cluster (WFC) with Windows Server 2019 or 2016. The result will be a two-node cluster with one shared disk and a cluster compute resource (computer object in Active Directory).


Preparation

It does not matter whether you use physical or virtual machines, just make sure your technology is suitable for Windows clusters. Before you start, make sure you meet the following prerequisites:

Two Windows 2019 machines with the latest updates installed. The machines have at least two network interfaces: one for production traffic, one for cluster traffic. In my example, there are three network interfaces (one additional for iSCSI traffic). I prefer static IP addresses, but you can also use DHCP.


Join both servers to your Microsoft Active Directory domain and make sure that both servers see the shared storage device available in disk management. Don’t bring the disk online yet.

The next step before we can really start is to add the “Failover clustering” feature (Server Manager > add roles and features).

Reboot your server if required. As an alternative, you can also use the following PowerShell command:

Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

After a successful installation, the Failover Cluster Manager appears in the start menu in the Windows Administrative Tools.

After you have installed the Failover Clustering feature, you can bring the shared disk online and format it on one of the servers. Don’t change anything on the second server; there, the disk stays offline.

After a refresh of the disk management, you can see something similar to this:

Server 1 Disk Management (disk status online)


Server 2 Disk Management (disk status offline)

Failover Cluster readiness check

Before we create the cluster, we need to make sure that everything is set up properly. Start the Failover Cluster Manager from the start menu and scroll down to the management section and click Validate Configuration.

Select the two servers for validation.

Run all tests. There is also a description of which solutions Microsoft supports.

After you have made sure that every applicable test passed with the status “successful,” you can create the cluster using the checkbox Create the cluster now using the validated nodes, or you can do that later. If you have errors or warnings, you can view the detailed report by clicking View Report.

Create the cluster

If you choose to create the cluster by clicking on Create Cluster in the Failover Cluster Manager, you will be prompted again to select the cluster nodes. If you use the Create the cluster now using the validated nodes checkbox from the cluster validation wizard, then you will skip that step. The next relevant step is to create the Access Point for Administering the Cluster. This will be the virtual object that clients will communicate with later. It is a computer object in Active Directory.

The wizard asks for the Cluster Name and IP address configuration.

As a last step, confirm everything and wait for the cluster to be created.

The wizard will add the shared disk automatically to the cluster per default. If you did not configure it yet, then it is also possible afterwards.

As a result, you can see a new Active Directory computer object named WFC2019.

You can ping the new computer to check whether it is online (if you allow ping on the Windows firewall).

As an alternative, you can create the cluster also with PowerShell. The following command will also add all eligible storage automatically:

New-Cluster -Name WFC2019 -Node SRV2019-WFC1, SRV2019-WFC2 -StaticAddress 172.21.237.32

You can see the result in the Failover Cluster Manager in the Nodes and Storage > Disks sections.

The picture shows that the disk is currently used as a quorum. As we want to use that disk for data, we need to configure the quorum manually. From the cluster context menu, choose More Actions > Configure Cluster Quorum Settings.

Here, we want to select the quorum witness manually.

Currently, the cluster is using the disk configured earlier as a disk witness. Alternative options are the file share witness or an Azure storage account as witness. We will use the file share witness in this example. There is a step-by-step how-to on the Microsoft website for the cloud witness. I always recommend configuring a quorum witness for proper operations. So, the last option is not really an option for production.

Just point to the path and finish the wizard.

After that, the shared disk is available for use for data.

Congratulations, you have set up a Microsoft failover cluster with one shared disk.

Next steps and backup

One of the next steps would be to add a role to the cluster, which is out of scope of this article. As soon as the cluster contains data, it is also time to think about backing up the cluster. Veeam Agent for Microsoft Windows can back up Windows failover clusters with shared disks. We also recommend backing up the entire system of each cluster member, which includes the operating system and speeds up the restore of a failed cluster node, as you don’t need to hunt for drivers and the like during a restore.


This article was provided by our service partner: Veeam

Security

Is Being Secure and Being Compliant the Same Thing?

Oftentimes, when I ask a partner if they’re offering security to their SMB customers, their answer revolves around consulting on compliance. Verticals like healthcare, financial, government, and retail are low-hanging fruit for security revenue opportunities because compliance is a requirement of being in business.

However, being secure and being compliant are NOT the same. Did you know that you can be compliant without being fully secure? While being compliant increases data protection and keeps organizations from paying hefty fines, it’s simply not enough. If that’s what you’re relying on to keep you and your customers safe, you’d be sorely mistaken.

Being compliant is like following a strict nutritionist-approved diet to stay healthy.

While that’s a good practice, and it will certainly help, it’s also very important that you know your family’s medical history and how that could impact your health in the future (your risks) so you can make necessary, and maybe even lifesaving decisions. If you ignored your risks and only stuck to a good diet, you might be blindsided at a doctor’s appointment to learn that you have a certain hereditary disease.

“If we had only caught this sooner…”

Many MSPs are approaching security when an incident occurs, while others are being proactive to meet their customer’s compliance requirements. They’re not thinking of the broader picture of risk. You need to fully understand your risks to ensure that you and your customers are secure. Don’t wait until disaster strikes.

Let’s dive into the differences between the two phrases.

Being Compliant

What does it mean to be compliant? And is that enough? Read on to find out.

Regulatory compliance describes the goal that organizations aspire to achieve in their efforts to ensure they are aware of and take steps to comply with relevant laws, policies, and regulations, such as PCI, HIPAA, GDPR, and DFARS.

We’ve heard of several companies making news headlines over security breaches. In such cases, the courts determine whether there was negligence in adhering to regulations and taking the legally required steps to properly protect data. If a company is found not to be compliant, there are heavy financial consequences.

How much are we talking? Yahoo’s loss of 3 billion user accounts cost them an estimated $350 million off their sales price.

Needless to say, there’s a big incentive for companies to cover the basics when it comes to security. However, if you stop at just being compliant, you’re essentially only doing the bare minimum, whatever is legally required.

It’s a starting point.

Being Secure

The next step is to ensure security. Go above and beyond.

According to Cisco, “Cybersecurity is the practice of protecting systems, networks, and programs from digital attacks. These cyberattacks are usually aimed at accessing, changing, or destroying sensitive information; extorting money from users; or interrupting normal business processes.”

When hackers attack your business, it’s not just your business that’s at stake. By getting access to your database, hackers gain access to all your customers. So, we could consider ensuring cybersecurity as a social responsibility (not just a legal one).

We believe in doing business this way, going above and beyond, and have adopted the NIST Cybersecurity Framework. It consists of standards, guidelines, and best practices to manage cybersecurity-related risks as an ongoing practice.

As leaders in the IT industry, we’re all constantly looking to others who are doing things well and subscribe to best practices in several other areas of business. Cybersecurity is no different.

The framework encourages identifying your risks proactively, so you can take the necessary steps in reducing and managing your risks.

How to Assess Risks

We know what you’re thinking, “Easier said than done, though, right? Just another thing to add to my to-do list.”

This process doesn’t have to be overwhelming. Knowing where to start is half the battle. Smart security offerings start with a risk assessment that allows you to proactively identify security risks across your entire business as well as your customers’, not just on their networks. The result is an easy-to-understand, customized risk report showing your customer their most critical risks and recommendations for how to remediate them.

Next Steps

The bottom line: be compliant AND secure. Start by understanding your legal compliance responsibilities to protect yourself and your customers during a disaster. Then, take it a step further—assess and fully understand your security risks and develop a plan to reduce your risks.


This article was provided by our service partner: ConnectWise

VMware vSphere

10 Things To Know About vSphere Certificate Management

With security and compliance on the minds of IT staff everywhere, vSphere certificate management is a huge topic. Decisions made here can seriously affect the effort it takes to support a vSphere deployment, and they often spark vigorous discussions among CISOs and information security staff, virtualization admins, and enterprise PKI/certificate authority admins. Here are ten things organizations should consider when choosing a path forward.

1. Certificates are about encryption and trust

Certificates are based on public key cryptography, a technique developed by mathematicians in the 1970s, both in the USA and Britain. These techniques allow someone to create two mathematical “keys,” public and private. You can share the public key with another person, who can then use it to encrypt a message that can only be read by the person with the private key.

When we think about certificates we often think of the little padlock icon in our browser, next to the URL. Green and locked means safe, and red with an ‘X’ and a big “your connection is not private” warning means we’re not safe, right? Unfortunately, it’s a lot more complicated than that. A lot of things need to be true for that icon to turn green.

When we’re using HTTPS the communications between our web browser and a server are sent across a connection protected with Transport Layer Security (TLS). TLS is the successor to Secure Sockets Layer, or SSL, but we often refer to them interchangeably. TLS has four versions now:

  • Version 1.0 has vulnerabilities, is insecure, and shouldn’t be used anymore.
  • Version 1.1 doesn’t have the vulnerabilities of 1.0, but it uses the MD5 and SHA-1 algorithms, which are both insecure.
  • Version 1.2 adds AES cryptographic ciphers that are faster, removes some insecure ciphers, and switches to SHA-256. It is the current standard.
  • Version 1.3 removes weak ciphers and adds features that increase the speed of connections. It is the upcoming standard.
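For illustration, the guidance above (TLS 1.2 as the floor, with 1.0 and 1.1 refused) can be enforced with Python’s standard-library ssl module; the same idea applies to any TLS client or server configuration.

```python
# Illustration: enforcing TLS 1.2 as the minimum protocol version with
# Python's standard-library ssl module.
import ssl

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse TLS 1.0 and 1.1
# TLS 1.3 is negotiated automatically when both sides support it.
```

Any client or server using this context will fail the handshake rather than fall back to an insecure protocol version.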

Using TLS means that your connection is encrypted, even if the certificates are self-signed and generate warnings. Signing a certificate means that someone vouches for that certificate, in much the same way as a trusted friend would introduce someone new to you. A self-signed certificate simply means that it’s vouching for itself, not unlike a random person on the street approaching you and telling you that they are trustworthy. Are they? Maybe, but maybe not. You don’t know without additional information.

To get the green lock icon you need to trust the certificate by trusting who signed it. This is where a Certificate Authority (CA) comes in. Certificate Authorities are usually specially selected and subject to rigorous security protocols, because they’re trusted implicitly by web browsers. Having a CA sign a certificate means you inherit the trust from the CA. The browser lock turns green and everything seems secure.

Having a third-party CA sign certificates can be expensive and time-consuming, especially if you need a lot of them (and nowadays you do). As a result, many enterprises create their own CAs, often using the Microsoft Active Directory Certificate Services, and teach their web browsers and computers to trust certificates signed by that CA by importing the “root CA certificates” into the operating systems.

2. vSphere uses certificates extensively

All communications inside vSphere are protected with TLS. These are mainly:

  • ESXi certificates, issued to the management interfaces on all the hosts.
  • “Machine” SSL certificates used to protect the human-facing components, like the web-based vSphere Client and the SSO login pages on the Platform Service Controllers (PSCs).
  • “Solution” user certificates used to protect the communications of other products, like vRealize Operations Manager, vSphere Replication, and so on.

The vSphere documentation has a full list. The important point here is that in a fully-deployed cluster the number of certificates can easily reach into the hundreds.

3. vSphere has a built-in certificate authority

Managing hundreds of certificates can be quite a daunting task, so VMware created the VMware Certificate Authority (VMCA). It is a supported and trusted component of vSphere that runs on a PSC or on the vCenter VCSA in embedded mode. Its job is to automate the management of certificates that are used inside a vSphere deployment. For example, when a new host is attached to vCenter it asks you to verify the thumbprint of the host ESXi certificate, and once you confirm it’s correct the VMCA will automatically replace the certificates with ones issued by the VMCA itself. A similar thing happens when additional software, like vRealize Operations Manager or VMware AppDefense is installed.
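The thumbprint you verify when attaching a host is typically a SHA-1 hash over the certificate’s DER bytes, formatted as colon-separated hex pairs. Here is a minimal Python sketch of that computation; note the PEM body below is a placeholder blob, not a real certificate.

```python
# Sketch: computing a certificate thumbprint, i.e. a SHA-1 hash over the
# DER-encoded certificate, formatted as colon-separated hex pairs.
# The PEM body below is a placeholder blob, not a real certificate.
import hashlib
import ssl

pem = (
    "-----BEGIN CERTIFICATE-----\n"
    "cGxhY2Vob2xkZXIgY2VydGlmaWNhdGUgYnl0ZXM=\n"
    "-----END CERTIFICATE-----\n"
)
der = ssl.PEM_cert_to_DER_cert(pem)  # base64-decode the PEM body
thumbprint = ":".join(f"{b:02X}" for b in hashlib.sha1(der).digest())
```

If the value computed from the certificate matches the thumbprint shown on the host, you can confirm vCenter’s prompt with confidence.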

The VMCA is part of the vCenter infrastructure and is trusted in the same way vCenter is. It’s patched when you patch your PSCs and VCSAs. It is sometimes criticized as not being a full-fledged CA but it is just-enough-CA, purpose-built to serve a vSphere deployment securely, safely, and in an automated way to make it easy to be secure.

4. There are four main ways to manage certificates

  1. First, you can just use a self-signed CA certificate. The VMCA is fully-functional once vCenter is installed and automatically creates root certificates to use for signing ESXi, machine, and solution certificates. You can download the root certificates from the main vCenter web page and import them into your operating systems to establish trust and turn the browser lock icon green for both vCenter and ESXi. This is the easiest solution but it requires you to accept a self-signed CA root certificate. Remember, though, that we trust vCenter, so we trust the VMCA.
  2. Second, you can make the VMCA an intermediate, or subordinate, CA. We do not recommend this (see below).
  3. Third, you can disable the VMCA and use custom certificates for everything. To do this you can ask the certificate-manager tool to generate Certificate Signing Requests (CSRs) for everything. You take those to a third-party CA, have them signed, and then install them all manually. This is time-consuming and error-prone.
  4. Fourth, you can use “hybrid” mode to replace the machine certificates (the human-facing certificates for vCenter) with custom certificates, and let the VMCA manage everything else with its self-signed CA root certificates. All users of vCenter would then see valid, trusted certificates. If the virtualization infrastructure admin team desires they can import the CA root certificates to just their workstations and then they’ll have green lock icons for ESXi, too, as well as warnings if there is an untrusted certificate. This is the recommended solution for nearly all customers because it balances the desire for vCenter security with the realities of automation and management.

5. Enterprise CAs are self-signed, too

“But wait,” you might be thinking, “we are trying to get rid of self-signed certificates, and you’re advocating their use.” True, but think about it this way: enterprise CAs are self-signed, too, and you have decided to trust them. Now you simply have two CAs, and while that might seem like a problem it really means that a separation exists between the operators of the enterprise CA and the virtualization admin team, for security, organizational politics, staff workload management, and even troubleshooting. Because we trust vCenter, as the core of our virtualization management, we also implicitly trust the VMCA.

6. Don’t create an intermediate CA

You can create an intermediate CA, also known as a subordinate CA, by issuing the VMCA a root CA certificate capable of signing certificates on behalf of the enterprise CA and using the Certificate Manager to import it. While this has applications, it is generally regarded as unsafe because anybody with access to that CA root key pair can now issue certificates as the enterprise CA. We recommend maintaining the security & trust separation between the enterprise CA and the VMCA and not using the intermediate CA functionality.

7. You can change the information on the self-signed CA root certificate

Using the Certificate Manager utility you can generate new VMCA root CA certificates with your own organizational information in them, and the tool will automate the reissue and replacement of all the certificates. This is a popular option with the Hybrid mode, as it makes the self-signed certificates customized and easy to identify. You can also change the expiration dates if you dislike the defaults.

8. Test, test, test!

The only way to truly be comfortable with these types of changes is to test them first. The best way to test is with a nested vSphere environment, where you install a test VCSA as well as ESXi inside a VM. This is an incredible way to test vSphere, especially if you shut it down and take a snapshot of it. Then, no matter what you do, you can restore the test environment to a known good state. See the links at the end for more information on nested ESXi.

Another interesting option is using the VMware Hands-on Labs to experiment with this. Not only are the labs a great way to learn about VMware products year-round, they’re also great for trying unscripted things out in a low-risk way. Try the new vSphere 6.7 Lightning Lab!

9. Make backups

When the time comes to do this for real, make sure you have a good file-based backup of your vCenter and PSCs using the VAMI interface. Additionally, the Certificate Manager utility backs up the old certificates so you can restore them if things go wrong (only one set, though, so think that through). If things do not go as planned or tested, know that these operations are fully supported by VMware Global Support Services, who can walk you through resolving any problem you might encounter.

10. Know why you’re doing this

In the end the choice of how you manage vSphere certificates depends on what your goals are.

  • Do you want that green lock icon?
  • Does everybody need the green lock icon for ESXi, or just the virtualization admin team?
  • Do you want to get rid of self-signed certificates, or are you more interested in establishing trust?
  • Why do you trust vCenter as the core of your infrastructure but not a subcomponent of vCenter?
  • What is the difference in trust between the enterprise self-signed CA root and the VMCA self-signed CA root?
  • Is this about compliance, and does the compliance framework truly require custom CA certificates?
  • What is the cost, in staff time and opportunity cost, of ignoring the automated certificate solution in favor of manual replacements?
  • Does the solution decrease or increase risk, and why?

Whatever you decide know that thousands of organizations across the world have asked the same questions, and out of the discussions have come good understandings of certificates & trust as well as better relations between security and virtualization admin teams.


This article was provided by our service partner: VMware