Posts

Technical Support

Top Pitfalls of the Internal IT Team (and How to Avoid Them)

To handle the deluge of daily tasks, your internal IT team needs to run like a well-oiled machine. From managing security and ticket flow to conducting routine maintenance and proactive monitoring, your team requires expert efficiency to stay at the top of their game.

But all too often, common pitfalls can complicate your to-do list, creating extra work for your team. Recognizing these time traps is the first step to avoiding them; the second is developing a foolproof plan to keep them from recurring.

Pitfall #1: Windows 10 and the Perils of Patching

Consider this: When a new patch is released, hackers immediately swoop in to compare the update to the preexisting operating system. This helps them identify where the security loophole is—then use the information to exploit end users and corporations that are slow to apply the fix. In 2018, attackers exploited patch updates to steal valuable personal data on users and payment information. While these instances only account for 6% of the year’s total breaches, the negative consequences for those attacked are profound.

For years, Patch Tuesday helped IT teams keep track of Microsoft® software updates. But with Windows 10, system fixes are no longer released on such a predictable schedule. That doesn’t mean patching stops being a top priority, though. Managing patching is essential to safeguarding your software and machines against external threats.

The best way to avoid this common pitfall is to standardize your team’s policy for automatic Windows 10 patching. Set up alerts to update your team as soon as a patch is released—and enable broad discovery capabilities that cover your company’s entire inventory of production systems. Remember: It only takes one vulnerable computer to put your entire network at risk.
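As a rough illustration, the discovery step can be as simple as comparing each machine’s installed updates against the latest required patch and alerting on any gaps. The sketch below is illustrative only: the hostnames, KB numbers, and inventory format are assumptions, not any particular product’s API.

```python
# Hypothetical inventory: each entry lists the updates a machine reports.
LATEST_KB = "KB4467691"  # assumed most recent cumulative update

inventory = [
    {"host": "ws-101", "installed_kbs": {"KB4464330", "KB4467691"}},
    {"host": "ws-102", "installed_kbs": {"KB4464330"}},
    {"host": "srv-01", "installed_kbs": {"KB4467691"}},
]

def unpatched_hosts(machines, required_kb):
    """Return hostnames that do not report the required update installed."""
    return [m["host"] for m in machines if required_kb not in m["installed_kbs"]]

for host in unpatched_hosts(inventory, LATEST_KB):
    print(f"ALERT: {host} is missing {LATEST_KB}")
```

In practice, an RMM tool would populate the inventory automatically; the point is that the check runs against every production system, not just the ones someone remembers to look at.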

Pitfall #2: Incorrect Ticket Routing

Your internal IT team may field countless help desk tickets a day, and to maintain your high level of customer service, the pressure is on to correctly route each ticket to an expert technician. But with manual routing, there are endless opportunities for mistakes. What’s more, if your team isn’t routing tickets to the most knowledgeable resource, you’re creating delays that can throw off the entire routing process—which is bad for business.

To avoid the obstacles that come with ticket routing, consider ditching manual in favor of an automatic ticket routing program. Workflow automation is quickly becoming an industry standard. As more IT teams make the switch, it’s becoming increasingly important that you do the same.
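A minimal version of automated routing can be sketched as a set of keyword rules evaluated in priority order. The queue names and keywords below are assumptions for illustration, not features of any specific product.

```python
# Rules are checked in order, so higher-priority queues come first.
ROUTING_RULES = [
    ({"vpn", "firewall", "phishing", "malware"}, "security"),
    ({"printer", "monitor", "laptop", "keyboard"}, "hardware"),
    ({"outlook", "email", "calendar"}, "messaging"),
]
DEFAULT_QUEUE = "triage"  # fallback when no rule matches

def route_ticket(subject):
    """Pick a queue for a ticket based on words in its subject line."""
    words = set(subject.lower().split())
    for keywords, queue in ROUTING_RULES:
        if words & keywords:
            return queue
    return DEFAULT_QUEUE

print(route_ticket("Suspected phishing email"))  # security (rule order wins over "email")
print(route_ticket("Cannot open spreadsheet"))   # triage
```

Real products use far richer signals than subject keywords, but even this toy example removes the per-ticket human decision that manual routing depends on.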

Pitfall #3: The Flaws of Manual Processes

Try as you may, human error is impossible to avoid. And in a complex IT environment, manual processes create the potential for errors that can put your entire workflow—and your system’s security—at risk.

What if a real emergency hits and your team needs to respond quickly? In this instance, the small, day-to-day IT tasks should be set aside in order to deal with the bigger problem. But if these tasks still follow a manual process, setting them aside is risky: once forgotten, they may never be picked back up.

The best way to avoid this scenario is by nixing manual processes in favor of automation, whenever possible. For cases where manual is still essential, make sure the process is formalized—and that your team is fully trained to follow protocol.

Pitfall #4: Maintaining an Inventory of Assets

It’s up to you to keep your team’s project on track—but when information is owned by multiple managers and dispersed across countless spreadsheets and documents, project management can be next to impossible. And the more complex the project, the larger your inventory of assets. Talk about an organizational pitfall.

To avoid asset chaos, you need to find a solution that allows key documents, data, and configurations to be readily available to the team members who need them most. This will cut down on the time you spend pinging John for that report or chasing down Sarah for that serial number.

There are a number of solutions that allow for easy asset inventories. Many companies opt for free options, like Google Drive, to cut down on costs. But in doing so, you often lose out on optimal information security. More advanced options at a price are designed to expertly guard and intuitively aggregate assets—meaning everything is kept organized and safe.

Pitfall #5: Repetitive Admin Tasks

There are some things in life we, unfortunately, can’t avoid. Admin tasks often fall into that category. But these day-to-day to-do items, like tracking time, aren’t just tedious—they take valuable time away from other, more vital tasks. When this happens, either the admin tasks aren’t completed, or the more important responsibilities aren’t completed up to par—a no-win situation.

Ask yourself a question: Is your time really best spent making sure your techs are entering their time or tracking down that rogue endpoint? No, probably not. In lieu of hiring a full-time administrative assistant, try using a program that can consistently complete admin tasks. This way, you can focus your attention on bigger, more important projects while the tedious—but necessary—tasks get done.

Pitfall #6: More Reactive Than Proactive

As an expert IT professional, you can spend a lot of time putting out fires. System bugs, holes in security—whatever the issue, once you fall into the habit of taking a reactive approach to problems, you’re already losing efficiency. And when you’re inefficient, end users’ productivity plummets.

A proactive approach to solving internal infrastructure issues is far superior, allowing you to fix problems before they happen. The right software can make this process that much easier, but before choosing one, consider these two key components of proactive IT problem-solving.

First, you need the ability to easily monitor and remotely control sessions. This will give you valuable insight into your team’s workflow and efficiency. Second, search for a program that facilitates system response monitoring. This will help improve your overall response time, so you’ll spend less time putting fires out. Renewed speed will also impress your end users, earning your team a reputation for efficiency.

With the right product and processes in place, your team will gain a firmer grip on proactive operations—and be more prepared to tackle reactive situations.


This article was provided by our service partner Connectwise.com

webroot

What’s Next? Webroot’s 2019 Cybersecurity Predictions

At Webroot, we stay ahead of cybersecurity trends in order to keep our customers up-to-date and secure. As the end of the year approaches, our team of experts has gathered their top cybersecurity predictions for 2019. What threats and changes should you brace for?

General Data Protection Regulation Penalties

“A large US-based tech company will get hammered by the new GDPR fines.” – Megan Shields, Webroot Associate General Counsel

When the General Data Protection Regulation (GDPR) became law in the EU last May, many businesses scrambled to implement the required privacy protections. In anticipation of this challenge for businesses, it seemed as though the Data Protection Authorities (the governing organizations overseeing GDPR compliance) were giving them time to adjust to the new regulations. However, it appears that time has passed. European Data Protection Supervisor Giovanni Buttarelli spoke with Reuters in October and said the time for issuing penalties is near. With GDPR privacy protection responsibilities now incumbent upon large tech companies with millions—if not billions—of users, as well as small to medium-sized businesses, noncompliance could mean huge penalties.

GDPR fines will depend on the specifics of each infringement, but companies could face damages of up to 4% of their worldwide annual turnover, or up to 20 million Euros, whichever is greater. For example, if the GDPR had been in place during the 2013 Yahoo breach affecting 3 billion users, Yahoo could have faced anywhere from $80 million to $160 million in fines. It’s also important to note that Buttarelli specifically mentions the potential for bans on processing personal data, at Data Protection Authorities’ discretion, which would effectively suspend a company’s data flows inside the EU.
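The “whichever is greater” rule means the exposure scales with company size but never drops below the 20 million Euro floor. A quick calculation makes this concrete:

```python
def max_gdpr_fine(annual_turnover_eur):
    """Upper bound of a GDPR fine: the greater of 4% of worldwide
    annual turnover or EUR 20 million."""
    return max(0.04 * annual_turnover_eur, 20_000_000)

# A company with EUR 2 billion in turnover: 4% is EUR 80 million.
print(max_gdpr_fine(2_000_000_000))  # 80000000.0
# A firm with EUR 10 million in turnover falls back to the EUR 20M floor.
print(max_gdpr_fine(10_000_000))     # 20000000
```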

AI Disruption

“Further adoption of AI leading to automation of professions involving low social intelligence and creativity. It will also give birth to more advanced social engineering attacks.” – Paul Barnes, Webroot Sr. Director of Product Strategy

The Fourth Industrial Revolution is here and the markets are beginning to feel it. Machine learning algorithms and applied artificial intelligence programs are already infiltrating and disrupting top industries. Several of the largest financial institutions in the world have integrated artificial intelligence into aspects of their businesses. Often these programs use natural language processing—giving them the ability to handle customer-facing roles more easily—to boost productivity.

From a risk perspective, new voice manipulation techniques and face mapping technologies, in conjunction with other AI disciplines, will usher in a new dawn of social engineering that could be used in advanced spear-phishing attacks to influence political campaigns or even policy makers directly.

AI Will Be Crucial to the Survival of Small Businesses

“AI and machine learning will continue to be the best way to respond to velocity and volume of malware attacks aimed at SMBs and MSP partners.” – George Anderson, Product Marketing Director

Our threat researchers don’t anticipate a decline in threat volume for small businesses in the coming year. Precise attacks, like those targeting RDP tools, have been on the rise and show no signs of tapering. Beyond that, the sheer volume of data handled by small businesses of all types raises the probability and likely severity of a breach.

If small and medium-sized businesses want to keep their IT teams from being inundated and overrun with alerts, false positives, and remediation requests, they’ll be forced to work AI and machine learning into their security solutions. Only machine learning can automate security intelligence accurately and effectively enough to enable categorization and proactive threat detection in near real time. By taking advantage of cloud computing platforms like Amazon Web Services, machine learning has the capability to scale with the increasing volume and complexity of modern attacks, while remaining within reach in terms of price.

Ransomware is Out, Cryptojacking is In

“We’ll see a continued decline in commodity ransomware prevalence. While ransomware won’t disappear, endpoint solutions are better geared to defend against suspicious ransom-esque actions and, as such, malware authors will turn to either more targeted attacks or more subtle cryptocurrency mining alternatives.” – Eric Klonowski, Webroot Principal Threat Research Analyst

Although we’re unlikely to see the true death of ransomware, it does seem to be in decline. This is due in large part to the success of cryptocurrency and the overwhelming demand for the large amounts of computing power required for cryptomining. Hackers have seized upon this as a less risky alternative to ransomware, leading to the emergence of cryptojacking.

Cryptojacking is the now too-common practice of injecting software into an unsuspecting system and using its latent processing power to mine for cryptocurrencies. This resource theft drags systems down, but is often stealthy enough to go undetected. We are beginning to feel the pinch of cryptojacking in critical systems, with a cryptomining operation recently being discovered on the network of a water utility system in Europe. This trend is on track to continue into the New Year, with detected attacks increasing by 141% in the first half of 2018 alone.

Targeted Attacks

“Attacks will become more targeted. In 2018, ransomware took a back seat to cryptominers and banking Trojans to an extent, and we will continue to see more targeted and calculated extortion of victims, as seen with the Dridex group. The balance between cryptominers and ransomware is dependent upon the price of cryptocurrency (most notably Bitcoin), but the money-making model of cryptominers favors its continued use.” – Jason Davison, Webroot Advanced Threat Research Analyst

The prominence of cryptojacking in cybercrime circles means that, when ransomware appears in the headlines, it will be for calculated, highly-targeted attacks. Cybercriminals are now researching systems ahead of time, often through backdoor access, enabling them to encrypt their ransomware against the specific antivirus applications put in place to detect it.

Government bodies and healthcare systems are prime candidates for targeted attacks, since they handle sensitive data from large swaths of the population. These attacks often have costs far beyond the ransom itself. The City of Atlanta is currently dealing with $17 million in post-breach costs. (Their perpetrators asked for $51,000 in Bitcoin, which the city refused to pay.)

The private sector won’t be spared from targeting, either. A recent Dharma Bip ransomware attack on a brewery involved attackers posting the brewery’s job listing on an international hiring website and submitting a resume attachment with a powerful ransomware payload.

Zero Day Vulnerabilities

“Because the cost of exploitation has risen so dramatically over the course of the last decade, we’ll continue to see a drop in the use of zero days in the wild (as well as associated private exploit leaks). Without a doubt, state actors will continue to hoard these for use on the highest-value targets, but expect to see a stop in Shadowbrokers-esque occurrences. Leaks probably served as a powerful wake-up call internally with regards to access to these utilities (or perhaps where they’re left behind).” – Eric Klonowski, Webroot Principal Threat Research Analyst

Though the cost of effective, zero-day exploits is rising and demand for these exploits has never been higher, we predict a decrease in high-profile breaches. Invariably, as large software systems become more adept at preventing exploitation, the amount of expertise required to identify valuable software vulnerabilities increases with it. Between organizations like the Zero Day Initiative working to keep these flaws out of the hands of hackers and governmental bodies and intelligence agencies stockpiling security flaws for cyber warfare purposes, we are likely to see fewer zero day exploits in the coming year.

However, with the average time between the initial private discovery and the public disclosure of a zero day vulnerability being about 6.9 years, we may just need to wait before we hear about it.

The take-home? Pay attention, stay focused, and keep an eye on this space for up-to-the-minute information about cybersecurity issues as they arise.


This article was provided by our service partner : Webroot

Patch Management Practices

Patch Management Practices to Keep Your Clients Secure

Develop a Policy of Who, What, When, Why, and How for Patching Systems

The first step in your patch management strategy is to come up with a policy around the entire patching practice. Planning in advance enables you to go from reactive to proactive—anticipating problems and developing policies to handle them.

The right patch management policy will answer the who, what, when, why, and how for when you receive a notification of a critical vulnerability in a client’s software.

Create a Process for Patch Management

Now that you’ve figured out the overall patch management policy, you need to create a process for handling each patch as it’s released.

Your patch management policy should be explicit within your security policy, and you should consider Microsoft’s® six-step process when tailoring your own. The steps include:

Notification: You’re alerted about a new patch to eliminate a vulnerability. How you receive the notification depends on which tools you use to keep systems patched and up to date.

Assessment: Based on the patch rating and configuration of your systems, you need to decide which systems need the patch and how quickly they need to be patched to prevent an exploit.

Obtainment: Like the notification, how you receive the patch will depend on the tools you use. Patches can be deployed either manually or automatically, based on your determined policy.

Testing: Before you deploy a patch, you need to test it on a test bed network that simulates your production network. All networks and configurations are different, and Microsoft can’t test for every combination, so you need to test and make sure all your clients’ networks can properly run the patch.

Deployment: Deployment of a patch should only be done after you’ve thoroughly tested it. Even after testing, be careful and don’t apply the patch to all your systems at once. Incrementally apply patches and test the production server after each one to make sure all applications still function properly.

Validation: This final step is often overlooked. Validating that the patch was applied is necessary so you can report on the status to your client and ensure agreed service levels are met.
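The six steps above can be modeled as a simple per-patch checklist that refuses to run steps out of order. The class below is an illustrative sketch, not part of any patching tool; the step names follow the article, everything else is assumed.

```python
STEPS = ["notification", "assessment", "obtainment",
         "testing", "deployment", "validation"]

class PatchRecord:
    """Track one patch through the six-step process, in order."""

    def __init__(self, kb_id):
        self.kb_id = kb_id
        self.completed = []

    def complete(self, step):
        # Refuse to mark a step done unless all earlier steps are finished.
        expected = STEPS[len(self.completed)]
        if step != expected:
            raise ValueError(f"{self.kb_id}: expected '{expected}', got '{step}'")
        self.completed.append(step)

    @property
    def validated(self):
        return len(self.completed) == len(STEPS)

patch = PatchRecord("KB4467691")
for step in STEPS[:4]:
    patch.complete(step)
print(patch.validated)  # False: deployment and validation still pending
```

Encoding the order explicitly is one way to keep the often-overlooked validation step from being silently skipped.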

Be Persistent in Applying the Best Practices

For your patch management policies and processes to be effective, you need to be persistent in applying them consistently. With new vulnerabilities and patches appearing almost daily, you need to be vigilant to keep up with all the changes.

Patch management is an ongoing practice. To ensure you’re consistently applying patches, it’s best to follow a series of repeatable, automated practices. These practices include:

  • Regular rediscovery of systems that may potentially be affected
  • Scanning those systems for vulnerabilities
  • Downloading patches and patch definition databases
  • Deploying patches to systems that need them
Take Advantage of Patching Resources

Since the release of Windows 10, updates to the operating system are on a more fluid schedule. Updates and patches are now being released as needed and not on a consistent schedule. You’ll need to let your team know when an applicable update is released to ensure the patch can be tested and deployed as soon as possible.

As the number of vulnerabilities and patches rise, you’ll need to have as much information about them as you can get. There are a few available resources we recommend to augment your patch management process and keep you informed of updates that may fall outside of the scope of Microsoft updates.

Utilize Patching Tools

You don’t want your technicians spending most of their time approving and applying patches on individual machines, especially as your business grows and you take on more clients. To take the burden off your technicians, you’ll want to utilize a tool that can automate your patch management processes. This can be accomplished with a remote monitoring and management (RMM) platform, like ConnectWise Automate®. Add-ons can be purchased to manage third-party application patching to shore up all potential vulnerabilities.

Patch management is a fundamental service provided in most managed service provider (MSP) service plans. With these best practices, you’ll be able to develop a patch management strategy to best serve your clients and their specific needs.


This article was provided by our service partner : connectwise.com

msp evolving threats

MSP Responding to Risk in an Evolving Threat Landscape

There’s a reason major industry players have been discussing cybersecurity more and more: the stakes are at an all-time high for virtually every business today. Cybersecurity is not a matter businesses can afford to push off or misunderstand—especially small and medium-sized businesses (SMBs), which have emerged as prime targets for cyberattacks. The risk level for this group in particular has increased exponentially, with 57% of SMBs reporting an increase in attack volume over the past 12 months, and the current reality—while serious—is actually quite straightforward for managed service providers (MSPs):

  • Your SMB clients will be attacked.
  • Basic security will not stop an attack.
  • The MSP will be held accountable.

While MSPs may have historically set up clients with “effective” security measures, the threat landscape is changing and the evolution of risk needs to be properly, and immediately, addressed. This means redefining how your clients think about risk and encouraging them to respond to the significant increase in attack volume with security measures that will actually prove effective in today’s threat environment.

Even if the security tools you’ve been leveraging are 99.99% effective, risk has evolved from minimal to material due simply to the fact that there are far more security events per year than ever before.
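That claim is just arithmetic: even a 99.99% per-event block rate compounds into material annual risk once event counts climb. Treating events as independent for illustration:

```python
def breach_probability(per_event_block_rate, events_per_year):
    """Chance that at least one event slips through, assuming independence."""
    return 1 - per_event_block_rate ** events_per_year

# At 1,000 events a year, a 0.01% per-event miss rate compounds to ~9.5%.
print(round(breach_probability(0.9999, 1_000), 3))   # 0.095
# At 10,000 events a year, the odds of at least one miss approach 2 in 3.
print(round(breach_probability(0.9999, 10_000), 3))  # 0.632
```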

Again, the state of cybersecurity today is pretty straightforward: with advanced threats like rapidly evolving and hyper-targeted malware, ransomware, and user-enabled breaches, foundational security tools aren’t enough to keep SMB clients secure. Their data is valuable, and there is real risk of a breach if they remain vulnerable. Additional layers of security need to be added to the equation to provide holistic protection. Otherwise, your opportunity to fulfill the role of your clients’ managed security services provider will be missed, and your SMB clients could be exposed to existential risk.

Steps for Responding to Heightened Risk as an MSP

 

Step 1: Understand Risk

Start by discussing “acceptable risk.” Your client should understand that there will always be some level of risk in today’s cyber landscape. Working together to define a business’s acceptable risk, and to determine what it will take to maintain an acceptable risk level, will solidify your partnership. Keep in mind that security needs to be both proactive and reactive in its capabilities for risk levels to remain in check.

Step 2: Establish Your Security Strategy

Once you’ve identified where the gaps in your client’s protection lie, map them to the type of security services that will keep those risks constantly managed. Providing regular visibility into security gaps, offering cybersecurity training, and leveraging more advanced and comprehensive security tools will ultimately get the client to their desired state of protection—and that should be clearly communicated upfront.

Step 3: Prepare for the Worst

At this point, it’s not a question of if SMBs will experience a cyberattack, but when. That’s why it’s important to establish ongoing, communicative relationships with all clients. Assure clients that your security services will improve their risk level over time, and that you will maintain acceptable risk levels by consistently identifying, prioritizing, and mitigating gaps in coverage. This essentially justifies additional costs and opens you to upsell opportunities over the course of your relationship.

Step 4: Live up to Your Promises Through People, Processes, and Technology

Keeping your security solutions well-defined and client communication clear will help validate your offering. Through a combination of advanced software and services, you can build a framework that maps to your clients’ specific security needs so you’re providing the technologies that are now essential for securing their business from modern attacks.

Once you understand how to effectively respond to new and shifting risks, you’ll be in the best possible position to keep your clients secure and avoid potentially debilitating breaches.

Windows Server 2019

Windows Server 2019 and what we need to do now: Migrate and Upgrade!

IT pros around the world were happy to hear that Windows Server 2019 is now generally available, and since then there have been some changes to the release. This is a huge milestone, and I would like to offer congratulations to the Microsoft team for launching the latest release of this amazing platform as a big highlight of Microsoft Ignite.

As important as this new operating system is now, there is an important subtle point that I think needs to be raised now (and don’t worry – Veeam can help). This is the fact that both SQL Server 2008 R2 and Windows Server 2008 R2 will soon have extended support ending. This can be a significant topic to tackle as many organizations have applications deployed on these systems.

What is the right thing to do today to prepare for leveraging Windows Server 2019? I’m convinced there is no single answer on the best way to address these systems; rather the right approach is to identify options that are suitable for each workload. This may also match some questions you may have. Should I move the workload to Azure? How do I safely upgrade my domain functional level? Should I use Azure SQL? Should I take physical Windows Server 2008 R2 systems and virtualize them or move to Azure? Should I migrate to the latest Hyper-V platform? What do I do if I don’t have the source code? These are all indeed natural questions to have now.

These are questions we need to ask today to move to Windows Server 2019, but how do we get there without any surprises? Let me re-introduce you to the Veeam DataLab. This technology was first launched by Veeam in 2010 and has evolved in every release and update since. Today, this technology is just what many organizations need to safely perform tests in an isolated environment to ensure that there are no surprises in production. The figure below shows a data lab:

windows 2008 eol

Let’s deconstruct this a bit first. An application group is an application you care about — and it can include multiple VMs. The proxy appliance isolates the DataLab from the production network yet reproduces the IP space in the private network without interference via a masquerade IP address. With this configuration, the DataLab allows Veeam users to test changes to systems without risk to production. This can include upgrading to Windows Server 2019, changing database versions, and more. Over the coming weeks and months, I’ll be writing a more comprehensive whitepaper that will take you through the process of setting up a DataLab and performing specific tasks, like upgrading to Windows Server 2019 or a newer version of SQL Server, as well as migrating to Azure.

Another key technology where Veeam can help is the ability to restore Veeam backups to Microsoft Azure. This technology has been available for a long while and is now built into Veeam Backup & Replication. This is a great way to get workloads into Azure with ease starting from a Veeam backup. Additionally, you can easily test other changes to Windows and SQL Server with this process — put it into an Azure test environment to test the migration process, connectivity and more. If that’s a success, repeat the process as part of a planned migration to Azure. This cloud mobility technique is very powerful and is shown below for Azure:

Windows 2008 EOL

Why Azure?

This is because Microsoft announced that Extended Security Updates will be available for FREE in Azure for Windows Server 2008 R2 for an additional three years after the end-of-support deadline. Customers can rehost these workloads to Azure with no application code changes, giving them more time to plan for their future upgrades. Read more here.

What also is great about moving workloads to Azure is that this applies to almost anything that Veeam can back up. Windows Servers, Linux Agents, vSphere VMs, Hyper-V VMs and more!

Migrating to the latest platforms is a great way to stay in a supported configuration for critical applications in the data center. The difference is being able to do the migration without any surprises and with complete confidence. This is where Veeam’s DataLabs and Veeam Recovery to Microsoft Azure can work in conjunction to provide you a seamless experience in migrating to the latest SQL and Windows Server platforms.

Have you started testing Windows Server 2019? How many Windows Server 2008 R2 and SQL Server 2008 systems do you have? Let’s get DataLabbing!

Vendor management

An MSP Guide to Happy Customers

Shawn Lazarus brings a fresh take to marketing strategy with his engineering background and global perspective. He currently oversees product marketing, social media, digital marketing, and brand management for OnPage, which helps MSPs serve their clients better.

An MSP’s ability to do effective work depends on their technical expertise. However, their ability to ensure customer satisfaction is what really grows their business. This is because high levels of customer satisfaction retain current clients and win over future ones. Given this reality, an MSP needs to be strategic about how they approach their work as every interaction is an opportunity to improve customer satisfaction.

With a few customer service guidelines, you can ensure you’re not only impressing potential clients, but also keeping existing ones happy. Not sure where to start? The tips below will help you improve communications with your clients, as well as establish business practices that reinforce the importance of customer relationships.

Develop a Strong Onboarding Policy

The first way to establish a strong foundation for customer satisfaction is to develop a strong onboarding policy. Onboarding defines the process of how new customers get integrated into your company’s workflow. The onboarding process should include straightforward tasks such as cataloging a customer’s infrastructure, migrating the customer to a standard set of technology offerings, refreshing old hardware, and rolling in standard MSP tool sets.

However, the most important aspect of onboarding is documentation. Document the key processes for any new client you onboard so that when their technology is not functioning properly, your engineers can come in and properly assess and diagnose the problem. This documentation includes important items such as site details, site plans, and credentials for logging in to important software.

These expectations should be articulated during customer training so that the customer is apprised of how to contact your company and what to expect when they call during business hours or after hours. After their onboarding is complete, customers should know the answer to basic service questions, such as how long they should expect to wait until someone returns their call.

Set Client Expectations with SLAs

After onboarding a new client, a good MSP will provide them with a plan that offers a clearly defined time period in which issues will be fixed. This plan will also promise the client updates on all key stages of the incident resolution process. This is the essence of the MSP’s service level agreement (SLA). A strong SLA will help ensure you’re managing expectations, communicating effectively, and executing properly.

  • Manage Expectations: Focus on setting expectations around the time it will take you to call customers back, how long it will take until a tech is on site to fix an issue, and what a typical maintenance schedule will look like. Map out the customer journey and describe what the customer can expect from your business from beginning to end.
  • Effectively Communicate: There is often a lot of stumbling with communication. For effective client communications, develop a plan that helps you articulate the big picture. The plan should explain what happens when an issue arises for the customer and how you will respond. If there is the need for disaster recovery, how will you handle the process to get them back online?
  • Execute Properly: Effective execution is about getting the project done in a timely manner. When you’re unable to effectively execute, it’s likely because you lack either the time or personnel to deliver on what you’ve promised. You can accomplish this with two essential elements: collaboration and coordination. A business management software like ConnectWise Manage® enables you to easily centralize documentation, assign action items, track progress toward due dates, and report down to individual tasks seamlessly—ensuring no late or unfinished projects. In addition, employing an Incident Management tool like OnPage allows you to effectively execute alerts that come in from ConnectWise Manage in a timely manner. MSPs can equip even the smallest of staffs with an Incident Management Platform that allows the team to work after hours in shifts and handle alerts that are deftly sent to their smartphones, so they can attend to the alerts anytime, anywhere.
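The “manage expectations” and “execute properly” pieces come together when response times are checked mechanically against the SLA rather than spot-checked. The priority names and target minutes below are illustrative assumptions, not values from ConnectWise Manage or OnPage.

```python
# Assumed first-response targets per priority, in minutes.
SLA_RESPONSE_MINUTES = {"critical": 30, "high": 120, "normal": 480}

def sla_breaches(tickets):
    """Return IDs of tickets whose first response exceeded the SLA target."""
    return [
        t["id"]
        for t in tickets
        if t["response_minutes"] > SLA_RESPONSE_MINUTES[t["priority"]]
    ]

tickets = [
    {"id": 101, "priority": "critical", "response_minutes": 25},
    {"id": 102, "priority": "high", "response_minutes": 180},
    {"id": 103, "priority": "normal", "response_minutes": 460},
]
print(sla_breaches(tickets))  # [102]
```

Running a check like this on every ticket is what turns an SLA from a promise into something you can demonstrate to the client.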
Leverage Automation

How much time is your staff spending on repetitive (yet necessary) tasks? How much more time would they have during the day if some of these tasks were automated? When repetitive yet necessary tasks are performed automatically with the right tools, the work is completed quickly and with a lower risk of human error. This also enables your staff to apply their advanced knowledge and expertise to more critical projects.

Some repeatable tasks you can automate include ticket prioritization, routine maintenance, software upgrades, disk maintenance, patching, and virus remediation. With tools handling these processes, your staff can spend more time on projects, planning, or providing proactive service to your customers, keeping them happy.
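As a rough illustration of what automated ticket prioritization can look like under the hood, here is a minimal rule-based sketch in Python. The keywords and priority tiers are hypothetical assumptions, not taken from any particular PSA or RMM product:

```python
# Minimal sketch of rule-based ticket prioritization.
# Keywords and priority tiers are illustrative assumptions,
# not taken from any specific PSA or RMM product.

PRIORITY_RULES = [
    ("critical", ["outage", "ransomware", "server down", "data loss"]),
    ("high", ["backup failed", "cannot log in", "email down"]),
    ("normal", ["slow", "printer", "request"]),
]

def prioritize(ticket_summary: str) -> str:
    """Return a priority tier based on keywords in the ticket summary."""
    text = ticket_summary.lower()
    for priority, keywords in PRIORITY_RULES:
        if any(kw in text for kw in keywords):
            return priority
    return "normal"  # default when no rule matches

print(prioritize("Exchange server down at HQ"))   # critical
print(prioritize("Printer jam on 2nd floor"))     # normal
```

Real tools apply far richer rules (client, asset, SLA, history), but the payoff is the same: the routing decision is made instantly and consistently, with no human in the loop.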

To take full advantage of the benefits of business automation, you'll need tools such as a remote monitoring and management (RMM) system, a ticketing system, a customer relationship management (CRM) system, an anti-virus solution, a remote access tool, a malware detection solution, a critical alerting tool, and a PBX system, depending on your business's specific needs.

While keeping customers happy sounds like it should be straightforward, anyone who has worked as an MSP knows that customer satisfaction is the secret sauce that separates one MSP from another. Simple things like clear communications with your clients and introducing a few new tools to your tool kit can make a huge difference in how your clients see you. By setting clear expectations and freeing up your staff from manual, repeatable tasks, you can develop strong relationships with new customers and keep your existing customers happy.


This article was provided by our service partner : Connectwise

Incident Response

6 Steps to Build an Incident Response Plan

According to the Identity Theft Research Center, 2017 saw 1,579 data breaches—a record high, and an almost 45 percent increase from the previous year. Like many IT service providers, you’re probably getting desensitized to statistics like this. But you still have to face facts: organizations will experience a security incident sooner or later. What’s important is that you are prepared so that the impact doesn’t harm your customers or disrupt their business.

There's also a new element that organizations, both large and small, have to worry about: the "what." What will happen when I get hacked? What information will be stolen or exposed? What will the consequences look like?

While definitive answers to these questions are tough to pin down, the best way to survive a data breach is to preemptively build and implement an incident response plan. An incident response plan is a detailed document that helps organizations respond to and recover from potential—and, in some cases, inevitable—security incidents. As small- and medium-sized businesses turn to managed services providers (MSPs) like you for protection and guidance, use these six steps to build a solid incident response plan to ensure your clients can handle a breach quickly, efficiently, and with minimal damage.

Step 1: Prepare

The first phase of building an incident response plan is to define, analyze, identify, and prepare. How will your client define a security incident? For example, is an attempted attack an incident, or does the attacker need to be successful to warrant response? Next, analyze the company’s IT environment and determine which system components, services, and applications are the most critical to maintaining operations in the event of the incident you’ve defined. Similarly, identify what essential data will need to be protected in the event of an incident. What data exists and where is it stored? What’s its value, both to the business and to a potential intruder? When you understand the various layers and nuances of importance to your client’s IT systems, you will be better suited to prepare a templatized response plan so that data can be quickly recovered.

Treat the preparation phase as a risk assessment. Be realistic about the potential weak points within the client’s systems; any component that has the potential for failure needs to be addressed. By performing this assessment early on, you’ll ensure these systems are maintained and protected, and be able to allocate the necessary resources for response, both staff and equipment—which brings us to our next step.

Step 2: Build a Response Team

Now it’s time to assemble a response team—a group of specialists within your and/or your clients’ business. This team comprises the key people who will work to mitigate the immediate issues concerning a data breach, protecting the elements you’ve identified in step one, and responding to any consequences that spiral out of such an incident.

As an MSP, one of your key functions will be to bridge the technical work of incident resolution and communication with other partners. Acting as the virtual CISO (vCISO) for your clients' businesses, you'll likely play the role of Incident Response Manager, overseeing and coordinating the response from a technical and procedural perspective.

Pro Tip: For a list of internal and external members needed on a client’s incident response team, check out this in-depth guide.

Step 3: Outline Response Requirements and Resolution Times

From the team you assembled in step two, each member will play a role in detecting, responding, mitigating damage, and resolving the incident within a set time frame. These response and resolution times may vary depending on the type of incident and its level of severity. Regardless, you’ll want to establish these time frames up front to ensure everyone is on the same page.

Ask your clients: “What will we need to contain a breach in the short term and long term? How long can you afford to be out of commission?” The answers to these questions will help you outline the specific requirements and time frame required to respond to and resolve a security incident.

If you want to take this a step further, you can create quick response guides that outline the team’s required actions and associated response times. Document what steps need to be taken to correct the damage and to restore your clients’ systems to full operation in a timely manner. If you choose to provide these guides, we suggest printing them out for your clients in case of a complete network or systems failure.
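Those quick response guides often boil down to a severity-to-timeframe lookup. A minimal sketch follows; the tiers and targets here are hypothetical placeholders to be replaced with whatever you and the client actually agree on:

```python
from datetime import timedelta

# Illustrative SLA matrix; these tiers and times are assumptions,
# meant to be replaced with what you and the client agree on.
SLA_TARGETS = {
    "critical": {"respond": timedelta(minutes=15), "resolve": timedelta(hours=4)},
    "high":     {"respond": timedelta(hours=1),    "resolve": timedelta(hours=8)},
    "normal":   {"respond": timedelta(hours=4),    "resolve": timedelta(days=2)},
}

def targets_for(severity: str) -> dict:
    """Look up the agreed response and resolution targets for a severity."""
    return SLA_TARGETS[severity.lower()]

t = targets_for("critical")
print(f"Respond within {t['respond']}, resolve within {t['resolve']}")
```

Writing the matrix down in one place, whether in code, a ticketing tool, or a printed guide, is what keeps everyone on the same page when an incident is actually in progress.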

Step 4: Establish a Disaster Recovery Strategy

When all else fails, you need a plan for disaster recovery. This is the process of restoring and returning affected systems, devices, and data back onto your client’s business environment.

A reliable backup and disaster recovery (BDR) solution can help maximize your clients’ chances of surviving a breach by enabling frequent backups and recovery processes to mitigate data loss and future damage. Planning for disaster recovery in an incident response plan can ensure a quick and optimal recovery point, while allowing you to troubleshoot issues and prevent them from occurring again. Not every security incident will lead to a disaster recovery scenario, but it’s certainly a good idea to have a BDR solution in place if it’s needed.

Step 5: Run a Fire Drill

Once you’ve completed these first four steps of building an incident response plan, it’s vital that you test it. Put your team through a practice “fire drill.” When your drill (or incident) kicks off, your communications tree should go into effect, starting with notifying the PR, legal, executive leadership, and other teams that there is an incident in play. As it progresses, the incident response manager will make periodic reports to the entire group of stakeholders to establish how you will notify your customers, regulators, partners, and law enforcement, if necessary. Remember that, depending on the client’s industry, notifying the authorities and/or forensics activities may be a legal requirement. It’s important that the response team takes this seriously, because it will help you identify what works and which areas need improvement to optimize your plan for a real scenario.

Step 6: Plan for Debriefing

Lastly, you should come full circle with a debriefing. During a real security incident, this step should focus on dealing with the aftermath and identifying areas for continuous improvement. Take this opportunity for your team to tackle items such as filling out an incident report, completing a gap analysis with the full team, and keeping tabs on post-incident activity.

No company wants to go through a data breach, but it’s essential to plan for one. With these six steps, you and your clients will be well-equipped to face disaster, handle it when it happens, and learn all that you can to adapt for the future.


This article was provided by our service partner : Webroot

 

Veeam

Veeam Availability Console U1 is now available

Managed service providers (MSPs) are playing an increasingly critical role in helping businesses of all sizes realize their digital transformation aspirations. The extensive offerings made available to businesses allow them to shift day-to-day management onto you, the MSP, while they focus on more strategic initiatives. One of the most notable of these services is backup and recovery.

We introduced Veeam Availability Console in November 2017, a FREE, cloud-enabled management platform built specifically for service providers. Through this console, service providers can remotely manage and monitor the Availability of their customers' virtual, physical, and cloud-based workloads protected by Veeam solutions with ease. And, in just a few short months, we've seen incredible adoption across our global Veeam Cloud & Service Provider (VCSP) partner base, with overwhelmingly positive feedback.

Today, I’m happy to announce the General Availability (GA) of Veeam Availability Console U1, bringing with it some of the most hotly requested features to help further address the needs of your service provider business.

Enhanced Veeam Agent support

The initial release of Veeam Availability Console was capable of monitoring Veeam Agents deployed and managed by the service provider through Veeam Availability Console. New to U1 is the ability to achieve greater insights into your customer environments with new support that extends to monitoring and alarms for Veeam Agents that are managed by Veeam Backup & Replication. With this new capability, we’re enabling you to extend your monitoring services to even more Veeam customers that purchase their own Veeam Agents, but still want the expertise that you can bring to their business. And yes, this even includes monitoring support for Veeam Agent for Linux instances that are managed by Veeam Backup & Replication.

New user security group

VCSP partners wanting to delegate Veeam Availability Console access without granting complete control (like local administrator privileges) can now take advantage of the new operator role. This role permits access to everything within Veeam Availability Console essential to the remote monitoring and management of customer environments (you can even assign access to your employees on a company-by-company basis), but excludes access to the Veeam Availability Console server's configuration settings.

ConnectWise Manage integration

We’re introducing native integration with ConnectWise Manage. Through this new, seamless integration (available in the plugins library tab), the management, monitoring and billing of Veeam Availability Console-powered cloud backup and Disaster Recovery as a Service (DRaaS) can now be consolidated with your other managed service offerings into the single pane of glass that is ConnectWise Manage. This integration makes it easier and more efficient to expand your services portfolio while making administration of multiple, differing managed services much more efficient.

Matt Baldwin, President of Vertisys said, “This integration is exactly what my business needs to streamline our managed backup and DRaaS offering. The interface is clean and intuitive with just the right number of features. We project a yearly savings of 50 to 60 hours.”

Let’s take a closer look at some of the integration points between Veeam Availability Console and ConnectWise Manage.

Mapping companies

First, the integration eliminates a lot of manual work by automatically synchronizing and mapping companies in ConnectWise Manage with those in Veeam Availability Console. Automatic mapping is matched on company name. Before mapping is fully complete, Veeam Availability Console lets you review what it has automatically mapped before committing to the synchronization. If no match is found, mapping can be completed manually to an existing company or through the creation of a new one, with the option to send login credentials for the self-service customer portal, too.
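Veeam's exact matching logic isn't documented here, so purely as a conceptual sketch: name-based auto-mapping typically normalizes company names (case, punctuation, common legal suffixes) before comparing them, with anything left unmatched falling back to manual mapping. The normalization rules below are illustrative assumptions:

```python
import re

def normalize(name: str) -> str:
    """Normalize a company name for matching: lowercase, strip
    punctuation and common legal suffixes (illustrative list)."""
    name = name.lower()
    name = re.sub(r"[^\w\s]", "", name)
    for suffix in (" inc", " llc", " ltd", " corp"):
        if name.endswith(suffix):
            name = name[: -len(suffix)]
    return name.strip()

def auto_map(cw_companies, vac_companies):
    """Return (matched, unmatched); unmatched names need manual mapping."""
    index = {normalize(n): n for n in vac_companies}
    matched, unmatched = {}, []
    for name in cw_companies:
        key = normalize(name)
        if key in index:
            matched[name] = index[key]
        else:
            unmatched.append(name)
    return matched, unmatched

print(auto_map(["Acme, Inc.", "Globex"], ["ACME Inc"]))
```

This is why "Acme, Inc." in one system can still line up with "ACME Inc" in the other, and why a review step before committing the synchronization is valuable.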

Ticket creation

The integration also enables you to resolve issues before they impact your customers' business through automatic ticket creation in ConnectWise Manage from Veeam Availability Console alarms. You can specify which of the available alarms within Veeam Availability Console are capable of triggering a ticket (e.g., failed backup, exceeded quota), and to which service board within ConnectWise Manage the ticket is posted. You can also set delays (e.g., 1 minute, 5 minutes, 15 minutes) between the alarm occurring and the ticket posting, so issues like a temporary connectivity loss that self-resolves don't trigger a ticket immediately. Every ticket created in ConnectWise Manage is automatically linked to the corresponding configuration, such as the computer managed by Veeam Availability Console that raised the alarm. This makes it incredibly easy for support engineers to find which component failed and where to fix it. The integration also works in reverse: when a ticket is closed within ConnectWise Manage, the corresponding alarm in Veeam Availability Console is resolved.
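That delay-before-ticketing behavior is essentially a debounce. A hedged, conceptual sketch follows; the function names and signature are hypothetical, not Veeam's or ConnectWise's actual API:

```python
import time

def create_ticket_if_unresolved(alarm, is_resolved, delay_seconds, post_ticket):
    """Debounce sketch: wait out the configured delay, then open a
    ticket only if the alarm hasn't self-resolved in the meantime.
    (Names and signature are hypothetical, not a real vendor API.)"""
    time.sleep(delay_seconds)
    if is_resolved(alarm):
        return None  # transient issue that cleared itself; no ticket
    return post_ticket(alarm)

# Toy usage: a connectivity blip clears itself, a failed backup does not.
transient = {"connectivity-blip"}
ticket = create_ticket_if_unresolved(
    "backup-failed",
    is_resolved=lambda a: a in transient,
    delay_seconds=0,          # e.g. 60 or 300 in practice
    post_ticket=lambda a: f"TICKET:{a}",
)
print(ticket)
```

The design choice is a trade-off: a longer delay filters out more noise from your service board, at the cost of a slightly later first response on genuine incidents.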

Billing

The final part of the integration extends to billing, reducing complexities for you and your customers by consolidating invoices for all the managed services in your portfolio connected to ConnectWise Manage into a single bill. Not only this, but the integration allows for the automatic creation of new products in ConnectWise Manage, or mapping to existing ones. Service providers can select which agreement Veeam Availability Console-powered services should be added to on a per-customer basis, with agreements updated automatically based on activity, quota usage, etc.

Enhanced scalability

Finally, we’ve enhanced the scalability potential of Veeam Availability Console, enabling you to deliver your services to even more customers. The scalability improvements specifically align to the supported number of managed Veeam Backup & Replication servers, and this is especially useful when paired with the enhanced Veeam Agent support discussed earlier. This ensures optimal operation and performance when managing up to 10,000 Veeam Agents and up to 600 Veeam Backup & Replication servers, protecting 150-200 VMs and Veeam Agents each.


This article was provided by our service partner : veeam.com

3 MSP Best Practices for Protecting Users

Cyberattacks are on the rise, with UK firms being hit, on average, by over 230,000 attacks in 2017. Managed service providers (MSPs) need to make security a priority in 2018, or they will risk souring their relationships with clients. By following three simple best practices (user education, backup and recovery, and patch management), your MSP can enhance security, mitigate overall client risk, and grow revenue.

User Education

An effective anti-virus is essential to keeping businesses safe; however, it isn't enough anymore. Educating end users through security awareness training can reduce the cost and impact of user-generated infections and breaches, while also helping clients meet the EU's new GDPR compliance requirements. Cybercriminals' tactics are evolving and increasingly rely on user error to circumvent security protocols. Targeting businesses through end users via social engineering is a rising favorite among newer methods of attack.

Common social engineering attacks include:

  • An email from a trusted friend, colleague, or contact whose account has been compromised, containing a compelling story with a malicious link or download. For example, a managing director's email gets hacked and the finance department receives an email to pay an outstanding "invoice".
  • A phishing email, comment, or text message that appears to come from a legitimate company or institution. The messages may ask you to donate to charity, ‘verify’ information, or notify you that you’re the winner in a competition you never entered.
  • A fraudster leaving a USB drive around a company's premises, hoping a curious employee will insert it into a computer and give the attacker access to company data.

Highly topical, relevant, and timely real-life educational content can minimize the impact of security breaches caused by user error. By training clients on social engineering and other topics including ransomware, email, passwords, and data protection, you can help foster a culture of security while adding serious value for your clients.

Backup and Disaster Recovery Plans

It’s important for your MSP to stress the importance of backups. If hit with ransomware without a secure backup, clients face the unsavory options of either paying up or losing important data. Offering clients automated, cloud-based backup makes it virtually impossible to infect backup data and provides additional benefits, like a simplified backup process, offsite data storage, and anytime/anywhere access. In the case of a disaster, there should be a recovery plan in place. Even the most secure systems can be infiltrated. Build your plan around business-critical data, a disaster recovery timeline, and protocol for disaster communications.

Things to consider for your disaster communications
  • Who declares the disaster?
  • How are employees informed?
  • How will you communicate with customers?

Once a plan is in place, it's important to monitor and test that it has been implemented effectively. A common failure occurs when companies never test their backups; disaster strikes, and only then do they discover they cannot restore their data. A disaster recovery plan should be tested regularly and updated as needed. A plan isn't effective, or set in stone, just because it has been written down.
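One simple way to make backup testing routine is to restore to a scratch location and verify checksums against the source. Here's a minimal sketch assuming file-level backups; real BDR products have their own restore-verification features, so treat this only as an illustration of the principle:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large backup files don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(source: Path, restored: Path) -> bool:
    """The restore test passes only if the restored copy matches the source."""
    return sha256_of(source) == sha256_of(restored)
```

Scheduling a check like this (and alerting on failure) turns "we have backups" into "we have verified we can restore," which is the claim that actually matters when disaster strikes.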

Patch Management

Consider it an iron law: patch and update everything immediately following a release. As soon as patches and updates are released and tested, they should be applied for maximum protection. The vast majority of updates are security related, so systems need to be kept up to date. Outdated technology, especially an operating system (OS), is one of the most common weaknesses exploited in a cyberattack. Without updates, you leave browsers and other software open to ransomware and exploit kits. By staying on top of OS updates, you can prevent extremely costly cyberattacks. For example, in 2017 only 15% of files seen on Windows 10 were deemed to be malware, compared with 63% on Windows 7. These figures and more can be found in Webroot's 2018 Threat Report.

Patching Process

Patching is a never-ending cycle, and it's good practice to audit your existing environment by creating a complete inventory of all production systems in use. Remember to standardize systems on the same operating systems and application software; this makes the patching process easier. Additionally, assess vulnerabilities against your inventory and control lists by separating the vulnerabilities that affect your systems from those that don't. This will make it easier to classify and prioritize vulnerabilities, as each risk should be assessed by the likelihood of the threat occurring, the level of vulnerability, and the cost of recovery. Once you've determined which vulnerabilities are of the highest importance, develop and test the patch. The patch should then be deployed without disrupting uptime; an automated patch system can help with the process.
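That prioritization step can be as simple as scoring each vulnerability on the three factors just mentioned: likelihood, level of vulnerability, and cost of recovery. The sketch below is purely hypothetical; the 1-5 scales, the formula, and the example entries are assumptions, not a standard framework:

```python
# Hypothetical risk scoring for patch prioritization.
# The 1-5 scales and the formula are illustrative assumptions.

def risk_score(likelihood: int, severity: int, recovery_cost: int) -> int:
    """Each factor is rated 1 (low) to 5 (high); higher totals patch first."""
    return likelihood * severity + recovery_cost

# Example inventory entries (entirely made up for illustration).
vulns = [
    {"id": "web-server-rce", "likelihood": 5, "severity": 5, "recovery_cost": 4},
    {"id": "kiosk-app-dos",  "likelihood": 2, "severity": 2, "recovery_cost": 1},
]

# Patch the highest-scoring vulnerabilities first.
queue = sorted(
    vulns,
    key=lambda v: risk_score(v["likelihood"], v["severity"], v["recovery_cost"]),
    reverse=True,
)
print([v["id"] for v in queue])
```

Even a crude score like this forces the conversation about which systems get patched first, which is the real point of the exercise.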

Follow these best practices and your MSP can go a lot further toward delivering the security that your customers increasingly need and demand. Not only will you improve customer relationships, but you'll also position your MSP as a higher-value player in the market, ultimately fueling growth. Security is truly an investment that growth-minded MSPs can't afford to ignore.


This article was provided by our service partner : Webroot

Technology Teams

Defining the Value of Technology Teams

Technology Teams are made up of a lot more than just the service technicians working with your customers. Every Technology Team is made up of a combination of people that account for every step of the Customer Journey. Sales, finance, even marketing…they’re all a part of your Technology Teams and enable you to reach your clients, making their jobs and lives a little easier and helping you stay ahead of technology.

Technology Teams are formed to deliver a unique set of solutions and services. Within one company, multiple Technology Teams can combine to form a resilient Technology Organization. ConnectWise provides a tailored experience to fit the customer journey by turning the ConnectWise suite into a platform of microservices. Building on the foundation of the Solutions Menu, we will focus on Technology Workers.

Building Value

As a business with your sights set on current and future success, you have to find ways to build resiliency into your business. A key way to do this is by building out multiple Technology Teams to continuously increase and diversify the value you offer to your clients. The more you can do to cover their needs, now and into the future, the more you’ll be able to serve the needs of your current customers and attract new ones.

Get Specialized

So why not just have one big team in your company, with every resource managing all of the information needed for each customer? Every Technology Team is going to have a unique approach to solving customer problems, whether in sales, services, or billing, and you'll want people dedicated to making sure those unique approaches are supported, instead of overwhelming your team with the heavy load of understanding everything about every one of your customers.

No one can be a master of everything, so allow your Technology Teams to focus only on expertise in their specific area. By dividing your business efforts to focus on each specific Technology Team, you’ll be more efficient, your team will feel more in control, and your customers will feel like you really understand their needs.

Take the Lead

Once your Technology Teams are leading the way in meeting your customers’ varied—and growing—needs, they’ll be responsible for guiding your customers through every part of the customer journey.

Mastering each step of the customer journey for each Technology Team enables them to provide excellent customer service, laying the groundwork for long-term relationships that keep your customers happy and loyal.

Where to Start

Fortunately, you’re probably already doing this without realizing it. Do you have a list of services you offer? Those probably line up pretty nicely to some of the Technology Teams already. Now you’ll just need to conduct a gap analysis to find out what you’ve got covered and what still needs to have resources put toward it.

A gap analysis looks at your current performance to help you pinpoint the difference between your current and ideal states of business. Get started by answering these three deceptively simple questions with input from your team:

  • Where are we now?
  • Where do we want to be?
  • How do we get there (close the gap)?

Keep working toward full coverage for every Technology Team your customers are looking for, and toward seeing every client through the steps of the customer journey. Before long, you'll be meeting and exceeding your business goals.


This article was provided by our service partner : Connectwise