Managed VoIP

Considerations when Picking a Managed VoIP PBX

Not all things are created equal, and when considering a new phone system for your business, not all cloud-based managed VoIP providers are the same. There can be huge differences among hosted VoIP providers, so understand them before you sign a contract.

Features – What features are most important to your business? Does your business need an auto attendant, voicemail delivered to email, or mobile twinning (sending calls to both a cell phone and a desk phone at the same time)? Does the receptionist want to see who is on the phone? How about “hot desking”, the ability to log into anyone’s phone and have it appear as your own? This feature works great for medical practices that have rotating staff working the front desk. Don’t forget to ask office workers what features they could use.

Equipment – What about the brand of phones that are used? Is the equipment proprietary, or can it be used with other managed VoIP service providers? Should you purchase the equipment or rent each handset, and what are the advantages of each? Make sure you are getting quality VoIP phones from a reputable manufacturer; the last thing you want is to find out the phones you bought are not good quality. Does each user on the system need a fancy phone with lots of features? Most employees only use two or three features. Do you really need a cool-looking conference room phone, or will a basic handset do the trick? Many newer phones have excellent speakerphones, so a basic handset may work fine. A good provider should be able to offer multiple phone options as your business grows and expands.

Pricing – Many providers offer confusing or widely different pricing options. Some offer unlimited plans that may be simple to understand but leave you paying for features you don't need. Another consideration is whether to rent or buy phones. For some customers it makes sense to buy, but what happens when a phone breaks, and who is responsible? The cost of renting phones has dropped dramatically; however, pricing and features vary greatly. Make sure you understand how the company's long-distance calling is priced; contrary to what many believe, hosted VoIP is not free phone service.

Call Quality – This is where customers get burned: poor VoIP call quality leads to disappointment. It is important to know the difference between BYOB (bring your own bandwidth) and “managed VoIP” delivered over a private MPLS data network. Some carriers provide an extra layer of call quality assurance by using a managed router. Make sure you know the difference between managed and unmanaged services; there can be a big difference in call quality.
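
If you want a rough, do-it-yourself feel for what your current connection can deliver before talking to providers, a quick latency and jitter check is a reasonable starting point. Below is a minimal Python sketch that samples TCP round-trip times to a placeholder provider hostname and port; it assumes the endpoint accepts TCP connections on that port, and it is no substitute for the SIP/RTP-aware testing a good provider will run for you.

```python
"""Rough sketch: sample round-trip latency and jitter to a provider's
voice endpoint. Uses TCP connects because plain ICMP needs raw sockets;
the hostname and port below are placeholders."""
import socket
import statistics
import time

def tcp_rtt_samples(host: str, port: int, count: int = 10) -> list[float]:
    samples = []
    for _ in range(count):
        start = time.perf_counter()
        # A failed connection will raise -- that alone tells you something.
        with socket.create_connection((host, port), timeout=2):
            pass
        samples.append((time.perf_counter() - start) * 1000)  # milliseconds
        time.sleep(0.2)
    return samples

if __name__ == "__main__":
    rtts = tcp_rtt_samples("sip.example-provider.com", 5060)  # placeholder host
    print(f"avg {statistics.mean(rtts):.1f} ms, "
          f"jitter (stdev) {statistics.stdev(rtts):.1f} ms")
```

Commonly cited rules of thumb put acceptable one-way latency under roughly 150 ms and jitter under roughly 30 ms; if a plain internet connection can't get close to that, a managed connection is worth the premium.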

Vendor Experience – This is one of the most important considerations when evaluating a managed VoIP phone system. VoIP (Voice over Internet Protocol) has been around for many years, and many service providers are now selling hosted VoIP via the internet, out of car trunks, basements and garages. It would be disastrous for a business if the phone company that controlled its phone numbers went out of business. Make sure you find out how long the hosted PBX provider has been in business, how many customers they support and the types of customers they serve.

Disaster Recovery – It is very important to make sure you understand the provider's network and how many POPs (points of presence) they own and manage. Does the hosted PBX provider have built-in intelligence that can determine when a business's on-site phones stop working and re-route calls to different numbers? How many network operations centers does the provider have: east and west coast only?

Summary – A managed VoIP PBX offers advanced features previously available only to much larger businesses, all at a great value. Hosted or cloud PBX phone service, compared to traditional solutions, offers no-hassle phone service without ongoing maintenance, service contracts, costly hardware or onsite trip charges. While a hosted PBX offers customers ease of management, an onsite or premise-based PBX can still be a more cost-efficient solution.

Internet Security

Report Uncovers Cloud Security Concerns and Lack of Security Expertise Slows Cloud Adoption

Crowd Research Partners yesterday (28th March 2017) released the results of its 2017 Cloud Security Report revealing that security concerns, lack of qualified security staff and outdated security tools remain the top issues keeping cyber security professionals up at night, while data breaches are at an all-time high.

Based on a comprehensive online survey of over 1,900 cyber security professionals in the 350,000-member Information Security Community on LinkedIn, the report has been produced in conjunction with leading cloud security vendors AlienVault, Bitglass, CloudPassage, Cloudvisory, Dome9 Security, Eastwind Networks, Evident.io, (ISC)2, Quest, Skyhigh, and Tenable.

“While workloads continue to move rapidly into the cloud, security concerns remain very high,” said Holger Schulze, founder of the 350,000-member Information Security Community on LinkedIn. “With a third of organizations predicting cloud security budgets to increase, today’s cloud environments require more than ever security-trained, certified professionals and innovative security tools to address the concerns of unauthorized access, data and privacy loss, and compliance in the cloud.”

Key takeaways from the report include:

  • Cloud security concerns top the list of barriers to faster cloud adoption. Concerns include protection against data loss (57 percent), threats to data privacy (49 percent), and breaches of confidentiality (47 percent).
  • Lack of qualified security staff is the second biggest barrier to cloud adoption, and more than half of organizations (53 percent) are looking to train and certify their current IT staff to address the shortage, followed by partnering with a managed service provider (MSP) (30 percent), leveraging software solutions (27 percent), and hiring dedicated staff (26 percent).
  • As more workloads move to the cloud, organizations are realizing that traditional security tools are not designed for the unique challenges cloud adoption presents (78 percent). Instead, strong security management and control solutions designed specifically for the cloud are required to protect the new, agile paradigm.
  • Visibility into cloud infrastructure is the single biggest security management headache for 37 percent of respondents, moving up to the top spot from being the second ranking operational concern in the previous year.

Download the complete 2017 Cloud Security Report here.

Linux Patch Management

The Importance of Linux Patch Management

In recent news there have been a number of serious vulnerabilities found in various Linux systems. Whilst OS vulnerabilities are a common occurrence, it’s the nature of these that has garnered so much interest. Linux patch management should be considered a priority in ensuring the security of your systems.

The open-source Linux operating system is used by most of the servers on the internet as well as in smartphones, with an ever-growing desktop user base as well.

Open-source software is typically considered to increase the security of an operating system, since anyone can read, re-use and suggest modifications to the source code – part of the idea being that many people involved would increase the chances of someone finding and hopefully fixing any bugs.

With that in mind, let’s turn our sights on the bug known as Dirty COW (CVE-2016-5195), found in October 2016. It is named as such because it exploits a kernel mechanism called “copy-on-write” and falls within the class of vulnerabilities known as privilege escalation, which would allow an attacker to effectively take control of the system.

What makes this particular vulnerability so concerning isn’t the fact that it’s a privilege escalation bug, but rather that it was introduced into the kernel around nine years ago. The bug was discovered by Phil Oester after he captured an exploit already using it in the wild. This means that a reliable means of exploitation is readily available and, due to the bug’s age, applicable to millions of systems.

Whilst Red Hat, Debian and Ubuntu have already released patches, millions of other devices are still vulnerable. Worse still, between embedded versions of the operating system and older Android devices, there are difficulties in applying the updates, or they may not receive any at all, leaving them vulnerable.
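
Staying on top of available fixes is the unglamorous half of the battle. As a simple illustration (assuming a Debian- or Ubuntu-style system with apt; this is not an ESET tool and is no substitute for a proper patch management product), here is a short Python sketch that lists packages with pending upgrades:

```python
#!/usr/bin/env python3
"""Minimal sketch: list packages with pending upgrades on an apt-based
system (Debian/Ubuntu assumed). Illustration only."""
import subprocess

def pending_upgrades() -> list[str]:
    # 'apt list --upgradable' prints one line per package with a newer
    # version available; the first line is a header we skip.
    out = subprocess.run(
        ["apt", "list", "--upgradable"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    return [line.split("/")[0] for line in out[1:] if "/" in line]

if __name__ == "__main__":
    pkgs = pending_upgrades()
    print(f"{len(pkgs)} package(s) awaiting updates")
    for name in pkgs:
        print(" -", name)
```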

Next, let’s have a look at a more recent vulnerability found in Cryptsetup (CVE-2016-4484), which is used to set up encrypted partitions on Linux using LUKS (Linux Unified Key Setup). It allows an attacker to obtain a root initramfs shell on affected systems. At this point, depending on the system in question, it could be used for a number of exploitation strategies according to the researchers who discovered the bug, namely:

  • Privilege escalation: if the boot partition is not encrypted:
    — The boot partition can be used to store an executable file with the SetUID bit enabled, which a local user can later use to escalate privileges.
    — If the boot is not secured, then it would be possible to replace the kernel and the initrd image.
  • Information disclosure: It is possible to access all the disks. Although the system partition is encrypted, it can be copied to an external device and later brute-forced. Obviously, it is also possible to access non-encrypted information on other devices.
  • Denial of service: The attacker can delete the information on all the disks, causing downtime of the system in question.

Whilst many believe the severity and likely impact of this vulnerability have been exaggerated, considering that you need physical or remote console access (which many cloud platforms provide these days), what makes it so interesting is just how it is exploited.

All you need to do is repeatedly hit the Enter key at the LUKS password prompt until a shell appears (approximately 70 seconds later). The vulnerability is the result of incorrect handling of password retries once the user exceeds the maximum number of attempts (three by default).
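
To make the class of bug concrete, here is a toy Python sketch, not the actual cryptsetup code, showing the difference between a retry loop that fails open into a privileged fallback and one that fails closed:

```python
"""Toy illustration (not the actual cryptsetup code) of the class of bug
behind CVE-2016-4484: a retry loop whose failure path drops the user into
a privileged fallback instead of halting."""

MAX_TRIES = 3

def buggy_unlock(prompt_password, unlock, fallback_shell):
    tries = 0
    while tries < MAX_TRIES:
        if unlock(prompt_password()):
            return "unlocked"
        tries += 1
    # BUG: after exhausting retries, fall through to a recovery shell with
    # root privileges instead of rebooting or locking the console.
    return fallback_shell()

def safer_unlock(prompt_password, unlock, halt):
    for _ in range(MAX_TRIES):
        if unlock(prompt_password()):
            return "unlocked"
    # Safer: fail closed -- halt (or reboot) rather than expose a shell.
    return halt()
```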

The researchers also made several notes regarding physical access and explained why this and similar vulnerabilities remain of concern. It’s generally accepted that once an attacker has physical access to a computer, it’s pwned. However, they highlighted that with the use of technology today, there are many levels of what can be referred to as physical access, namely:

  • Access to components within a computer – where an attacker can remove/replace/insert anything including disks, RAM etc. like your own computer
  • Access to all interfaces – where an attacker can plug in any devices including USB, Ethernet, Firewire etc. such as computers used in public facilities like libraries and internet cafes.
  • Access to front interfaces – usually USB and the keyboard, such as systems used to print photos.
  • Access to a limited keyboard or other interface – like a smart doorbell, alarm, fridge, ATM etc.

Their point is that the risks are not limited to traditional computer systems, and that the growing trends around IoT devices will increase the potential reach of similar attacks – look no further than our last article on DDoS attacks since IoT devices like printers, IP cameras and routers have been used for some of the largest DDoS attacks ever recorded.

This brings us back around to the fact that now, more than ever, it’s of critical importance that you keep an eye on your systems and ensure any vulnerabilities are patched accordingly, and more importantly – in a timely manner. Linux patch management should be a core consideration for all IT systems, whether they are servers or workstations, and of course regardless of the operating systems used.

This article was provided by our service partner, ESET.

Managed Storage

Advantages of Managed Storage for Business

Cloud service and managed storage providers offer valuable IT solutions for businesses of all sizes. Originally thought of as more personal than business technology, cloud and managed storage is following in the footsteps of much consumer technology that has been adapted for business use. Many businesses can benefit from comprehensive cloud services – hosted applications, Infrastructure as a Service and more – and the transition often begins with data storage needs.

COST SAVINGS

The first benefit, and perhaps most important in the minds of many business owners, is the cost advantage. Cloud storage is generally more affordable because providers distribute the costs of their infrastructure and services across many businesses.

Moving your business to the cloud eliminates the cost of hardware and maintenance. Removing these capital expenditures and the associated service salaries from your technology expenses can translate into significant cost savings and increased productivity.

SIMPLIFIED CONVENIENCE

All you will need within your office is a computer and an internet connection. Much of your server hardware will no longer be necessary, not only saving you physical space but eliminating the need for maintenance and employee attention. Your managed storage provider will maintain, manage and support the storage environment for your business. This frees up employees who would otherwise cover the tasks necessary for keeping your data safe and your server(s) up and running.

ENHANCED SECURITY

Instead of having hardware within your office, cloud storage is housed in a data center, providing enterprise-level security that is cost-prohibitive for most individual businesses. There is also no single point of failure in the cloud, because your data is backed up to multiple servers. This means that if one server crashes, your data stays safe because it is stored in other locations. The risk of hardware malfunction is minimized because your data is safely stored in redundant locations.
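
A quick back-of-the-envelope calculation shows why keeping multiple copies helps. The sketch below assumes each copy fails independently with the same (made-up) annual probability, which real systems only approximate, but it illustrates how quickly the odds of losing every copy shrink:

```python
"""Back-of-the-envelope sketch: why replication reduces the chance of data
loss. Assumes independent failures with a hypothetical per-copy rate."""

def loss_probability(per_copy_failure: float, copies: int) -> float:
    # Data is lost only if every copy fails.
    return per_copy_failure ** copies

if __name__ == "__main__":
    p = 0.05  # hypothetical 5% annual failure rate per copy
    for n in (1, 2, 3):
        print(f"{n} cop{'y' if n == 1 else 'ies'}: "
              f"{loss_probability(p, n):.6f} chance of losing the data")
```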

MOBILITY OPPORTUNITIES

The mobility benefits provided by the cloud are rapidly increasing for businesses of all sizes. In today’s world of connectivity, we are able to work (and play) whenever and wherever. While you’re waiting for a flight at the airport or at home with a sick child, you can still work – and work efficiently. Before cloud storage came along, working outside the office was problematic and more time consuming than it needed to be. Remember having to save your files on your laptop, then returning to work and needing to transfer your updated files to ensure others had access to the latest version?

This example highlights another one of the advantages of cloud storage – enabling mobility. If you work from multiple devices – i.e. phone, tablet and desktop computer – you won’t have to worry about manually adding the latest file onto each device. Instead, the newest version of your document is stored in the cloud and will be easily accessible from any of your devices.

SCALABLE SERVICE

With cloud storage, you pay for what you use, as you use it. You do not need to anticipate how much storage space you will need for the year and risk paying for unused space or running short. You can adjust the resources available through cloud storage providers and pay based on your current needs, modifying as they change.

Veeam

Five considerations when searching for an off-site backup solution

For a number of years now, Veeam has been talking about the 3-2-1 rule of backups: keep three copies of your data on two different media types, with at least one copy held off site. Traditionally, most organizations have put this into play using on-premises storage and media hardware along with multiple data center locations for the off-site copy. This is where off-site data backup services come into play.
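
As a simple illustration of the rule itself, here is a short Python sketch that checks whether a set of backup copies satisfies 3-2-1; the data structure and field names are made up for the example and are not part of any Veeam API:

```python
"""Minimal sketch: check whether a set of backup copies satisfies the
3-2-1 rule (3 copies, 2 media types, 1 off-site). Illustrative fields only."""
from dataclasses import dataclass

@dataclass
class BackupCopy:
    name: str
    media: str        # e.g. "disk", "tape", "object-storage"
    offsite: bool

def satisfies_3_2_1(copies: list[BackupCopy]) -> bool:
    return (
        len(copies) >= 3
        and len({c.media for c in copies}) >= 2
        and any(c.offsite for c in copies)
    )

if __name__ == "__main__":
    copies = [
        BackupCopy("production", "disk", offsite=False),
        BackupCopy("local backup repository", "disk", offsite=False),
        BackupCopy("cloud repository", "object-storage", offsite=True),
    ]
    print("3-2-1 satisfied:", satisfies_3_2_1(copies))
```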

 
Off-site backup solutions offer numerous benefits to organizations, including increased efficiency and reliability based upon features and capabilities that few companies could afford on their own. There’s also no need to worry about infrastructure maintenance, as that burden lies with the service provider, and the scalability of service providers can be leveraged without an upfront CAPEX spend. Another advantage of off-site backup solutions is accessibility, as the data is reachable from any internet-connected location and device.

 

Since Veeam Backup & Replication v8, Veeam has offered Cloud Connect as a means for the Veeam Cloud & Service Provider (VCSP) partners to provide off-site data backup services. With Veeam Cloud Connect, they can give their customers the ability to leverage cloud repositories to store virtual machines in service provider facilities. By leveraging Veeam Cloud Connect Backup, a number of VCSPs around the world have built off-site backup solutions. The Veeam Cloud & Service Provider directory lists out VCSP partners in your region of choice… but how do you choose between them?

 
Below are five considerations when searching for an offsite backup solution:

1. Data locality and Availability

Data sovereignty is still a major concern for organizations looking to back up off site to the cloud. With the VCSP network being global, there is no shortage of locations to choose from for an off-site repository. Drilling down further, some providers offer multiple locations within a region, which can increase the resiliency and Availability of off-site backups and let you choose multiple repositories to further extend the 3-2-1 rule. It’s also a good idea to do some research into a service provider's uptime and major event history, as this will tell you whether a provider offering the off-site backup service has any history of Availability issues.

2. Recoverability and restore times
It’s hard to defeat the laws of physics, and in searching for an off-site backup solution you should think about how long the data you have in a cloud repository will take to restore. This goes beyond the basics of working out recovery time objectives (RTOs), because taking backups off site means you are at the mercy of the internet connection between you and the restore location, and of the restore capabilities of the service provider. When looking for a suitable off-site backup solution, take into consideration the round-trip time between yourself and the service provider network as well as the throughput between the two sites, making sure you test both upload and download speeds to and from each end.
Note that Veeam-powered off-site backup services can improve recovery times compared to those that rely on tape-based backup due to Cloud Connect repositories at the service provider end being housed on physical disk.
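
A rough way to sanity-check your RTO is to divide the amount of data you would need to pull back by the throughput you actually measure to the provider. The numbers in this Python sketch are hypothetical; plug in your own measurements:

```python
"""Rough sketch: estimate how long a restore from a cloud repository will
take given backup size and measured download throughput."""

def restore_hours(backup_size_gb: float, throughput_mbps: float) -> float:
    size_megabits = backup_size_gb * 1024 * 8   # GB -> megabits (approx.)
    seconds = size_megabits / throughput_mbps
    return seconds / 3600

if __name__ == "__main__":
    for mbps in (50, 200, 1000):
        print(f"2 TB restore at {mbps} Mbps: "
              f"{restore_hours(2048, mbps):.1f} hours")
```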

3. Service provider certifications and SLAs

As with data locality, more and more organizations are looking for off-site backup solutions that meet or match their own certification requirements. This extends beyond common data center standards such as ISO 9001 and 27001 to more advanced regulatory compliance around data retention, and even to service providers abiding by strict security standards. If your organization is in a regulated vertical, such as healthcare with HIPAA, then you may look for an off-site backup solution that is compliant with that standard.
It’s also worth noting that service providers will offer differing service level agreements (SLAs) and this should be taken on board when searching for an off-site backup service. SLAs dictate the level of responsibility a service provider has when it comes to keeping to their promises in terms of services offered. In the case of off-site backup, it’s important to understand what is in place when it comes to integrity and security of data and what is done to guarantee access to your data when required.

4. Hypervisor support

Multi-hypervisor support does come into play when looking at extending off-site backup and at recoverability in the cloud. For example, Veeam Cloud Connect works with both VMware and Microsoft hypervisors, and VCSPs can offer one or both of these platforms from a replication point of view. With Cloud Connect Backup, however, the off-site backup repository is hypervisor agnostic; the cloud repository acts as a simple remote storage target for organizations to back up to. With Veeam Backup & Replication 9.5, you can now replicate from Cloud Connect backups and choose a provider that offers one, the other, or both hypervisors as platform options.

5. Cost

Cost might seem obvious, but given the variety of services offered by service providers, it’s important to understand the differences in pricing models. Some service providers are pure infrastructure (IaaS) providers offering Backup as a Service (BaaS), which means you are generally paying for a VM license and storage, and there might be additional charges for data transfer (although this is fairly rare in the IaaS space). These service providers don’t cover any management of the backups; generally that is handled by managed service providers that wrap service charges on top of the infrastructure charges to offer end-to-end off-site backup solutions.

The five tips above should help you in searching for an off-site backup service. Remember that each service provider offers something slightly different, which means your organization has a choice in matching an off-site data backup service to its specific requirements and needs. These recommendations will also help you navigate the Veeam Cloud & Service Provider partners that leverage Veeam Cloud Connect for their off-site backup offerings.


This article was provided by our service partner, Veeam.

Network Security: OpenDNS

Why are firewalls and antivirus not enough in our fight for the best network security?

Understanding Malicious Attacks to Stay One Step Ahead

Network (firewall) and endpoint (antivirus) defenses react to malicious communications and code after attacks have been launched. OpenDNS observes internet infrastructure before attacks are launched and prevents those malicious internet connections from happening in the first place. Learning the steps of an attack is key to understanding how OpenDNS can bolster your existing defenses.

Each step of the attacker's operation provides an opportunity for network security providers to detect the attack and defend against the intrusion.

Network security - Example malware attacks

A high-level summary of how attacks are laid out:

—> RECON: Many reconnaissance activities are used to learn about the attack target
—> STAGE: Multiple kits or custom code are used to build payloads, and multiple networks and systems are staged to host initial payloads, malware drop hosts and botnet controllers
—> LAUNCH: Various web and email techniques are used to launch the attack
—> EXPLOIT: Both zero-day and known vulnerabilities are exploited, or users are tricked
—> INSTALL: Usually the initial payload connects to another host to install specific malware
—> CALLBACK: Nearly every time, the compromised system calls back to a botnet server
—> PERSIST: Finally, a variety of techniques are used to repeat steps 4 to 7

We do not have to understand every tool and technique that attackers develop. The takeaway is simply to understand that multiple, and often repeated, steps are necessary for attackers to achieve their objectives, undermining your existing network security tools.

Compromises happen in seconds. Breaches start minutes later and can continue undetected for months. Operating in a state of continuous compromise may be normal for many, but no one should accept a state of persistent breach.

Existing defenses cannot block all attacks. 

Firewalls and AntiVirus stop many attacks during several steps of the ‘kill chain’, but the volume and velocity of new attack tools and techniques enable some to go undetected for minutes or even months.

Network security - Firewall AntiVirus view of malware attacks

Firewalls know whether the IP address of a network connection matches a blacklist or reputation feed. Yet providers must wait until an attack is launched before collecting and analyzing a copy of the traffic; only then does the provider gain intelligence about the infrastructure used.

Antivirus solutions know whether the hash of a payload matches a signature database or heuristic. Yet providers must wait until a system is exploited before collecting and analyzing a sample of the code, and only then do they gain intelligence about the payload used.

The OpenDNS Solution

Stop 50 to 98 percent more attacks than firewalls and antivirus alone by pointing your DNS traffic to OpenDNS.
OpenDNS does not wait until attacks launch, malware installs, or infected systems call back to learn how to defend against an attack. By analyzing a cross-section of the world’s internet activity, OpenDNS continuously observes new relationships forming between domain names, IP addresses, and autonomous system numbers (ASNs). This visibility enables us to discover, and often predict, where attacks are staged and will emerge before they even launch.
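
The core idea of enforcing policy at the DNS layer can be illustrated with a few lines of Python: check a hostname against a blocklist before resolving and connecting. A real service like OpenDNS does this at the resolver with globally sourced, continuously updated intelligence; the blocklist and domains below are invented for the example:

```python
"""Minimal sketch of DNS-layer filtering: refuse to resolve hostnames on a
local blocklist. Illustrative only; the blocked domains are made up."""
import socket

BLOCKLIST = {"malicious-callback.example", "phishing-login.example"}

def resolve_if_allowed(hostname: str) -> list[str]:
    if hostname in BLOCKLIST:
        raise PermissionError(f"{hostname} is blocked by policy")
    # Plain A/AAAA lookup via the system resolver.
    return [info[4][0] for info in socket.getaddrinfo(hostname, None)]

if __name__ == "__main__":
    for name in ("example.com", "malicious-callback.example"):
        try:
            print(name, "->", resolve_if_allowed(name))
        except PermissionError as err:
            print(err)
```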

Network security - OpenDNS view of malware attacks

Why keep firewalls and antivirus at all?

Once we prove our effectiveness, we are often asked: “can we get rid of our firewall or antivirus solutions?” While these existing defenses cannot stop every attack, they are still useful—if not critical—in defending against multi-step attacks. A big reason is threats never expire—every piece of malware ever created is still circulating online or offline. Signature-based solutions are still effective at preventing most known threats from infecting your systems no matter by which vector it arrives: email, website or thumbdrive. And firewalls are effective at defending both within and at the perimeter of your network. They can detect recon activities such as IP or port scans, deny lateral movements by segmenting the network, and enforce access control lists.

“One of AV’s biggest downfalls is the fact that it is reactive in nature; accuracy is heavily dependent on whether the vendor has already seen the threat in the past. Heuristics or behavioral analysis can sometimes identify new malware, but this is still not adequate because even the very best engines are still not able to catch all zero-day malware.”

Your Solution:
Re-balance investment of existing versus new defenses:
Here are a couple examples of how many customers free up budget for new defenses.

• Site-based Microsoft licenses entitle customers to signature-based protection at no extra cost. Microsoft may not be the #1 ranked product, but it offers good protection against known threats. OpenDNS defends against both known and emergent threats.

• NSS Labs reports that SSL decryption degrades network performance by 80%, on average. OpenDNS blocks malicious HTTPS-based connections by defending against attacks over any port or protocol. By avoiding decryption, appliance lifespans can be greatly extended.

“OpenDNS provides a cloud-delivered network security service that blocks advanced attacks, as well as malware, botnets and phishing threats regardless of port, protocol or application. Their predictive intelligence uses machine learning to automate protection against emergent threats before your organization is attacked. OpenDNS protects all your devices globally without hardware to install or software to maintain.”

Managed Security Services

Managed Security Services

“The Internet of Things is the biggest game changer for the future of security,” emphasizes David Bennett, vice president of Worldwide Consumer and SMB Sales at Webroot. “We have to figure out how to deal with smart TVs, printers, thermostats and household appliances, all with Internet connectivity, which all represent potential security exposures.”

Simply put, the days of waiting for an attack to happen, mitigating its impact and then cleaning up the mess afterward are gone. Nor is it practical to just lock the virtual door with a firewall and hope nothing gets in – the stakes are too high. The goal instead must be to predict potential exposure, and that requires comprehensive efforts to gather threat intelligence. According to Bennett, such efforts should be:

  • Real time: Because the velocity and volume of threats increases on a daily basis, the technologies used to protect systems must be updated by the minute. The ability to adjust to the nature and type of new threats as they appear is key. Data should be aggregated from sources globally and delivered as actionable information to the security professional.
  • Contextual: Data must be parsed through sophisticated computer analytics to ensure humans can make decisions based on actionable intelligence. An analyst has to be given data with pre-connected dots in order to act quickly. There’s little time for onsite security professionals to analyze reams of data when they suspect an attack is underway. By the time they figure out what’s going on, the damage is done.
  • Big data-driven: It’s not enough for a company to understand only what’s happening in its own environment; an attack on one of its competitors or peers could mean it’s next. To analyze complex threat patterns, threat intelligence technology must be cloud-based and should aggregate activities from across companies and across geographies.
    “Security professionals of the future must act like intelligence officers or analysts,” Bennett notes. “They have to consume information that’s already been parsed for them, and make decisions based on that intelligence. Success will depend on how they are fed the data. How is it presented? Is it relevant? Have the irrelevant data points already been removed? Only then will they be able to make decisions in time to prevent breaches.”

What This Means for MSPs

MSP services are particularly valuable to SMBs that lack the internal resources needed to effectively manage complex systems, or for any customers seeking to defer capital expenses in favor of leveraging their operational budgets. As such, cybersecurity is a perfect discipline to utilize the managed services model. “The biggest untapped opportunity for our partners today is providing security as a managed service,” observes Bennett. “Users are overwhelmed and just not capable of keeping on top of the rapid changes in the nature of threats.”

MSPs that offer managed security services address one of the major problems users face today: the lack of access to talented security professionals. Especially for SMB customers, finding and competing for talent with larger firms can be daunting. “Hiring and retaining the right personnel should not be a vulnerability in and of itself,” says Bennett. “Users who leverage managed security services remain protected through transitions in their IT staff and lower the risk of losing institutional knowledge critical to their security procedures. In addition, managed security services represents one of the largest and most profitable growth opportunities today for solution providers.”

MSPs that include Webroot SecureAnywhere Business Endpoint Protection solutions as part of their service offerings to clients are ideally positioned to take full advantage of these growth opportunities. In effect, Webroot technology gives MSPs their own dedicated security firm to monitor their customers’ environments. As Bennett explains, “We don’t just collect data—we scrub it, make correlations globally, and pass on exactly what our customers need to reduce exposures. It’s a big data approach to security, and it’s the only effective means to combat the ever-changing threats companies face.”

Veeam

Veeam: Your cloud backup customization options

Cloud backup is a viable option for many use cases, including but not limited to storage, critical workload management, disaster recovery and much more. And as we have covered earlier in this series, it can also be made secure and reasonably priced, and migration can be simplified. We found customization to be one of the major cloud concerns in last year’s end-user survey. Let’s dive into where customization and the cloud meet.

How customizable is the cloud?

In order to get the most out of their cloud investment, businesses need to be able to tailor the cloud to their exact needs. And even though cloud customization seems to be a concern, there is a general consensus in the IT community that the cloud is customizable. When you consider the premise of AWS, Azure and other IaaS offerings that allow you to customize services specifically to your needs from day zero, it’s easy to see why. The cloud and customization go hand in hand in some respects. Customization is also a key component of configuring cloud security: being able to customize your cloud environment to meet exact compliance needs, depending on your industry or on the region or country where your data resides, makes customization a vital capability within the cloud.

Supreme scalability of cloud

Talking about cloud customization would not be possible without also mentioning the flexibility and scalability that come with using the cloud over on-premises infrastructure. If operations are conducted on-premises, scaling up typically means buying new servers and requires time and resources to deploy. The cloud offers pay-as-you-go models, and scaling happens almost instantly with no manual labor required. If there is a peak in activity, cloud resources can be added and then scaled back down when business activity returns to normal. This ability to rapidly scale up or down can give a business true operational agility.

Customizing your backup data moving to the cloud

Depending on the data management software you use, you can take a highly customized approach to handling data moving to the cloud. Veeam offers a great deal of flexibility in the frequency, granularity and ease of backing up data to the cloud, helping you meet 15-minute RPOs, which in turn influence RTOs. The same Veeam backup and replication products can also be used as migration tools, making the task of moving to the cloud easier than it may first appear. Let’s go over existing and new Veeam cloud backup offerings to see how they can be used to customize various aspects of cloud backup.

Veeam and cloud customization

First and foremost: backup and replication, the two functions used in virtually any environment to ensure the safety and redundancy of your data. You can send your data off site with Veeam Cloud Connect to a disaster recovery site, or you can create an exact duplicate of your production environment with as little as 15 minutes between the two. And you can use these same options to get your data into the cloud, be it a cloud repository for storing backups or a secondary site via DRaaS, all within a single Veeam Backup & Replication console.

Since Veeam Cloud Connect operates over the network, we’ve made sure to provide encrypted traffic and built-in WAN acceleration to optimize every bit of data that is sent. WAN acceleration minimizes the amount of data sent by excluding blocks that were already processed and can be taken from the cache on site. That comes in really handy during migrations, since you may be processing a lot of similar machines and files. This acceleration is included in the Azure proxy, along with other optimizations that help reduce network traffic usage.
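
The general technique behind this kind of source-side optimization is easy to sketch: split the data into blocks, hash each block, and transfer only blocks whose hashes haven't been seen before. The Python below illustrates the concept only; it is not Veeam's implementation, and the block size is arbitrary:

```python
"""Conceptual sketch of source-side deduplication with a cache: only blocks
whose hashes are new get transferred. Illustration of the general idea."""
import hashlib

BLOCK_SIZE = 1024 * 1024  # 1 MiB blocks, arbitrary for the example

def blocks_to_send(data: bytes, seen_hashes: set[str]) -> list[bytes]:
    to_send = []
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in seen_hashes:      # new block: transfer it
            seen_hashes.add(digest)
            to_send.append(block)
        # else: block already known at the target, send only a reference
    return to_send

if __name__ == "__main__":
    cache: set[str] = set()
    first = blocks_to_send(b"A" * BLOCK_SIZE + b"B" * BLOCK_SIZE, cache)
    second = blocks_to_send(b"B" * BLOCK_SIZE + b"C" * BLOCK_SIZE, cache)
    print(len(first), "block(s) sent on first run,", len(second), "on second")
```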

Additionally, you can use Direct Restore to Microsoft Azure to gain an extra level of recoverability. First, set up and pre-allocate the Azure services; then simply restore your machine to any point in time in a couple of clicks. What’s really cool is that you’re not limited to restoring only virtual workloads, but can migrate physical machines as well!

The Veeam Agent for Microsoft Windows (beta version soon available) and the now-available Veeam Agent for Linux will help you create backups of your physical servers so that you can store them in Veeam repositories for further management, restores and migration, should you ever need to convert physical workloads to virtual or cloud. Not only does Veeam provide multiple means of getting data to the cloud, but you can also back up your Microsoft Office 365 data and migrate it to your local Exchange servers, and vice versa, with Veeam Backup for Microsoft Office 365. Many companies have moved their email infrastructure to the cloud, so Veeam provides the ability to have a backup plan in case something happens on the cloud side. That way you’ll always be able to retrieve deleted items and retain access to your email infrastructure.

All these instruments are directly controlled by you, and most of them can be obtained through a service provider to take the management off your plate. When working with a provider, it is important to ask what can be customized or configured in order to ensure the cloud environment can meet your specific needs. This makes working with a cloud service provider a very valuable asset, as they can give you expert advice, reduce complications and set expectations when it comes to cloud environments and their ability to be customized.


This article was provided by our service partner: Veeam

How Mobile Device Management Can Reduce Mobile Security Risks

Today’s modern workplace is home to users who carry their work and personal lives in their pockets. From smartphones to tablets, mobile devices keep us connected and always working. Users can work from anywhere, but that means opening the door to security threats if mobile devices aren’t properly protected. Mobile device management (MDM) is a service that helps provide that protection.

The Bad News

Mobile security risks are real, and they are expanding every day. Public Wi-Fi networks open the door to hackers who can take advantage of security holes and access confidential company information stored on mobile devices. If a mobile device becomes infected with malware, the malware could spread through the entire network.

The portability of mobile devices means a greater risk for loss and theft. When unprotected devices disappear, they put access to sensitive business information in unauthorized hands. No business wants to worry about the repercussions of outside access to proprietary information. Just picture the headlines: CEO’s Lost iPhone Leads to Customer Data Breach.

The Good News

Mobile device management (MDM) solutions can help protect against the threats that are out there. Mobile Device Management helps you make sure critical information is protected no matter how your clients’ employees access it.

MDM gives you the ability to enforce minimum security requirements on mobile devices that access your client networks, which helps protect against data compromise. Lost devices can be found with geo-location tracking. If they don’t turn up, the devices can be remotely wiped to protect data with just a few mouse clicks. Security settings can be adapted to require passcodes, set a time before auto-lock, auto-wipe devices after a maximum number of failed login attempts, and more.
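
Conceptually, an MDM policy check boils down to comparing a device's reported state against your minimum requirements. The Python sketch below is purely illustrative; the fields and thresholds are hypothetical and not taken from any particular MDM product:

```python
"""Illustrative sketch of an MDM-style policy check: given the reported
state of a device, list the minimum requirements it fails to meet."""
from dataclasses import dataclass

@dataclass
class DeviceState:
    has_passcode: bool
    auto_lock_minutes: int
    failed_login_attempts: int
    encrypted: bool

def policy_violations(device: DeviceState) -> list[str]:
    issues = []
    if not device.has_passcode:
        issues.append("passcode required")
    if device.auto_lock_minutes > 5:
        issues.append("auto-lock must be 5 minutes or less")
    if not device.encrypted:
        issues.append("storage encryption required")
    if device.failed_login_attempts >= 10:
        issues.append("failed-attempt limit reached: remote wipe candidate")
    return issues

if __name__ == "__main__":
    phone = DeviceState(has_passcode=True, auto_lock_minutes=15,
                        failed_login_attempts=2, encrypted=True)
    print(policy_violations(phone) or "compliant")
```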

The point is, MDM keeps your clients’ networks better protected. The extra layer of data security gives your clients peace of mind and helps you maintain your role as a trusted advisor. With that in mind, what do you need to look for in an MDM solution?

If you really want to get the most from your MDM solution, look for one that’s going to work easily with your existing solutions. Integration with your remote monitoring and management (RMM) platform and other automation solutions will save you time in setup and implementation, and will enable your technicians to manage mobile devices through the same interface through which they’re already managing your clients’ computers.

In short, the right MDM solution means you’ll be better able to protect vital data from mobile security risks while keeping your clients’ users connected to the information they need to do their jobs.

Now you know what MDM can do to keep your clients safe from mobile threats. Check back next week for tips to help you explain the benefits of Mobile Device Management to your clients and make the sale.


This article was provided by our service partner, LabTech.

FreePBX

Set Up Extensions on a Cloud Based FreePBX

One of the best things about modern VoIP systems is how flexible they are when it comes to how you deploy them. You can use them on an appliance, virtualized, or on a cloud-based service like Amazon AWS, Google Cloud, or Microsoft Azure. Each configuration has a slightly different technique for making everything work, and one of the first challenges is registering extensions. For this post, we’ll focus on the general concepts of setting up extensions for a cloud-based (hosted) solution with FreePBX.
If you’ve never heard of FreePBX and you’re in the market for a new VoIP system, you should start doing a little research (and also call VoIP Supply). To be brief, it’s a turn-key PBX solution that uses Asterisk, a free SIP-based VoIP platform. Sangoma, the makers of FreePBX, have created a web user interface for Asterisk to simplify configuration. They’ve also added an entire security architecture and a lot of features above and beyond what pure Asterisk (no user interface) provides, such as Endpoint Manager, a way to centrally configure and manage IP phones.

FreePBX isn’t the only product out there that does this (there are actually quite a few), but FreePBX has really raised the bar in the past few years and has become a very serious solution for the enterprise. Don’t let the word “Free” in FreePBX lead you to think it’s a cheaply made system.

 

FIRST, A LITTLE ABOUT VOIP CLOUD SECURITY:

There’s a huge benefit to hosting a VoIP system in the cloud: you have to deal with very little NAT. Why is that good? SIP and NAT generally do not cooperate with each other. It’s very common for SIP header information to be incorrect without a device such as a session border controller (SBC) or a SIP application layer gateway (SIP ALG). When deploying a system on premises, you will always need to port forward SIP (UDP 5060) and RTP (UDP 10,000-20,000) at a minimum, and you’ll need to make sure these ports are open on your firewall. This directs SIP traffic to your phone system, much as you would for a web or mail server.

Of course, there are security concerns when exposing SIP directly to the internet, and the same concerns apply to a hosted system. But when dealing with a cloud solution, you are generally given a 1:1 (one-to-one) NAT from your external IP address to the VoIP system's internal IP. A 1:1 NAT ensures all traffic is sent to the system without any additional rules. Some cloud services place an external IP address directly on your server, which is even simpler.
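
If you want to confirm from the outside that SIP is actually reachable once your NAT or public IP is in place, a quick way is to send a SIP OPTIONS request and see whether anything answers. The Python sketch below uses a placeholder hostname; note that some systems deliberately drop unauthenticated OPTIONS, so a timeout doesn't always mean the port is closed:

```python
"""Quick-and-dirty sketch: send a SIP OPTIONS request over UDP to check
whether a PBX answers on port 5060. Host and user values are placeholders."""
import socket
import uuid

def sip_options_ping(host: str, port: int = 5060, timeout: float = 3.0) -> str:
    call_id = uuid.uuid4().hex
    branch = "z9hG4bK" + uuid.uuid4().hex[:16]
    msg = (
        f"OPTIONS sip:{host} SIP/2.0\r\n"
        f"Via: SIP/2.0/UDP 0.0.0.0:5060;branch={branch}\r\n"
        "Max-Forwards: 70\r\n"
        f"From: <sip:ping@invalid>;tag={uuid.uuid4().hex[:8]}\r\n"
        f"To: <sip:{host}>\r\n"
        f"Call-ID: {call_id}\r\n"
        "CSeq: 1 OPTIONS\r\n"
        "Content-Length: 0\r\n\r\n"
    )
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(msg.encode(), (host, port))
        try:
            reply, _ = sock.recvfrom(4096)
            return reply.decode(errors="replace").splitlines()[0]
        except socket.timeout:
            return "no reply (blocked, dropped, or not listening)"

if __name__ == "__main__":
    print(sip_options_ping("pbx.example.com"))  # placeholder hostname
```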

If you’re reading this, and are becoming increasingly concerned, you’re not wrong. If you’re in the technology field, you’ve probably been taught that exposing any server directly to the internet is wrong, bad, horrible, and stupid. Generally speaking, that’s all correct, but luckily many cloud service providers will offer the ability to create access control lists to place in front of your server, like the one below from Microsoft Azure.

Cloud service Microsoft Azure

This gives you the ability to control access by port, source and destination IP address. Additionally, FreePBX has built-in intrusion detection (Fail2Ban) and a responsive firewall, allowing you to further restrict access to ports and services. Is this hack proof? No, of course not. Nothing is hack proof, but I have run my personal FreePBX, exposed directly to the internet, with zero successful attacks. No, that’s not a challenge, and you can’t have my IP address. You can, however, have some of the would-be hackers’ IPs (see below).

freepbx hackers ip

If you’d like to learn about the firewall that FreePBX has put together, go here. I’m not suggesting that this is just as good as placing an on-prem VoIP system behind a hardware firewall, but the results so far are that it works very well. Using a cloud solution will always be at your own risk, so do plenty of testing and take whatever measures are needed to secure your system (disclaimer).

 

SETTING UP (REMOTE) EXTENSIONS:

One of my favorite features of a cloud-based system is that all extensions are essentially remote extensions. This means you can place a phone anywhere in the world with an internet connection and, in theory, place calls as if you were sitting in the office or at home. There are some variables to this configuration, mainly restrictions on whatever network your phone is connected to, but generally speaking it’s a useful and user-friendly setup. For the rest of the article, I will assume that you know how to create an extension on FreePBX and have basic familiarity with the interface.

The first thing I typically do when deploying a new VoIP system is define all of the network information for SIP. This is important for both cloud and on-prem systems. Specifically, you need to tell FreePBX which networks are local and which are not. To accomplish this, proceed to Settings > Asterisk SIP Settings and define your external address and local networks.

General-SIP-setting

Next, if you have your firewall turned on, you should make sure SIP is accessible. You’ll notice in the image below that the “Other” zone is selected, meaning I have defined specific networks that are allowed under Zones > Networks. To allow all SIP traffic, you can select “External,” but you would be better off enabling the Responsive Firewall, which rate-limits all SIP registration attempts and will ban a host if a registration fails a handful of times.

CHAN_SIP

Also, something to pay attention to: make sure you use the right port number. By default, PJSIP is enabled and in use in FreePBX on port 5060 UDP. I will generally turn off PJSIP and re-assign 5060 UDP to Chan SIP. This can be adjusted under Settings > SIP Settings > Chan SIP Settings and PJSIP Settings.

Bind-Port

Once the ports are re-assigned, you MUST reboot your system or, in the command line, run ‘fwconsole restart.’ I also like to tell FreePBX to use only Chan SIP. To do that, go to Settings > Advanced Settings > SIP Channel Driver = Chan SIP. PJSIP is perfectly functional, but for now I recommend you stick with Chan SIP, as PJSIP is still under development.

We should also set the global device NAT setting to “Yes.” This will be the option used whenever you create a new extension. Without making this the global default, you will have to make the change manually in each extension, which you’ll likely forget to do, and your remote extension will not register. This setting lets FreePBX know that it can expect the IP phone or endpoint to be external and likely behind a NAT firewall. To change this global setting, go to Settings > Advanced Settings > Device Settings > SIP NAT = Yes.

SIP-Nat

Lastly, if you haven’t turned off PJSIP, make sure your extensions are using the Chan SIP driver. You can convert extensions from one channel driver to the other within an extension’s settings.

SIP type

At this point, you should be able to register your remote extensions to your cloud-based FreePBX system. If you are running into trouble, run through these troubleshooting steps (a small diagnostic sketch follows the list):

  1. Check the firewall – Allowing SIP? Are you being blocked?
  2. Check Fail2Ban (Admin > System Admin > Intrusion Detection) – are you banned?
  3. Check that your networks are properly defined in SIP Settings
  4. Verify you are registering to the proper port
  5. Make sure the extension is using the proper protocol
  6. Debug the registration attempt in the command line – Authentication problem?
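
If you'd rather script a couple of those checks, here is a small Python sketch you can run on the PBX itself. It checks whether anything is listening on UDP 5060 and asks Fail2Ban about a jail; the jail name 'asterisk' is a guess and will vary with your setup:

```python
"""Small diagnostic sketch for the checklist above: confirm something is
listening on UDP 5060 and query Fail2Ban. Jail name is an assumption."""
import socket
import subprocess

def udp_5060_in_use(port: int = 5060) -> bool:
    # If we can bind the port, nothing is listening on it (bad sign for SIP).
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        try:
            sock.bind(("0.0.0.0", port))
            return False
        except OSError:
            return True   # already in use -- Asterisk is likely listening

def fail2ban_status(jail: str = "asterisk") -> str:
    try:
        return subprocess.run(
            ["fail2ban-client", "status", jail],
            capture_output=True, text=True, check=True,
        ).stdout
    except (FileNotFoundError, subprocess.CalledProcessError) as err:
        return f"could not query Fail2Ban: {err}"

if __name__ == "__main__":
    print("UDP 5060 in use locally:", udp_5060_in_use())
    print(fail2ban_status())
```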

I hope this article sheds some light on the topic of cloud-based VoIP systems and how to set up extensions for them. I also hope it saves you a few hours of troubleshooting if you are not well versed in FreePBX configuration. As a friendly reminder, before you make any changes to your production system, take a backup or snapshot, and always test your changes.