Set Up Extensions on a Cloud Based FreePBX
One of the best things about modern VoIP systems is how flexible they are when it comes to deployment. You can run them on an appliance, virtualized, or on a cloud-based service like Amazon AWS, Google Cloud, or Microsoft Azure. Each configuration requires a slightly different approach to making everything work, and one of the first challenges is registering extensions. For this post, we’ll focus on the general concepts of setting up extensions for a cloud-based (hosted) solution with FreePBX.
If you’ve never heard of FreePBX and you’re in the market for a new VoIP system, you should start doing a little research (and also call VoIP Supply). To be brief, it’s a turn-key PBX solution that uses Asterisk, a free SIP-based VoIP platform. Sangoma, the makers of FreePBX, have created a web user interface for Asterisk to simplify configuration. They’ve also added an entire security architecture and a lot of features above and beyond what pure Asterisk (no user interface) provides, such as Endpoint Manager, a way to centrally configure and manage IP phones.
FreePBX isn’t the only product that does this; there are actually quite a few, but FreePBX has really raised the bar in the past few years and has become a very serious solution for the enterprise. Don’t let the word “Free” in FreePBX lead you to think it’s a cheaply made system.
FIRST, A LITTLE ABOUT VOIP CLOUD SECURITY:
There’s a huge benefit to hosting a VoIP system in the cloud: you have to deal with very little NAT. Why is that good? SIP and NAT generally do not cooperate with each other. It’s very common for SIP header information to be incorrect without a device such as a session border controller (SBC) or a SIP application layer gateway (SIP ALG). When deploying a system on premises, you will always need to port forward SIP (UDP 5060) and RTP (UDP 10,000-20,000) at a minimum. You’ll also need to make sure these ports are open on your firewall. This directs SIP traffic to your phone system, much as you would for a web or mail server.
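To make the NAT problem concrete, here’s a minimal Python sketch of a SIP REGISTER request as a phone behind NAT would build it (the extension number, addresses, and PBX hostname are hypothetical). Note how both the Via and Contact headers carry the phone’s private IP; unless an SBC or SIP ALG rewrites them, the PBX will try to reply to an address it can’t reach.

```python
# Sketch only: the SIP headers that NAT tends to break. A phone behind
# NAT builds its REGISTER using its *private* address; without an SBC
# or SIP ALG rewriting these headers, replies go to an unreachable IP.
def build_register(private_ip, ext, pbx_host, port=5060):
    return "\r\n".join([
        f"REGISTER sip:{pbx_host} SIP/2.0",
        # Via tells the PBX where to send responses -- here, a private IP
        f"Via: SIP/2.0/UDP {private_ip}:{port};branch=z9hG4bK776",
        f"From: <sip:{ext}@{pbx_host}>;tag=49583",
        f"To: <sip:{ext}@{pbx_host}>",
        "Call-ID: a84b4c76e66710",
        "CSeq: 1 REGISTER",
        # Contact is where inbound calls get routed -- also unreachable
        f"Contact: <sip:{ext}@{private_ip}:{port}>",
        "Content-Length: 0",
        "", "",
    ])

msg = build_register("192.168.1.50", "1001", "pbx.example.com")
```

With the PBX in the cloud and the phone’s traffic arriving from a public address, far less of this rewriting is needed.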
Of course, there are security concerns when exposing SIP directly to the internet, and the same concerns apply to a hosted system, but with a cloud solution you are generally given a 1:1 (one to one) NAT from your external IP address to the VoIP system’s internal IP. A 1:1 NAT ensures all traffic is sent to the system without any additional rules. Some cloud services place an external IP address directly on your server, which simplifies things further.
If you’re reading this, and are becoming increasingly concerned, you’re not wrong. If you’re in the technology field, you’ve probably been taught that exposing any server directly to the internet is wrong, bad, horrible, and stupid. Generally speaking, that’s all correct, but luckily many cloud service providers will offer the ability to create access control lists to place in front of your server, like the one below from Microsoft Azure.

This gives you the ability to control access by port, source, and destination IP address. Additionally, FreePBX has built-in intrusion detection (Fail2Ban) and a responsive firewall, allowing you to further restrict access to ports and services. Is this hack proof? No, of course not. Nothing is hack proof, but I have run my personal FreePBX, exposed directly to the internet, with zero successful attacks. No, that’s not a challenge, and you can’t have my IP address. You can, however, have some of the would-be hackers’ IPs (see below).
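As a rough illustration of what an ACL or trusted-networks list does, here is a short Python sketch; the CIDR ranges are hypothetical placeholders, not a recommendation. Only sources inside the permitted networks get through to SIP:

```python
import ipaddress

# Illustrative sketch of ACL / "trusted networks" logic: allow SIP
# only from sources inside permitted CIDR ranges. The networks below
# are hypothetical (an office /24 and a single home IP).
ALLOWED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.42/32"),
]

def sip_allowed(source_ip):
    """Return True if source_ip falls inside any allowed network."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)
```

Whether the check runs in a cloud provider’s ACL or in the FreePBX firewall, the effect is the same: unknown sources never reach the SIP stack.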

If you’d like to learn about the firewall that FreePBX has put together, go here. I’m not suggesting that this is just as good as placing an on-prem VoIP system behind a hardware firewall, but so far it has worked very well. Using a cloud solution will always be at your own risk, so do plenty of testing and take whatever measures are needed to secure your system (disclaimer).
SETTING UP (REMOTE) EXTENSIONS:
One of my favorite features of a cloud-based system is that all extensions are essentially remote extensions. This means you can place a phone anywhere in the world with an internet connection and, in theory, place calls as if you were sitting in the office or at home. There are some variables to this configuration, mainly restrictions on whatever network your phone is connected to, but generally speaking, it’s a useful and user-friendly solution. For the rest of the article, I will assume that you know how to create an extension on FreePBX and have basic familiarity with the system.
The first thing I typically do when deploying a new VoIP system is to define all of the network information for SIP. This is important for both cloud systems and on-prem. Specifically, you need to tell FreePBX which networks are local and which are not. To accomplish this, go to Settings > Asterisk SIP Settings and define your external address and local networks.

Next, if you have your firewall turned on, you should make sure SIP is accessible. You’ll notice in the below image that the “Other” zone is selected, meaning I have defined specific networks that are allowed under Zones > Networks. To allow all SIP traffic, you can select “External,” but you would be better off enabling the Responsive Firewall, which rate limits all SIP registration attempts and will ban a host if registration fails a handful of times.
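The responsive firewall’s behavior can be sketched roughly as follows. This is an illustrative Python model of rate-limit-then-ban logic, not FreePBX’s actual implementation, and the threshold and window values are assumptions:

```python
import time
from collections import defaultdict

FAIL_LIMIT = 5        # hypothetical: failures allowed before a ban
WINDOW_SECONDS = 60   # hypothetical: rate-limit window

class ResponsiveFirewall:
    """Toy model: ban a host after repeated failed SIP registrations."""

    def __init__(self):
        self.failures = defaultdict(list)   # ip -> failure timestamps
        self.banned = set()

    def register_failure(self, ip, now=None):
        now = time.time() if now is None else now
        # keep only failures that fall inside the rolling window
        recent = [t for t in self.failures[ip] if now - t < WINDOW_SECONDS]
        recent.append(now)
        self.failures[ip] = recent
        if len(recent) >= FAIL_LIMIT:
            self.banned.add(ip)

    def is_banned(self, ip):
        return ip in self.banned
```

A legitimate phone with correct credentials never accumulates failures, while a brute-force scanner gets banned within seconds.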

Also, something to pay attention to: make sure you use the right port number. By default, PJSIP is enabled and in use in FreePBX on port 5060 UDP. I will generally turn off PJSIP and reassign UDP 5060 to Chan SIP. This can be adjusted under Settings > SIP Settings > Chan SIP Settings, and PJSIP Settings.

Once the ports are reassigned, you MUST reboot your system or, from the command line, run ‘fwconsole restart.’ I also like to tell FreePBX to use only Chan SIP. To do that, go to Settings > Advanced Settings > SIP Channel Driver = Chan SIP. PJSIP is perfectly functional, but for now, I recommend you stick with Chan SIP as PJSIP is still under development.
We should also set the global device NAT setting to “Yes”. This will be the option used whenever you create a new extension. Without making this the global default, you will have to make the change manually in each extension, which you’ll likely forget to do, and your remote extension will not register. This setting lets FreePBX know that it can expect the IP phone or endpoint to be external and likely behind a NAT firewall. To change this global setting, go to Settings > Advanced Settings > Device Settings > SIP NAT = Yes.

Lastly, if you have turned off PJSIP, make sure your extensions are using Chan SIP. You can convert extensions from one channel driver to the other within an extension’s settings.

At this point, you should be able to register your remote extensions to your cloud based FreePBX system. If you are running into trouble, run through these troubleshooting steps:
- Check the firewall – Allowing SIP? Are you being blocked?
- Check Fail2Ban (Admin > System Admin > Intrusion Detection) – Are you banned?
- Check that your networks are properly defined in SIP Settings
- Verify you are registering to the proper port
- Make sure the extension is using the proper protocol
- Debug the registration attempt in the command line – Authentication problem?
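For the port-verification step, a quick probe can help rule out network or firewall blocks. The sketch below checks a TCP port; note that UDP 5060 can’t be verified this way since UDP is connectionless, so this only helps if your PBX also listens for SIP over TCP (the host and port you pass are up to you):

```python
import socket

# Quick reachability probe for a TCP port. UDP 5060 can't be checked
# like this (no handshake to observe), but if the PBX also listens on
# TCP, a failed connect points to a network or firewall block rather
# than a SIP configuration problem.
def tcp_port_open(host, port, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If the TCP probe succeeds but registration still fails, the problem is more likely credentials, NAT settings, or a Fail2Ban entry than the network path.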
I hope this article sheds some light on cloud-based VoIP systems and how to set up extensions for them. I also hope it saves you a few hours of troubleshooting if you are not well versed in FreePBX configuration. As a friendly reminder, before you make any changes to your production system, take a backup or snapshot, and always test your changes.
Veeam: Your cloud backup customization option
Cloud backup is a viable option for many use cases, including but not limited to storage, critical workload management, disaster recovery, and much more. And as we have covered earlier in this series, it can also be made secure and reasonably priced, and migration can be simplified. We found one of the major cloud concerns in last year’s end user survey to be customization. Let’s dive into where customization and the cloud meet.
How customizable is the cloud?
In order to get the most out of their cloud investment, businesses need to be able to tailor the cloud to their exact needs. And even though cloud customization seems to be a concern, there is a general consensus in the IT community that the cloud is customizable. And when you consider the premise of AWS, Azure and other IaaS offerings that allow you to customize services specifically to your needs from day zero, it’s easy to see why. The cloud and customization seem to go hand-in-hand in some respects. Customization is also a key component of configuring cloud security: being able to customize your cloud environment to meet exact compliance needs, depending on what industry you are in or in which region or country your data resides, makes customization a vital capability within the cloud.
Supreme scalability of cloud
Talking about cloud customization would not be possible without also mentioning the flexibility and scalability that come with utilizing cloud over on-premises. If operations are conducted on-premises, then scaling up typically means buying new servers, and will require time and resources to deploy. The cloud offers pay-as-you go models and scaling happens instantly with no manual labor required. If there is a peak in activity, cloud resources can be added and scaled back down when business activity returns to normal. This ability to rapidly scale up or down through cloud can give a business true operational agility.
Customizing your backup data moving to the cloud
Depending on the data management software you use, you can enable a highly customized approach to handling data moving to the cloud. Veeam offers ultimate flexibility when it comes to the frequency, granularity and ease of backing up data to the cloud, helping you meet 15-minute RPOs, which in turn impact RTOs. What’s great is that the products Veeam uses for backup and replication can also be used as migration tools, making the task of moving to the cloud easier than it first seemed. Let’s go over existing and new Veeam Cloud backup offerings to see how they can be utilized to customize various aspects of cloud backup.
Veeam and cloud customization
First and foremost: backup and replication, the two functions used in virtually any environment to ensure the safety and redundancy of your data. You can send your data off site with Veeam Cloud Connect to a disaster recovery site, or you can create an exact duplicate of your production environment that lags production by as little as 15 minutes. And you can use these same options to get your data into the cloud, be it a cloud repository for storing backups or a secondary site via DRaaS, all within a single Veeam Backup & Replication console.
Since Veeam Cloud Connect operates over the network, we’ve made sure to provide encrypted traffic and built-in WAN acceleration to optimize every bit of data that is sent over. WAN acceleration minimizes the amount of data sent, excluding blocks that were already processed and can be taken from the on-site cache. That comes in really handy during migrations, since you may be processing a lot of similar machines and files. This acceleration is included in the Azure proxy, along with other optimizations that help reduce network traffic usage.
Additionally, you can use Direct Restore to Microsoft Azure to gain an extra level of recoverability. First, set up and pre-allocate Azure services, then simply restore your machine to any point in time in a couple of clicks. What’s really cool is that you’re not limited to restoring only virtual workloads; you can migrate physical machines as well!
The Veeam Agent for Microsoft Windows (beta version available soon) and the now-available Veeam Agent for Linux will help you create backups of your physical servers so that you can store them in Veeam repositories for further management, restores and migration, should you ever need to convert your physical workloads to virtual or cloud. Not only does Veeam provide multiple means of getting data to the cloud, but you can also back up your Microsoft Office 365 data and migrate it to your local Exchange servers, and vice versa, with Veeam Backup for Microsoft Office 365! Many companies have moved their email infrastructure to the cloud, so Veeam provides the ability to have a backup plan in case something happens on the cloud side. That way, you’ll always be able to retrieve deleted items and keep access to your email infrastructure.
All these instruments are directly controlled by you, and most of them can be obtained through a service provider to take the management off your plate. When working with a provider, it is important to ask what can be customized or configured to ensure the cloud environment meets your specific needs. This makes a cloud service provider a very valuable partner, as they can give you expert advice, reduce complications, and set expectations when it comes to cloud environments and their ability to be customized.
This article was provided by our service partner: Veeam
How Mobile Device Management Can Reduce Mobile Security Risks
Today’s modern workplace is home to users who carry their work and personal lives in their pockets. From smartphones to tablets, mobile devices keep us connected and always working. Users can work from anywhere, but that means opening the door to security threats if mobile devices aren’t properly protected. Mobile Device Management (MDM) is a service that helps provide that protection.
The Bad News
Mobile security risks are real, and they are expanding every day. Public Wi-Fi networks open the door to hackers who can take advantage of security holes and access confidential company information stored on mobile devices. If a mobile device becomes infected with malware, the malware could spread through the entire network.
The portability of mobile devices means a greater risk for loss and theft. When unprotected devices disappear, they put access to sensitive business information in unauthorized hands. No business wants to worry about the repercussions of outside access to proprietary information. Just picture the headlines: CEO’s Lost iPhone Leads to Customer Data Breach.
The Good News
Mobile device management (MDM) solutions can help protect against the threats that are out there. Mobile Device Management helps you make sure critical information is protected no matter how your clients’ employees access it.
MDM gives you the ability to enforce minimum security requirements on mobile devices that access your client networks, which helps protect against data compromise. Lost devices can be found with geo-location tracking. If they don’t turn up, the devices can be remotely wiped to protect data with just a few mouse clicks. Security settings can be adapted to require passcodes, set a time before auto-lock, auto-wipe devices after a maximum number of failed login attempts, and more.
The point is, MDM keeps your clients’ networks better protected. The extra layer of data security gives your clients peace of mind and helps you maintain your role as a trusted advisor. With that in mind, what do you need to look for in an MDM solution?
If you really want to get the most from your MDM solution, look for one that’s going to work easily with your existing solutions. Integration with your remote monitoring and management (RMM) platform and other automation solutions will save you time in setup and implementation, and will enable your technicians to manage mobile devices through the same interface through which they’re already managing your clients’ computers.
In short, the right MDM solution means you’ll be better able to protect vital data from mobile security risks while keeping your clients’ users connected to the information they need to do their jobs.
Now you know what MDM can do to keep your clients safe from mobile threats. Check back next week for tips to help you explain the benefits of Mobile Device Management to your clients and make the sale.
This article was provided by our service partner Labtech.
What You Should Know about Vendor Management
The contingent space is changing, and it’s up to CEOs, CFOs, and CPOs to understand and analyze what is happening in order to ensure that their companies are addressing the contingent workforce tsunami that is soon to come. Your on-demand, non-permanent workforce is a dynamic ecosystem—one that is subject to change and evolve at any moment. You must be prepared in order to ensure continued cost savings, efficiency, and productivity.
The contingent workforce presents its own challenges, particularly when it comes to management. When different hiring managers and department heads use different vendors to source their talent, using different processes, services, and price points, the result is a largely disjointed and ineffective model that is riddled with inefficiencies and inconsistencies that affect your bottom line. Such a model will lead to suffering supplier relationships and missed opportunities for great talent.
The Managed Services Provider’s Role
As the contingent workforce continues to grow at a rapid rate, it becomes more critical than ever for your company to get a solid grasp on how to effectively manage your on-demand workers. Turning to a managed services provider (MSP) can allow you to optimize and streamline your contingent workforce management. As this workforce grows, it becomes more complex, and as such, it requires a greater level of expertise that an MSP can provide.
An MSP can analyze and design your program, review and re-engineer your process, optimize your supply-base management, implement change management, and create active cost management that will result in better value for your dollar. An MSP can take all of your people-based transactions and create a powerful marketplace where you can create a competitive bidding process, consolidate spend, and save an incredible amount of money all while exceeding services.
Vendor Management Technology
How does an MSP provide your organization with an ROI? Apart from its valuable experience and expertise in workforce management, it also uses vendor management technology. With the right technology, your managed services provider can create greater efficiencies, higher productivity, increased compliance, and high cost savings through increased visibility.
Vendor management technology can automate and streamline the processes that support the sourcing, acquisition, management, and payment of your on-demand workforce, including requisitions, approvals, time management and scheduling, expense management, payments, and reporting.
When you have big data working on your side to track, measure, and analyze your processes and your workers, you can make more informed decisions, save time, and save money.
Why a Neutral MSP Is Critical When It Comes to Vendor Management Technology
Not all vendors or technologies are created equal. Some will allow for better results than others, and it’s critical to have a neutral MSP that will make informed decisions based on your unique business culture, needs, and goals when making suggestions and recommendations.
MSPs that remain technology and vendor neutral can ensure that all decisions are made to support your exact needs and to add the most value to your business. Having the right vendor management system is critical for your ROI. A neutral MSP will not own or operate any specific system or technology. Rather, it will use total objectivity when determining which platform your company should use.
Transform Your Business for the Better
An effective vendor management system includes the support from a truly neutral managed services provider as well as the right technology. When you let the experts handle the workforce management processes and the technologies required for you to see the best ROI, you can have the best people, the best practices, and the best technology working to achieve your goals. You can bring order to your contingent workforce and your supply chain in order to positively transform your bottom line—and your business.
5 Must-Have Features of Your Remote Monitoring Solution
As a technology solution provider (TSP), chances are you have a desire to take your business to the next level. The TSPs that are successful in this endeavor have a key ingredient in common: they are armed with the right tools for growth. The most critical tool for success in this business is a powerful remote monitoring and management (RMM) solution.
So the question is, what should you be looking for when you purchase an RMM tool, and why are those features important to your business?
The right RMM tool impacts your business success with five key benefits. With a powerful and feature-rich RMM solution in place, you can:
Automate any IT process or task
Work on multiple machines at once
Solve issues without interrupting clients
Integrate smoothly into a professional services automation (PSA) tool
Manage everything from one control center
To better understand why these features are so influential, let’s talk a little more about each of them.
1. Automate Any IT Process or Task
Imagine being able to determine a potential incident before your client feels the pain, and fix it in advance to avoid that negative business impact. Being able to automate any IT process gives you the proactive service model you need to keep your clients happy for the long haul.
2. Work on Multiple Machines at Once
To solve complex issues, a TSP must be able to work on all the machines that make up a system. If you are attempting to navigate this maze via a series of webpages, it is hard to keep up with progress and easy to miss a critical item during diagnosis. Having the ability to work on multiple machines at once is paramount to developing your business model and maximizing your returns.
3. Solve Issues Without Interrupting Clients
One of the biggest challenges that MSPs face is fixing issues without impacting their clients’ ability to work. With the wrong tools in place, the solution can be nearly as disruptive as the issue it’s meant to fix. The right tool must allow a technician to connect behind the scenes, troubleshoot and remediate the problem without impacting the client’s ability to work.
4. Integrate Smoothly into a PSA Tool
Two-way integration between your RMM and PSA solutions eliminates bottlenecks and allows data to flow smoothly between the tools. The goal of integration is to enable you to respond more quickly to client needs as well as capture and store historical information that leads to easier root cause analysis.
A solid integration will also increase sales by turning data into actionable items that result in quotes and add-on solutions. There are several key areas to examine when looking at how a PSA and RMM integrate.
5. Manage Everything from One Control Center
The control center for your RMM solution should be the cockpit for your service delivery. Having the ability to manage aspects that are directly related to service delivery such as backup and antivirus from the same control center keeps your technicians working within a familiar environment and speeds service delivery. Also, it cuts down on associated training costs by limiting their activities to the things that matter on a day-to-day basis.
Success means equipping your business with the right features and functionality to save your technicians time while increasing your revenue and profit margins. Selecting an RMM solution that solves for these five influential features is the key to getting started down the path to success. What are you waiting for?
This article was provided by our service partner : Labtech.
Best practices for software license management
Software license management is the process that ensures that the legal agreements that come with procured software licenses are adhered to. In a basic sense, it ensures that only legally procured licenses are deployed on systems.
Organizations spend a fortune on licenses every year, and a lack of management around it can result in heavy fines. In some cases, CIOs of certain organizations have been taken into custody for violating norms.
In this article, I will provide the basic concept of how software license management works along with a process map.
Prerequisites for implementing license management
1. Software Asset Management Tool – Organizations will have to invest in database software capable of recording various types of licenses against their respective owners. There are numerous software asset management tools available on the market, including free ones. All-in-one database tools can store deployment details of software alongside license details. Popular options include FlexNet Manager by Flexera Software, Software Asset Management by Microsoft and License Manager by License Dashboard.
2. Software License Auditor Tool – The auditor tool runs over the company network and identifies the licenses deployed across all systems on the network. The tool deploys agents to every system; these agents report the installed licenses back to the central engine, which consolidates the totals across the network. Prefer an auditor tool that can connect to the software asset management tool through APIs.
3. Asset inventory – Before getting into licenses, it is absolutely necessary to have an asset inventory with identified owners. The inventory must account for all systems in the organization, or at least the operational ones, and every system must have an owner: someone accountable for what gets installed on it.
4. People and processes – Managing software licenses does not happen with these tools alone. You also need dedicated license managers and processes that maintain compliance. The processes woven around software licenses must ensure 360-degree control over licenses purchased, deployed, archived and expired.
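The auditor tool's consolidation step can be sketched in a few lines: a hypothetical central engine merging per-host install reports sent by audit agents. All host names, product names and function names here are illustrative, not any real tool's API.

```python
from collections import Counter

# Hypothetical per-host reports, as auditor agents might send them
# to the central engine (illustrative data only).
agent_reports = {
    "host-01": ["OfficeSuite", "DTPTool"],
    "host-02": ["OfficeSuite"],
    "host-03": ["OfficeSuite", "DTPTool"],
}

def consolidate(reports):
    """Merge per-host install lists into a total count per product."""
    totals = Counter()
    for installed in reports.values():
        totals.update(installed)
    return totals

totals = consolidate(agent_reports)
print(totals["OfficeSuite"])  # 3
print(totals["DTPTool"])      # 2
```

A real auditor tool does far more (license-key matching, version detection, API sync with the asset database), but the core output is exactly this kind of product-to-count mapping.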
Step 1: Obtain all procured license details
The starting point for implementing software license management is to find out where you currently are and what you own. Record every procured license you identify.
Not all licenses come in the same shape and color. Here are some of the most common types:
Named user license – A license that can be used by a particular user
Volume license – A single license can be used on multiple systems depending on the purchased volume
License under enterprise agreement (EA) – Similar to volume license, terminology used by Microsoft when volume licenses exceed 250
Concurrent Licenses – Licenses that can be installed on any number of machines but can be used on a limited number of machines at any given point
OEM (Original Equipment Manufacturers) – Software license that accompanies hardware
Evaluation – Trial license which may come with limited functionality or for a definite period of time
Free License – License that is available for free
Step 2: Identify all license deployments
Once you build a baseline of the licenses the organization owns, identify where those licenses are deployed across the organization. The software license auditor tool is a big help in identifying the deployments. Manual inventory of software licenses, even if script driven, is a big no-no.
All license deployments are then consolidated to identify how many machines each license is deployed on. In the crude example displayed above, the OS license is installed on three systems while the DTP license is on two.
Names of system owners are important too if you are dealing with a named user license. Asset inventory comes in handy in identifying system owners.
Step 3: Compare license purchases vs. license deployments
In most cases, you should be able to tell if your organization is in software license compliance or not by comparing the procured licenses against deployed ones.
(As I mentioned earlier, named user licenses can be non-compliant even when the number of installations is under control, if they are assigned to the wrong set of people.)
Step 4: Uninstall or procure licenses
If you are in compliance, there is nothing to fret about. If you're not, there are two logical choices: purchase more licenses or uninstall software from certain machines.
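The compare step reduces to simple arithmetic plus the named-user check. A minimal sketch, assuming per-product counts from the previous steps (product names, entitlement figures and user names are made up for illustration):

```python
# Hypothetical entitlements: product -> number of licenses purchased.
procured = {"OS": 3, "DTP": 2, "CAD": 1}

# Deployments observed by the auditor tool: product -> machines installed on.
deployed = {"OS": 3, "DTP": 2, "CAD": 2}

def compliance_gaps(procured, deployed):
    """Return products where deployments exceed purchased licenses."""
    return {
        product: count - procured.get(product, 0)
        for product, count in deployed.items()
        if count > procured.get(product, 0)
    }

print(compliance_gaps(procured, deployed))  # {'CAD': 1}

# Named-user licenses also need the *right* users, not just the right count.
named_entitled = {"alice", "bob"}
named_installed = {"alice", "carol"}
wrong_users = named_installed - named_entitled
print(wrong_users)  # {'carol'}
```

A gap of 1 on CAD means either buying one more license or uninstalling it from one machine; the named-user mismatch means reassigning the license even though the counts balance.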
What next?
In this blog, I’ve given you ways to get a snapshot of what to expect with license compliance. But how do you ensure you remain in compliance at all times? There are solutions and processes to ensure that an organization stays in compliance at all times. I will introduce them in my next piece.
Select the Right Managed Print Service
We are moving toward a world of digitization, and businesses rely heavily on digitized data. This change has not fully taken hold, however: even today, much important as well as routine business activity and information remains on hard copy, and the habit of printing has proven difficult to change. A recent survey found that an average of 14% of global company revenue is wasted on document-related activities, owing to inefficient printing practices. To eliminate such expenses, it is worth adopting a Managed Print Service.
What is MPS, and how can it be beneficial?
Managed print services (MPS) are offered by an external provider to manage and optimize document output and meet the printing needs of the entire firm. The provider also takes care of all printing-related devices, such as printers, copiers, fax machines and multifunction devices.
MPS not only reduces print-related expenses; it also makes those expenses more predictable and visible. To get the most out of it, however, it is important to select the right MPS provider, one that can help the company reduce repair and maintenance costs. Below are five tips to help you select the best MPS provider:
1. Should be able to provide multi-vendor support
A company might be expected to standardize on identical printing equipment, but in practice there are exceptions. Hence, it is important to select an MPS provider that can service all types of equipment and has the necessary expertise for the required services and repairs, along with IT expertise across a wide variety of equipment.
2. Provide good customer experience
The MPS provider should be dedicated to improving the efficiency of the print infrastructure, keeping pace with the changing needs of the office. They should focus on improving customers' operations and enhancing employee productivity. A survey by IT advisory firm Quocirca found that 30% of organizations viewed printing as a critical component of their business. Hence, the provider should make printing a hassle-free process and boost the company's productivity.
3. Meet the demand for mobile printing
As businesses change how they operate, demand for mobile printing is growing. Work increasingly happens outside the traditional office environment, which has led to a greater need for mobile printing. The MPS provider should offer apps or extensions that fulfill mobile printing needs, and should provide smart managed print services that optimize and streamline processes and workflows.
4. Should focus on enhancing productivity
The MPS provider should not only provide print solutions but also manage them effectively and efficiently. They should analyze printer usage data and make changes or recommendations that increase fleet efficiency, adapt their services to the changing needs of the business, streamline workflows and provide cost-effective solutions.
5. Maintain flexibility and consistency
The MPS provider should be ready to evolve with the changing needs of the business and help it face new challenges. Some companies may want a single vendor, while others opt for multiple vendors; the MPS provider should be flexible enough to deliver solutions that fit the company's needs. Moreover, the standard of service delivery has to be consistent across locations and over time.
Summary:
The advantages of adopting MPS go well beyond reducing printing and paper usage. It can help your firm streamline processes and workflows through fewer devices, fewer pages and lower costs. It also enhances knowledge-worker productivity and helps you manage ever-increasing information and data.
What can be considered ‘warranty management’ for a managed IT service?
Amid the plethora of IT offerings companies are faced with, products and services have become extremely competitive, not only on price but also in assuring buyers that what is offered is of good quality, will last and can deliver on its promise. As this has become the norm, no business would dare buy hardware or software that came without a written warranty. But how can organisations get some sort of guarantee of quality and efficiency when what they want to buy is not a product but a service?
Best practice is designed to understand the utility and warranty of any investment, and it is important that the distinction between the two is understood. The utility of an investment is the recognition of whether it is ‘fit for purpose’; the warranty goes beyond that, recognising whether your fit-for-purpose product is actually fit for use.
Firstly, it is important to understand which aspects are central in defining what can be identified as ‘warranty’ for a managed service. A good track record is of course imperative for the Service Provider, but this does not necessarily mean a very large number of clients of all types and sizes. Larger and widely-known Service Providers are not automatically the best choice for an organisation – can they understand your particular business, give you what you need and deliver the most cost-efficient service? You will find that a provider which is specialised or has relevant experience in dealing with organisations that are very similar to yours in type, size and needs might be the best choice for you. So this is what you should look at as a guarantee: a provider that has successfully carried out projects for clients that are similar to your organisation.
At the same time, it is important that the provider does not offer you an out-of-the-box solution for ‘all organisations like yours’. You might be similar in structure and needs to other organisations, but this does not mean you do not have some important differences. For example, NHS clinics all have similar needs and structure, but are very different in the way they deal with them – most clinics use customised software and have different types of end users. The same is true for financial firms, from banks to private investment or currency exchange firms, where efficient and tailored IT is a vital element of success. In fact, every sector is vastly different, so in a selection exercise, be sure to confirm that the Service Providers you are talking to can offer positive evidence of having supplied similar solutions. Further to that, Service Providers that serve a wider range of sectors will typically have a greater advantage in providing bespoke or ‘tailored’ solutions for your organisation.
These aspects are crucial in your choice of Service Providers, but what can guarantee the quality of the actual service itself? This mainly lies in the Service Level Agreement (SLA), which outlines agreed levels of performance monitored through certain metrics such as First-Time-Fix rate, calls answered within a set time, Abandonment rates, etc. These targets need to be consistently met, and if they are not, the Provider will be in breach of the SLA, which can have a financial impact. Consistently missing targets might mean the Provider losing the client and, in the long run, their reputation as well. With these metrics in place, it is in the provider’s own interest to perform at their best and not incur in fines or contract termination.
The choice of SLAs can make the difference between real and perceived efficiency or inefficiency. It is good practice to spend some time deciding, together with the Provider, which metrics to adopt (some will be more relevant than others) and where to set targets. Metrics have to be very detailed: setting a typical ‘70% First-Time-Fix rate’ on its own is not enough. Ask yourselves: what counts as FTF? It normally refers to simple and common issues dealt with by Service Desk staff; but should printer cartridge replacement count as a FTF even if it is done by desk-side engineers? If some end users insist on a desk visit, will it be excluded from the FTF rate? Answering these questions gives a clearer picture of how efficient or inefficient the service is, and of whether a managed service solution is right for your organisation or should be modified to improve performance.
These metrics need to be tangible and agreed before they are incorporated into a live service.
In conclusion, we could say that a ‘warranty’ for a managed service should cover both the Service Provider and the service offered. It is a guarantee of quality if the Service Provider has the right track record for your company and the appropriate SLAs are in place, along with fines and penalties for breach of the agreement. Only by carefully choosing the Service Provider which will manage your IT service is it possible to achieve efficient IT that supports and enables business success whilst bringing cost savings and general efficiencies to working practices.
———————————————————————————————————————-
This piece has been published on ITSM Portal.
Migrating to cloud backups
We have already talked about how secure backups can be in a cloud environment and what the cost may be of not leveraging the potential of DRaaS. The next step would be to start thinking about how to migrate your infrastructure or backups/replicas to cloud backups and at what scale it has to be done. We will review the main points that you need to consider and check prior to initiating your move into the world of cloud.
Who can benefit from the cloud?
The short answer is a bold one: Everyone. Regardless of the size of the operation, there is a good incentive in road mapping your migration over to the cloud as it brings a whole new level of accessibility, scalability and long-term cost savings. But what does that really mean?
When it comes to conventional disaster recovery sites, it’s hard to plan everything beforehand because you have no way of knowing when the disaster is going to strike and at what scale. You’re only as flexible as the hardware that you’re provided with. Any additional capacity would require time and more money to acquire and install.
That’s where the cloud steps up the game. You are presented with a variety of options that allow you to build a flexible DR environment that can grow and shrink its capacity at will. You only pay for the hardware actually in use, which grants incredible scalability, ready for any DR need. Not every provider offers this at full scale, but there are plenty of options to pick from based on your particular needs.
The two approaches Veeam has for businesses with on-premises deployments wanting to get backups or replicas to the cloud are Backup as a Service (BaaS) and Disaster Recovery as a Service (DRaaS). These approaches utilize cloud and service provider technologies which are flexible enough for any use case and you can avoid the cost and complexity of building and maintaining an offsite infrastructure.
So, how hard is it to migrate to the cloud?
What’s important to remember is that migrating data to the cloud is not a one-day feat and is a project that will require planning and a timeline. However, depending on what data management software you use, getting data offsite to the cloud can be a very simplified experience.
Migrating to the cloud certainly doesn’t require you to drop all the investments in your existing DR infrastructure, should you have one. If you’re already running an on-premises infrastructure, then you know that any hardware has its lifecycle and will eventually be replaced. So, you can plan to move your servers and applications to the cloud environment as the time for hardware renewals shows up on the calendar.
If you are still at the stage of designing your infrastructure, the cloud is even more beneficial: you get enterprise-grade, disaster-proof hardware at an affordable price, available right away. There is no need to worry about building and maintaining your own DR site, let alone about the time to set everything up from scratch.
In any scenario, Veeam® has the tools to make your migration to the cloud as easy as your daily backup tasks. In fact, even though Veeam Cloud Connect Backup and Replication are designed for archival and continuous synchronization, they are also a perfect instrument for migrating your infrastructure to the cloud without any hassle.
What should be migrated first?
The first contenders are the servers that will fully benefit from the flexibility and added performance of the cloud. But not every server or application needs to be, or can be, migrated right away. Plan the migration so it disrupts production no more than a typical hardware migration or upgrade would, and make sure it won't cause trouble during the process or after completion. Testing the performance of servers or applications in a lab will surface any hiccups beforehand. Sometimes an existing set of dependencies, like an on-site SQL database or Active Directory, makes it harder to move an application without reworking its workflow.
In such scenarios the use of hybrid cloud might be helpful. In a hybrid setup one part of your cloud infrastructure is private and running under your full control on-premises and the other part is in public cloud, making use of all the servers that are easily moved to cloud or will benefit from it the most.
Where do you start?
No matter the size of the infrastructure, Veeam Cloud Connect offers a solution to fully control and easily migrate on-premises data to best-in-class cloud environments, requiring no network (VPN) setup or changes to the customer environment. Whether you plan a big bang migration strategy or a trickle migration strategy, Veeam Cloud Connect allows for both.
_________________________________________________________________________________________________________________
This article was provided by our service partner Veeam
Intel igb/e1000 driver showing dropped packets on the interface
Recently I ran into a strange issue where the Intel NIC was showing dropped packets on the interface. This particular server was having other issues (performance-ish type) so we were eager to get to the bottom of this.
Symptoms and interesting finds…
A solution, though not perfect, was finally discovered: disable BPDU/STP on the switch. The environment only had one switch, so this wasn't a huge issue. On the Cisco, this was done with a spanning-tree interface command.
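The original post omitted the exact command used. For reference, on a Cisco IOS switch, filtering BPDUs on the port facing the server is typically done at the interface level; the interface name below is hypothetical, and you should verify the approach against your own switch model and IOS version:

```
! Stop sending and processing BPDUs on the port facing the server
interface GigabitEthernet0/1
 spanning-tree bpdufilter enable
```

Note that `bpdufilter` effectively disables spanning-tree protection on that port, so it is only safe where a loop cannot form, such as a port connected directly to a host.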
Some interesting resources on this:
https://forums.suse.com/showthread.php?1320-Mystery-RX-packet-drops-on-SLES11-SP2-every-30-sec