Windows Server 2016: 5 Things You Need to Know
On October 12th, Microsoft released its latest server operating system – Windows Server 2016. To ensure your success, we’ve gathered a list of the top 5 things you need to know.
We’ve been preparing for Windows Server 2016 for the past couple of months, and even attended Microsoft Ignite a few weeks ago to make sure we’re up to date on all the latest and greatest news.
While TechNet has already published a “What’s New in Windows Server 2016” article, at ConnectWise we want to take you a bit deeper and call out a few things technology solution providers like you should be aware of.
Patching
Windows Server 2016 continues Microsoft’s move to deployment rings. Windows 10 introduced 6 deployment ring options spread across 3 phases (also known as servicing branches):
Insider – 1 ring
Current Branch (CB) – 2 rings
Current Branch for Business (CBB) – 3 rings
Then, enterprise customers wanted an even slower option, so a special edition of Windows 10 was released called Windows 10 Enterprise Long-Term Servicing Branch (LTSB) – which essentially added a fourth phase / seventh deployment ring.
With Windows Server 2016, the installation option you choose will determine which servicing branch you default to. Server 2016 with Desktop Experience and Core will both default to the LTSB, which is great for reducing problems in a production environment. Just be aware that the LTSB won’t include certain things, like the Edge browser.
Nano
There’s been a ton of hype about the Nano Server option. But before you start spinning Nano Servers up in production, you should know that they don’t use the LTSB (see above). Instead, they default to the CBB, which means more frequent patches (CBB is Phase 3; LTSB is Phase 4).
Given some recently reported issues with the Windows 10 Anniversary Update, we’ll let you decide whether this is a good idea for your business and clients. Also, it’s important to note that Nano Server requires Microsoft Software Assurance.
Licensing
Speaking of Software Assurance, you may have noticed that Microsoft is changing how it licenses certain editions of Windows Server 2016.
Back in 2013, Microsoft introduced core-based licensing because processors weren’t a precise enough measure (since each processor can have a varying number of cores). You could still get Datacenter and Standard editions under the processor-based licensing model, though.
Starting with Server 2016, processor-based licensing is no longer available for the Datacenter and Standard editions. If you were lucky enough to renew your Software Assurance agreement recently, this won’t apply to you until your next renewal.
Even then, at renewal you’ll get 16 core licenses for each applicable on-premises processor license and 8 core licenses for each service provider processor license.
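To see what that means for a typical host, here’s a quick worked example. It’s a sketch assuming the commonly published Server 2016 minimums (core licenses sold in two-packs, at least 8 core licenses per processor, and at least 16 per server):
# Hypothetical host: 2 processors with 10 cores each
$processors   = 2
$coresPerProc = 10
# Apply the per-processor minimum first, then the per-server minimum
$required = [Math]::Max([Math]::Max($coresPerProc, 8) * $processors, 16)
"Core licenses required: $required ($($required / 2) two-packs)"
A 2-processor server with 8 cores per processor lands exactly at the 16-license floor; denser hosts need proportionally more core licenses than they did under processor-based licensing.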
Containers
On the plus side, if you opt for Datacenter or Standard under the core-based licensing model, you’ll now be able to use one of the most talked about features of Server 2016 – containers!
For anyone who’s not familiar with containers, Microsoft considers them “the next evolution of virtualization,” and they come in two flavors:
Windows Server containers
Hyper-V containers
With either of the core-based editions for Server 2016, you can run unlimited Windows Server containers by sharing the host kernel. If that’s a security concern for you or your clients, then you’ll want to use Hyper-V containers to isolate the host’s kernel from each container.
Just know that unlike Windows Server containers, you can only run 2 Hyper-V containers on each Standard edition server. If you want unlimited Hyper-V containers, you’ll need Datacenter edition. But whichever choice you make, both types of container can work with Docker.
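To make the distinction concrete, here’s roughly how it looks from the Docker CLI on a Server 2016 host. This is a sketch, and the image name is just an example, but the --isolation flag is the real switch between the two container types:
# Windows Server container: shares the host's kernel (the default)
docker run -it microsoft/nanoserver cmd
# Hyper-V container: same image, but isolated in a lightweight utility VM
docker run -it --isolation=hyperv microsoft/nanoserver cmd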
Windows Defender
When upgrading to Windows Server 2016 from a prior version with antivirus installed, you may run into problems. That’s because the upgrade process installs and enables Windows Defender by default.
Luckily, whether the user interface is enabled or not (which seems to depend on edition), there’s a quick PowerShell command you can run to remove Windows Defender entirely:
Uninstall-WindowsFeature -Name Windows-Server-Antimalware
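Feature names can vary by build and installation option – on some installs Defender shows up as Windows-Defender or Windows-Defender-Features rather than Windows-Server-Antimalware – so it’s worth checking what’s actually present first. A quick sketch:
# List any Defender/antimalware features and their install state
Get-WindowsFeature -Name *Defender*, *Antimalware* |
    Format-Table -Property Name, InstallState
# Then pass whichever feature shows as Installed to Uninstall-WindowsFeature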
(Bonus) Modern Lifecycle Policy
While not directly related to Windows Server 2016, here’s a bonus that partners should be aware of: Microsoft has announced their new Modern Lifecycle Policy. For now, this policy only applies to four Microsoft products:
System Center Configuration Manager (current branch)
.NET Core
ASP.NET Core
Entity Framework Core
The new policy essentially says that Microsoft will only support the current version, and once they announce end of life for a product, you have 12 months before support ends.
Given the heavy push to Microsoft’s new servicing model for Windows 10 and now Server 2016, it’s a safe bet that the list of products this policy applies to will grow.
When it comes to the release of Windows Server 2016, there’s a lot to digest (known issues, PowerShell 5.0, WMF 5.1, Just Enough Administration, IIS 10).
Given the number of clients you support that may ask about upgrading older systems or virtualizing, we’re sure you’ll have plenty of opportunity to learn more… but before your clients ask, we wanted you to be aware of some of the business and technical nuances.
This post was provided by one of our service partners, ConnectWise.
6 Steps to Client Onboarding Success
Client onboarding is the first time new clients get to see how you operate. It’s when first impressions are formed; impressions that could have a lasting impact. And if you don’t deliver on promises that were made during the sales process, what impression do you think they’ll be left with?
To make sure your client relationship starts off on the right foot (sorry lefties), you just need to follow a few simple steps.
1. Have a Plan
I’m always surprised to learn just how many people fail to use a project plan. I can’t stress this enough: a templated project plan is key to transforming your client onboarding process from mass chaos into a seamless, automated process. Outline every step that has to take place from the date the contract is signed to service go-live.
2. Use Time-Saving Automation
Using an IT automation platform, such as ConnectWise Automate (formerly LabTech), can cut hours off the manual engineering tasks many of us still do today. Let’s look at some of the places you can shave a few hours from the client onboarding process.
3. Optimize and Secure Endpoints
Automate detects more than 40 different antivirus (AV) vendors, so let it handle the AV rip and replace process. As part of the security rollout, you’ll also want to deploy a second opinion scanner, such as HitmanPro, to automatically scan for and remediate any security issues your AV software might miss. Follow that up by deploying desktop optimization software, such as CCleaner, to get those systems running smoothly without a technician ever having to touch a single desktop.
4. Software Deployment
You’ll need to make sure common applications, such as Adobe and Java, are installed and updated. You can automate this task. Using some simple logic, the Automate script engine can easily search for missing or outdated software and then install or update accordingly. No more combing through reports or visiting each desktop to find out what’s there and what’s not.
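As a rough illustration of that detection logic (this isn’t Automate’s actual script engine, just a hand-rolled PowerShell sketch, and the version threshold is a made-up example), an RMM-style script typically reads the installed-programs list from the registry and compares versions:
# Enumerate installed software from the 64-bit and 32-bit uninstall keys
$paths = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*',
         'HKLM:\SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall\*'
$installed = Get-ItemProperty -Path $paths -ErrorAction SilentlyContinue |
    Where-Object { $_.DisplayName } |
    Select-Object DisplayName, DisplayVersion

# Flag a machine where Java is missing or below a hypothetical minimum version
$java = $installed | Where-Object { $_.DisplayName -like '*Java*' } |
    Select-Object -First 1
if (-not $java) {
    'Java missing - queue an install task'
} elseif ([version]$java.DisplayVersion -lt [version]'8.0') {
    'Java outdated - queue an update task'
}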
5. Policy Deployment
Missing a critical error at any stage of the game can be detrimental; missing one during onboarding is simply unacceptable. Automate intuitively detects a machine’s role, determines which policies should be applied and automatically applies the right ones. Never again get that awkward phone call from your new client asking why their email isn’t working when you didn’t know about it because someone forgot to apply a monitor template.
6. Educate Your Sales Team
After your project plan is in place and your automated processes are built, it’s time to educate your sales team. Let them see how the onboarding process works and how long it really takes, so they can set realistic expectations from the start.
8 Essential Steps to Implement IT Best Practices
In the past, we’ve defined best practices and looked at how they benefit your business. Now let’s talk about how to implement best practices so you’ll start seeing results.
Implementing best practices is just like any other project you take on. Success comes from accounting for every detail. Make sure you have these 8 things covered when implementing best practices in your IT business.
Growing your business with best practices means happier customers, more productive employees and a better bottom line. Use these 8 tips to streamline best practice implementation, so you’ll see results fast.
Top 5 Best Practices for your Help Desk
A Help Desk is designed to be the first point of contact for customers when they have requests or problems with their technology services. And you, as the technology service provider, are responsible for addressing those issues as quickly and efficiently as possible. It is essential, then, to ensure a strategic method of managing this single point of contact for requests and issues. This will include tracking inbound and outbound ticket processes, escalation procedures, and ticket resolution.
Good luck finding clients who are OK with issues slipping through the cracks and hanging out there for extended periods of time. People just won’t stand for it, so to ensure this doesn’t happen, check out our Top 5 Best Practices for your Help Desk.
Everything is a Ticket – All incidents and requests must be a ticket to properly capture all work performed, regardless of length, nature, or severity of the request.
Keep Customers in the Loop – Leverage Closed Loop to communicate with customers, updating them on progress and the status of their service requests.
All Roads Lead to Rome – Rome being your service boards, everything ends up as a service ticket on your service boards regardless of the source. The service board is what then controls your next step through workflows.
My Life is My Service Board – Help Desk employees work service tickets on their assigned service boards in order of assignment and the service level agreement’s priority, urgency, and impact.
All Time, All of the Time, On Time – All employees must enter all time worked, on everything they work (all of the time), as it happens (on time).
Microsoft enhances troubleshooting support for Office 365
There’s a new tool from Microsoft for Office 365 that scans files for headache-inducing problems in OneDrive for Business
It appears that last week Microsoft added a new and largely unheralded capability to the Office 365 checker tool.
A change to Microsoft’s main troubleshooting article for OneDrive for Business, KB 3125202, added a reference to an option in the Microsoft Support and Recovery Assistant for Office 365 tool that can scan for files that are too big, file and folder names that contain invalid characters, path names that exceed the length limit, and several other headache-inducing problems. This appears to be a new capability for the Office 365 checker tool.
This looks like an excellent tool for anyone troubleshooting OneDrive for Business problems.
This is a repost from InfoWorld.
Microsoft to revamp its documentation for security patches
Microsoft has eliminated individual patches from every Windows version, and Security Bulletins will go away soon, replaced by a spreadsheet with tools
With the old method of patching now completely gone—October’s releases eliminated individual patches from every Windows version—Microsoft has announced that the documentation accompanying those patches is in for a significant change. Most notably, Security Bulletins will disappear, replaced by a lengthy list of patches and tools for slicing and dicing those lists.
Security Bulletins go back to June 1998, when Microsoft first released MS98-001. That and all subsequent bulletins referred to specific patches described in Knowledge Base articles. The KB articles, in turn, have detailed descriptions of the patches and lists of files changed by each patch. The Security Bulletins serve as an overview of all the KB patches associated with a specific security problem. Some Security Bulletins list dozens of KB patches, each for a specific version of Windows.
Starting in January, we’ll have two lists—or, more accurately, two ways of viewing a master table.
Keep in mind that we’re only talking about security patches and the security part of the Windows 10 cumulative updates. Nonsecurity patches and Win7/8.1 monthly rollups are outside of this discussion.
To see where this is going and to understand why it’s vastly superior to the Security Bulletin approach, look at the lists for November 8, this month’s Patch Tuesday. The main Windows Update list shows page after page of security bulletins, identified by MS16-xxx numbers, and those numbers have become ambiguous. See, for example, MS16-142 on that list, which covers both the Security-only update for Win7, KB 3197867, and the Monthly rollup for Win7, KB 3197868. The MS16-142 Security Bulletin itself runs on for many pages.
Now flip over to the Security Updates Guide. In the filter box, type “windows 7” and press Enter. You see four security patches: IE11 and Windows, both 32- and 64-bit, all associated with KB 3197867. In the Software Update Summary, searching for “windows 7” yields only one entry, for the applicable KB number.
Here’s why the tools are important. On this month’s Patch Tuesday, we received 14 Security Bulletins. Those Security Bulletins actually contain 55 different patches for different KB numbers; the Security Bulletin artifice groups those patches together in various ways. The 55 different security patches actually contain 175 separate fixes, when you break them out by the intended platform.
There’s a whole lotta patchin’ goin’ on.
Starting this month, you can look at the patches either individually (in the Security Updates Guide) or by platform (in the Software Update Summary), or you can plow through those Security Bulletins and try to find the patches that concern you. Starting in January, per the Microsoft Security Response Center, the Security Bulletins are going away.
Of course, the devil’s in the implementation details, but all in all this seems to me like a reasonable response to what has become an untenable situation.
This is a repost from http://www.infoworld.com/
Cloud backup security concerns
Many CIOs are now adopting a cloud-first strategy, and backing up and recovering critical data in the cloud is on the rise. As more and more companies explore the idea of migrating applications and data to the cloud, questions like “How secure are cloud services?” arise. While there isn’t a standout number one concern when it comes to cloud computing, the one thing we can be sure about is that security is front and center in CIOs’ minds. Veeam’s recent 2016 customer survey identified the top two concerns as security and price.
Inevitably, the cloud has come with new challenges, and we’ll be exploring them all in this cloud challenges blog series. It has also come with some genuine security risks, but as we will uncover, cloud backup security has more to do with how you implement it than with the cloud itself. With cloud, security has to be the top priority. The flexibility and scalability you get from the cloud should not mean sacrificing any security at all.
What are the most important cloud backup security risks?
Stolen authentication/credentials
Attacks on data most often happen due to weak passwords or poor key and certificate management. Issues tend to arise as accounts and permission levels multiply across systems, and this is where good credential management systems and practices can really help.
One-time generated passwords, phone-based authentication and other multifactor authentication systems make it difficult for attackers to gain access to protected data, because they need more than just one credential in order to log in.
Data breaches
Data breaches can be disastrous for organizations. Not only does a breach violate the trust of customers by allowing data to be leaked, it also opens the organization up to fines, lawsuits and even criminal indictments. The brand tarnishing and loss of business from such an event can leave a business with a long road to recovery at best.
Despite the fact that cloud service providers typically do offer security methods to protect tenants’ environments, ultimately you – the IT professional – are responsible for protecting your organization’s data. To guard against even the possibility of a breach, you need to become a fan of encryption. If you use the cloud for storage, experts agree data should be encrypted at no less than 256-bit AES (Advanced Encryption Standard) before it leaves your network. The data should be encrypted a second time while in transit to the cloud, and a third time while at rest in the cloud. It is important to do your research and inquire into the encryption used by the application, and by the service provider when the data is at rest, to ensure safe and secure cloud backups.
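To make “encrypt it before it leaves your network” concrete, here’s a minimal PowerShell/.NET sketch of AES-256 with a password-derived key. The file path, password and iteration count are placeholders, and real backup products add integrity checks and proper key management on top of this:
# Derive a 256-bit key from a password, then encrypt a file locally
$plain = [IO.File]::ReadAllBytes('C:\backups\data.bak')
$salt  = New-Object byte[] 16
[Security.Cryptography.RandomNumberGenerator]::Create().GetBytes($salt)
$kdf = New-Object Security.Cryptography.Rfc2898DeriveBytes('P@ssw0rd!', $salt, 100000)
$aes = [Security.Cryptography.Aes]::Create()
$aes.KeySize = 256
$aes.Key = $kdf.GetBytes(32)   # 32 bytes = a 256-bit AES key
$aes.GenerateIV()
$cipher = $aes.CreateEncryptor().TransformFinalBlock($plain, 0, $plain.Length)
# The salt and IV aren't secret, but both are needed again to decrypt
[IO.File]::WriteAllBytes('C:\backups\data.bak.enc', [byte[]]($salt + $aes.IV + $cipher))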
Lack of due diligence
A key reason that moving data to the cloud fails, becomes vulnerable, or worse, suffers an attack or loss, is poor planning and implementation. To successfully implement a cloud backup or disaster recovery strategy, careful and deliberate planning should take place. This should first involve considering and understanding all of the risks, vulnerabilities and potential threats that exist. Second, it should involve understanding what countermeasures need to be taken to ensure secure restore or recovery of backups and replicas, such as ensuring your network is secure or restricting access to key infrastructure. Due diligence in approaching the cloud should also involve an alignment of your IT staff, the service provider and the technologies and environment being leveraged. The service provider must integrate seamlessly with the cloud backup and recovery software you plan to use for optimal security and performance of your virtualized environment.
Multi-tenant environment
Service providers offer cost-effectiveness and operations efficiencies by providing their customers with the option of shared resources. In choosing a service that is shared, it’s essential that the risks are understood. Ensuring that each tenant is completely isolated from other tenant environments is key to a multi-tenant platform. Multi-tenant platforms should have segregated networks, only allow privileged access and have multiple layers of security in the compute and networking stacks.
Service provider trust and reliability
The idea of moving data offsite into a multi-tenant environment where a third party manages the infrastructure can give even the boldest IT professionals some anxiety, much of it rooted in a perceived loss of control over cloud backup security. To combat this, it is essential to choose a service provider you trust who is able to ease any security doubts. There are a variety of compliance standards a provider can obtain, such as ISO 9001, SOC 2 and SSAE 16, and it’s important to take note of these as you search for a provider. In addition to standards, look for a service provider with a proven track record of reliability – there are plenty of online tools that report on provider network uptime. Physical control of the virtual environment is also paramount: seek a secure data center, ideally with on-site 24/7 security and mantraps with multi-layered access authentication.
So, is the cloud secure?
Yes, the cloud is secure, but only as secure as you make it. From the planning and processes in place to the underlying technology and capabilities of your cloud backup and recovery service, all of these elements combine to determine your success. It is up to you to work with your choice of service provider to ensure the security of your data when moving to cloud backups or DRaaS. Another critical aspect is partnering with a data management company experienced in securely moving and storing protected data in the cloud.
Veeam and security
We provide flexibility in how, when and where you secure your data for maximum security matched with performance. With AES 256-bit encryption, you have the ability to secure your data at all times: during a backup, before it leaves your network perimeter; during movement between components (e.g., proxy-to-repository traffic); when data must stay unencrypted at the target; and while your backup data is at rest in its final destination (e.g., disk, tape or cloud). It is also well suited to sending encrypted backups offsite using Backup Copy jobs with WAN Acceleration.
You have a choice over when and where you encrypt backups. For example, you can leave local Veeam backups unencrypted for faster backup and restore performance, but encrypt backups that are copied to an offsite target, tape or the cloud. You can also protect different backups with different passwords, while actual encryption keys are generated randomly within each session for added backup encryption security.
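The “keys generated randomly within each session” design is, in general terms, envelope encryption: the bulk data is encrypted with a fresh random key, and only that small key is protected by your password. Here’s a generic sketch of the pattern (not Veeam’s actual on-disk format; the password is a placeholder):
# 1. Generate a fresh random 256-bit data key for this session
$rng = [Security.Cryptography.RandomNumberGenerator]::Create()
$dataKey = New-Object byte[] 32
$rng.GetBytes($dataKey)
# 2. Encrypt the backup payload with $dataKey (as in the earlier sketch)
# 3. Wrap the data key with a password-derived key and store it in the header
$salt = New-Object byte[] 16
$rng.GetBytes($salt)
$kek = (New-Object Security.Cryptography.Rfc2898DeriveBytes('P@ssw0rd!', $salt, 100000)).GetBytes(32)
$aes = [Security.Cryptography.Aes]::Create()
$aes.Key = $kek
$aes.GenerateIV()
$wrapped = $aes.CreateEncryptor().TransformFinalBlock($dataKey, 0, $dataKey.Length)
# Changing the password later only means re-wrapping this small key,
# not re-encrypting the backup data itself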
This article was provided by our service partner, Veeam.
Wireless authentication with usernames and 802.1x
If you’re at all interested in keeping your network and data secure, it’s necessary to implement 802.1x. This authentication standard has a few significant benefits over the typical shared wireless network password used by many companies, starting with the fact that each user authenticates with an individual username and credentials rather than a shared key.
All enterprise network hardware supports 802.1x, and NetCal can implement it to keep your network flexible, fast and secure.
5 Must-Have Features of Your Remote Monitoring Services
As a managed service provider (MSP), you have a desire to take your business to the next level. The MSPs that are successful in this endeavor have a key ingredient in common: they are armed with the right tools for growth. The most critical tool for success in this business is a powerful remote monitoring and management (RMM) solution.
So the question is, what should you be looking for when you purchase an RMM tool, and why are those features important to your business?
The right RMM tool impacts your business success with five key benefits. With a powerful and feature-rich RMM solution in place, you can:
Automate any IT process or task
Work on multiple machines at once
Solve issues without interrupting clients
Integrate smoothly into a professional services automation (PSA) tool
Manage everything from one control center
To better understand why these features are so influential, let’s talk a little more about each of them.
Automate Any IT Process or Task
Imagine being able to determine a potential incident before your client feels the pain and fix it in advance to avoid that negative business impact. Being able to automate any IT process gives you the proactive service model you need to keep your clients happy for the long haul.
Work on Multiple Machines at Once
To solve complex issues, an MSP must be able to work on all the machines that make up a system. If you are attempting to navigate this maze via a series of webpages, it is hard to keep up with progress and easy to miss a critical item during diagnosis. Having the ability to work on multiple machines at once is paramount to developing your business model and maximizing your returns.
Solve Issues Without Interrupting Clients
One of the biggest challenges that MSPs face is fixing issues without impacting their clients’ ability to work. With the wrong tools in place, the solution can be nearly as disruptive as the issue it’s meant to fix. The right tool must allow a technician to connect behind the scenes, troubleshoot and remediate the problem without impacting the client’s ability to work.
Integrate Smoothly Into a PSA Tool
Two-way integration between your RMM and PSA solutions eliminates bottlenecks and allows data to flow smoothly between the tools. The goal of integration is to enable you to respond more quickly to client needs as well as capture and store historical information that leads to easier root cause analysis.
A solid integration will also increase sales by turning data into actionable items that result in quotes and add-on solutions. The key areas to examine when looking at how a PSA and RMM integrate are:
Capturing billable time
Assigning incidents based on device and technician
Scheduling and automating tasks
Identifying and managing sales opportunities
Managing and reporting on client configuration information
A solid integration into a PSA will create an end-to-end unified solution to help you more effectively run your IT business.
Manage Everything from One Control Center
The control center for your RMM solution should be the cockpit for your service delivery. Having the ability to manage aspects that are directly related to service delivery such as backup and antivirus from the same control center keeps your technicians working within a familiar environment and speeds service delivery. Also, it cuts down on associated training costs by limiting their activities to the things that matter on a day-to-day basis.
Success means equipping your business with the right features and functionality to save your technicians time while increasing your revenue and profit margins. Selecting an RMM solution that delivers these five key features is the key to getting started down the path to success. What are you waiting for?
This article was provided by our service partner, LabTech.
Network Management: Is SNMP here forever?
The first SNMP release came out in 1988. 28 years later, SNMP is still around and still a go-to network management tool. Will this still be the case 10 years from now? Difficult to say, but the odds are lower these days. Why are we predicting SNMP could go away?
If you’re already savvy about SNMP, check out this blog for insight into current SNMP limitations and why we are making this prediction.
SNMP stands for Simple Network Management Protocol. It was introduced to meet the growing need for managing IP devices in a standard way. SNMP provides its users with a “simple” set of operations that allows these devices to be managed remotely. SNMP was designed to make it simple for the NMS to request and consume data. But those same data models and operations make it difficult for routers to scale to the needs of today’s networks. To understand this, you first need to understand the fundamentals of SNMP.
For example, you can use SNMP to shut down an interface on your router or check the speed at which your Ethernet interface is operating. SNMP can even monitor the temperature on your router and warn you when it is getting too high.
The overall architecture is rather simple – there are essentially 2 main components (see Figure 1):
The NMS (Network Management Station), which is responsible for polling and receiving traps from agents in the network
The agent, which runs on each managed device, answers the NMS’s requests and sends traps when notable events occur
How is information actually structured on network devices? A Management Information Base (MIB) is present on every network device. This can be thought of as a database of objects that the agent tracks. Any piece of information that can be accessed by the NMS is defined in a MIB.
Managed objects are organized into a tree-like hierarchy, as described in Figure 2:
The directory branch is actually not used. The management branch (mgmt) defines a standard set of objects that every network device needs to support. The experimental branch is for research purposes only, and finally the private branch is for vendors to define objects specific to their devices.
Each managed object is uniquely identified by a name: an OID (Object Identifier). An OID consists of a series of integers based on the nodes in the tree, separated by dots (.).
Under the mgmt branch sits MIB-II, an important MIB for TCP/IP networks. It is defined in RFC 1213, and you can see an extract in Figure 3.
With that in mind, the OID for accessing information related to interfaces is 1.3.6.1.2.1.2, and for information related to the system it is 1.3.6.1.2.1.1.
Finally, there are 2 main SNMP request types to retrieve information.
GET request – request a single value by its Object Identifier (see Figure 4)
GET-NEXT request – request a single value that is next in the lexical order from the requested Object Identifier (see Figure 5) – example commands for both are shown below
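For illustration, here’s what those two operations look like with the standard net-snmp command-line tools, assuming an SNMPv2c agent with the “public” community string and a hypothetical hostname:
# GET: fetch sysDescr.0 from the system group (1.3.6.1.2.1.1)
snmpget -v2c -c public router.example.com 1.3.6.1.2.1.1.1.0
# GET-NEXT: return the next object in lexical order after the given OID
snmpgetnext -v2c -c public router.example.com 1.3.6.1.2.1.1.1.0
Chaining GET-NEXT requests is how an NMS walks an entire subtree without knowing its contents in advance, which is exactly what the snmpwalk tool automates.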
This is a repost of a blog by one of our service partners, Cisco.