
5 Trends in Enterprise Cloud Services

Even though the future of IT belongs to the cloud, much of the enterprise world is still clinging to legacy systems. In fact, 90 percent of workloads today still run outside the public cloud. With this continued resistance to cloud adoption at the enterprise level, today’s “cloud evangelists” are playing a more important role in the industry than ever before.

The role of a cloud evangelist sits somewhere between the duties of a product marketer and the company’s direct link to customers. These individuals are responsible for spreading the doctrine of cloud computing and convincing reluctant IT admins to make the jump to the cloud. It’s a dynamic role and nowhere is this more apparent than with the talented cloud evangelists at NTT Communications.

In a recent interview published by NTT Communications, two of the company’s leading cloud evangelists, Masayuki Hayashi and Wataru Katsurashima, sat down to chat about the current challenges slowing enterprise cloud migration and what companies can do to help mitigate those challenges. In this post, we take a look at the five areas that are top-of-mind for cloud evangelists today.

The Changing Role of Cloud Evangelists

While the role itself may be new, it is not insulated from change. When it was first established, cloud evangelists were responsible for ferrying customers through every stage of cloud adoption. From preliminary fact finding to architecting the final network, cloud evangelists played a prominent role.

Today, that hands-on approach is quickly changing. As Hayashi states in the article, “Evangelists have traditionally played the ‘forward’ position, but recently we are more like ‘midfielders’ who focus on passing the ball to others.” Rather than managing every task involved, evangelists are placing more focus on improving processes and strengthening organizational knowledge.

Slow Migration of On-Premise Systems to the Cloud

The pace of enterprise migration from on-premise datacenters to the cloud has been less than stellar in recent years. Cloud migrations have remained relatively flat year-over-year, largely due to the complexities involved. Enterprises are having difficulty managing their existing infrastructure while simultaneously bringing new cloud-based services online.

For evangelists, it’s important to consider this added complexity when proposing a migration plan. NTT Communications Chief Evangelist Wataru Katsurashima recommends finding specific solutions that can accommodate the unique needs of enterprise cloud migration such as hybrid cloud infrastructures that function like a single, public cloud environment.

High Migration Costs are Slowing Adoption

The added complexity of enterprise cloud migration also directly affects its price, pushing it beyond the reach of some companies. The expensive price tag is forcing many enterprises to rethink their decision to migrate to the cloud, opting instead to delay the migration another year or to execute a much slower, gradual migration.

One discussed solution to this high cost is using existing services within the VMware vCloud® Air™ Network. According to Katsurashima, leveraging technology from NTT Communications and VMware in a synergistic way can dramatically cut costs through efficiencies and better utilization.

New Hybrid Cloud Structures on the Horizon

Like the role of cloud evangelists, the hybrid cloud, too, is changing. Hayashi explains:

“In the past, a hybrid cloud was similar to a Japanese hot spring inn, with a single hallway that leads from the hot spring to each guest room. NTT Communications aims to achieve a “modern hotel” model. In other words, there is one front desk and a variety of rooms with different purposes, such as extended-stay rooms for VIPs and rooms for business meetings.”

The hotel analogy provides us with a way to visualize the new features and capabilities that hybrid cloud environments must deliver. Since no single cloud service provider can be everything to every customer, establishing greater levels of collaboration and cross-pollination between service providers is critical to success.


Redefining the Role of Information System Departments

The days of information system departments only responding to the demands of individual business departments are over. The IT teams that do not help business departments innovate from the inside-out will become increasingly obsolete.

 


This article was provided by our service partner: VMware.


Understanding Cyberattacks from WannaCrypt

Cyberattacks are growing more sophisticated, and more common, with every passing day. They present a continuous risk to the security of critical data, and create an added layer of complication for technology solution providers and in-house IT departments trying to keep their clients’ systems safe and functional. Despite industry efforts and innovations, the world was hit by this year’s latest cyberattack, ‘WannaCrypt’, on Friday morning.

The attack, stemming from “WannaCrypt” software, started in the United Kingdom and Spain and quickly spread globally, encrypting endpoint data and demanding a ransom paid in Bitcoin to regain access to the data. The WannaCrypt exploits used in the attack were derived from exploits stolen in an attack on the National Security Agency earlier this year.

On March 14, Microsoft® released a security update to patch this vulnerability and protect customers from this quickly spreading cyberattack. While this protected newer Windows™ systems and computers that had Windows Update enabled, many computers worldwide remained unpatched. Hospitals, businesses, governments, and home computers were affected. Microsoft took additional steps to assist users with older systems that are no longer supported.

Our goal at ConnectWise is to provide partners with the tools they need to support their clients and prevent these kinds of attacks from happening. In addition to our core ConnectWise Automate™ solution, which can detect out-of-date systems and alert admins, we have partnered with numerous vendors who provide ConnectWise certified integrations for security and business availability to block, prevent, or recover from this attack.

See how our vendors are addressing the latest attack:

-> Bitdefender
-> ESET
-> Webroot
-> Malwarebytes
-> VIPRE
-> Acronis
-> StorageCraft


Choose the best server RAM configuration

Watch your machine memory configurations – always be careful to implement the best server RAM configuration! You can’t just throw RAM at a physical server and expect it to work the best it possibly can. Depending on your DIMM configuration, you might unwittingly slow down your memory speed, which will ultimately slow down your application servers. This speed decrease is virtually undetectable at the OS level, but anything that leverages lots of RAM to function, such as an application server or a database server, can take a substantial performance hit.

For example, suppose you wish to configure 384GB of RAM in a new server that has 24 memory slots. You could populate every slot with a 16GB stick of memory to reach the 384GB total. Or, you could spend a bit more money on 32GB sticks and fill only half of the memory slots. Either way you end up with the same amount of RAM; the higher-density sticks simply carry a slightly higher price tag than the relatively cheaper smaller sticks.

In this example, the fully populated 16GB DIMM configuration runs the memory 22% slower than the higher-density option: with all 24 slots filled with 16GB sticks, the memory runs at 1866 MHz, while 32GB sticks in half the slots run at 2400 MHz.
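The trade-off can be sanity-checked with quick arithmetic. A small sketch using the figures from this example; actual DIMM speeds depend on the specific CPU and motherboard:

```python
def effective_config(dimm_count, dimm_gb, speed_mhz):
    """Return (total capacity in GB, configured memory speed in MHz)."""
    return dimm_count * dimm_gb, speed_mhz

# Figures from this example; actual speeds vary by platform.
cap_16, speed_16 = effective_config(24, 16, 1866)  # all 24 slots, 16GB DIMMs
cap_32, speed_32 = effective_config(12, 32, 2400)  # half the slots, 32GB DIMMs

assert cap_16 == cap_32 == 384  # same total RAM either way

penalty = 1 - speed_16 / speed_32
print(f"16GB-DIMM config runs {penalty:.0%} slower")  # -> 22% slower
```

The same two-line check is worth running against the memory population rules for any new server before the purchase order goes out.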

Database servers, both physical and virtual, use memory as an I/O cache, improving the performance of the database engine by reducing the dependency on slower storage and leveraging the speed of RAM to boost performance. If the memory is slower, your databases will perform worse. Validate your memory speed on your servers, both now and for upcoming hardware purchases. Ensure that your memory configuration yields the fastest possible performance – implement the best server RAM configuration -your applications will be better for it!


Healthcare industry embraces Cisco Umbrella

Healthcare industry expenditures on cloud computing are projected to grow at a compound annual rate of more than 20% through 2020. The industry has quickly transitioned from being hesitant about the cloud to embracing the technology for its overwhelming benefits.

George Washington University, a world-renowned research university, turned to Cisco Umbrella to protect its most important asset: its global reputation as a research leader.

“We chose Cisco Umbrella because it offered a really high level of protection for our various different user bases, with a really low level of interaction required to implement the solution, so we could start blocking attacks and begin saving IR analyst time immediately,” said Mike Glyer, Director, Enterprise Security & Architecture.

 

Customers love Umbrella because it is a cloud-delivered platform that protects users both on and off the network. It stops threats over all ports and protocols for the most comprehensive coverage. Plus, Umbrella’s powerful, effective security does not require the typical operational complexity. By performing everything in the cloud, there is no hardware to install, and no software to manually update. The service is a scalable solution for large healthcare organizations with multiple locations, like The University of Kansas Hospital, ranked among the nation’s best hospitals every year since 2007 by U.S. News & World Report.

“Like every hospital, we prioritize the protection of sensitive patient data against malware and other threats. We have to safeguard all network-connected medical devices, as a compromise could literally result in a life-or-death situation,” says hospital Infrastructure Security Manager Henry Duong. “Unlike non-academic hospitals, however, our entwinement with medical school and research facility networks means we must also protect a lot of sensitive research data and intellectual property.”

Like many healthcare providers, The University of Kansas Hospital would spend a lot of time combing through gigabytes of logs, trying to trace infections to their point of origin and identify which machines were calling out. The team turned to Cisco Umbrella for help.

“First we just pointed our external DNS requests to Cisco Umbrella’s global network, which netted enough information to prompt an instant ‘Wow, we have to have this!’ response,” Duong says. “When our Umbrella trial began, we saw an immediate return, which I was able to document using Umbrella reporting and share with executive stakeholders. Those numbers, which ultimately led to executive buy-in, spoke volumes about the instant effect Umbrella had on our network.”
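DNS-layer protection of this kind works by answering queries for known-bad domains with a block page instead of the real address. The sketch below is a hypothetical illustration of that idea, not Cisco Umbrella’s actual behavior; the domain names and the documentation-range IP are invented:

```python
# Hypothetical sketch of DNS-layer filtering: a blocked domain resolves to a
# sinkhole/block-page address instead of its real one.
BLOCK_PAGE_IPS = {"203.0.113.10"}  # documentation-range IP standing in for a block page

def is_blocked(domain, resolve):
    """Return True if the filtering resolver answered with a block-page IP."""
    return resolve(domain) in BLOCK_PAGE_IPS

# A stub resolver standing in for real DNS lookups:
fake_dns = {"malware.example": "203.0.113.10", "intranet.example": "198.51.100.7"}
resolve = fake_dns.get

print(is_blocked("malware.example", resolve))   # -> True
print(is_blocked("intranet.example", resolve))  # -> False
```

Because the decision happens at resolution time, requests to bad domains are stopped before any connection is made, which is what makes the reporting Duong describes possible.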

This overwhelming success led the team to later purchase Umbrella Investigate.

“We suddenly went from struggling to track attacks to being able to correlate users with events and trace every click of their online travels. Then, Cisco Umbrella Investigate gave us the power to understand each threat’s entire story from start to finish,” Duong says. “We’re able to dig deep into the analysis to see what users are doing, where they’re going, and pinpoint any contributing behaviors so we can mitigate most efficiently.”

The University of Kansas Hospital estimates that with Cisco Umbrella, it has:

  • Decreased threats by an estimated 99 percent
  • Shortened investigation time by 75 percent
  • Increased visibility and automation while reducing exposure to ransomware

This article was provided by our service partner: Cisco.


Improve your disaster recovery reliability with Veeam

The only two certainties in life are death and taxes. In IT, you can add disasters to this short list of life’s universal anxieties. Ensuring disaster recovery reliability is critical to your organisation’s enduring viability in its chosen marketplace.

Regardless of the size of your budget, people power and level of IT acumen, you will experience application downtime at some point. Amazon’s recent east coast outage is testimony to the fact that even the best and brightest occasionally stumble.

The irony is that while many organizations make significant investments in their disaster recovery (DR) capabilities, most have a mixed track record, at best, with meeting their recovery service level agreements (SLAs). As this chart from ESG illustrates, only 65% of business continuity (BC) and DR tests are deemed successful.

[Chart: ESG survey of business continuity and disaster recovery test success rates]

In his report, “The Evolving Business Continuity and Disaster Recovery Landscape,” Jason Buffington broke down respondents to his DR survey into two camps: “green check markers” and “red x’ers.”

Citing his research, Jason recently shared with me: “Green Checkers assuredly don’t test as thoroughly, thus resulting in a higher passing rate during tests, but failures when they need it most — whereas Red X’ers are likely to get a lower passing rate (because they are intentionally looking for what can be improved), thereby assuring a more likely successful recovery when it really matters. One of the reasons for lighter testing is seeking the easy route — the other is the cumbersomeness of testing. If it wasn’t cumbersome, most of us would likely test more.”

DR testing can indeed be cumbersome. In addition to being time consuming, it can also be costly and fraught with risk. The risk of inadvertently taking down a production system during a DR drill is incentive enough to keep testing to a minimum.

But what if there was a cost-effective way to do DR testing that mitigates risk and dramatically reduces the preparation work and the time required to test the recoverability of critical application services?

By taking the risk, cost and hassle out of testing application recoverability, Veeam’s On-Demand Sandbox for Storage Snapshots feature is a great way for organizations to leverage their existing investments in NetApp, Nimble Storage, Dell EMC and Hewlett Packard Enterprise (HPE) Storage to attain the following three business benefits:

  1. Risk mitigation: Many IT decision makers have expressed concerns around their ability to meet end-user SLAs. By enabling organizations to rapidly spin-up virtual test labs that are completely isolated from production, businesses can safely test their application recoverability and proactively address any of their DR vulnerabilities.
  2. Improved ROI: In addition to on-demand DR testing, Veeam can also be utilized to instantly stand-up test/dev environments on a near real-time copy of production data to help accelerate application development cycles. This helps to improve time-to-market while delivering a higher return on your storage investments.
  3. Maintain compliance: Veeam’s integration with modern storage enables organizations to achieve recovery time and point objectives (RTPO) of under 15 minutes for all applications and data. Imagine showing your IT auditor in real-time how quickly you can recover critical business services. For many firms, this capability alone would pay for itself many times over.
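A recovery point objective like the sub-15-minute RTPO mentioned above can be verified continuously with a trivial check against the timestamp of the newest restore point. A minimal sketch; the timestamps are illustrative, and real tooling would pull them from backup job metadata:

```python
from datetime import datetime, timedelta

RPO = timedelta(minutes=15)

def meets_rpo(last_backup, now, rpo=RPO):
    """True if the newest restore point is recent enough to satisfy the RPO."""
    return now - last_backup <= rpo

now = datetime(2017, 6, 1, 12, 0)
print(meets_rpo(datetime(2017, 6, 1, 11, 50), now))  # -> True  (10 minutes old)
print(meets_rpo(datetime(2017, 6, 1, 11, 30), now))  # -> False (30 minutes old)
```

Wiring a check like this into monitoring turns the compliance claim into something you can show an auditor on demand rather than once a year.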

Back when I was in school, 65% was considered a passing grade. In the business world, a 65% DR success rate is flirting with disaster. DR proficiency may require lots of practice, but it also requires Availability software, like Veeam’s, that works hand-in-glove with your storage infrastructure to make application recoveries simpler, more predictable and less risky.


This article was provided by our service partner: Veeam.


Veeam: Ransomware resiliency – the endpoint is a great place to start

Fighting ransomware has become a part of doing business today. Technology professionals around the world advocate many ways to stay resilient. The most effective method is end-user training on how to handle attachments and Internet connectivity. Another area to examine is the most common endpoint devices: laptops and PCs.

Veeam has taken ransomware resiliency seriously for a while. We’ve put out a number of posts such as early tips for some of the first attacks and some practical tips when using Veeam Backup & Replication. Now with Veeam Agent for Linux and Veeam Endpoint Backup FREE available as well as Veeam Agent for Microsoft Windows (coming VERY soon) as options for laptops and PCs, it’s time to take ransomware resiliency seriously on these devices.

Before I go too far, it’s important to note that ransomware can exist on both Windows and Linux systems. Additionally, ransomware is not just a PC problem (see our recent survey blog post); at Veeam we see it nearly every day in technical support for virtual machines. More content is coming on the virtual machine side of resiliency; in this post I’ll focus on PCs and laptops.

Veeam Agent for Linux is the newest product with which Veeam offers image-based Availability for non-virtualized systems. It is a great way to back up many different Linux systems through a very intuitive user interface:

[Screenshot: Veeam Agent for Linux user interface]

For ransomware resiliency with Veeam Agent for Linux, putting backups on a different file system is very easy thanks to seamless integration with the Veeam Availability Suite. Backups of Veeam Agent for Linux systems can be placed in Veeam Backup & Replication repositories and used in the Backup Copy Job function. This way, Linux backups can be placed on different file systems, preventing ransomware from propagating from the source Linux system to its backups. The Backup Copy Job for Veeam Agent for Linux is shown below writing Linux backups to a Windows Server 2016 ReFS backup repository:

[Screenshot: Backup Copy Job configuration writing Linux backups to a Windows Server 2016 ReFS repository]

Now, let’s talk about Microsoft operating systems and resiliency against ransomware when it comes to backups. Veeam Endpoint Backup FREE will soon be renamed Veeam Agent for Microsoft Windows; let me briefly explain the change. Veeam Endpoint Backup FREE was announced at VeeamON in 2014 and has since been downloaded over 1,000,000 times. From the start, it has provided backup Availability for desktop and server-class Windows operating systems, but without application-aware image processing or a technical support service. Veeam Agent for Microsoft Windows will introduce these key capabilities as well as many more.
With Veeam Agent for Microsoft Windows, you can also send backups to several different storage options: NAS systems, removable storage, a Linux path, tape media, a deduplication appliance (when integrated with the Veeam Availability Suite) and more. Removable storage is of particular interest, as it may be the only realistic option for many PC or laptop systems. A while ago, Veeam implemented a feature to eject removable media at the completion of a backup job. This option is available in the job’s scheduling settings when the backup target is removable media, as shown below:

[Screenshot: Veeam backup job scheduling options with the eject-media setting]

This simple option can indeed make a big difference. We even had a user share a situation where ransomware encrypted their backups. This underscores the need for completely offline backups, or at least some form of “air gap” between backup data and production systems. Assume that once ransomware is inside your organization, the only real solution is to restore from backup after the infection is contained. There is a whole practice of inbound detection and prevention, but if ransomware gets in, backup is your only option. Automatically ejecting media is another mechanism that keeps backup storage offline, giving even isolated PCs and laptops more Availability.
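The air-gap idea can also be verified programmatically: after the job ejects the media, confirm the backup target no longer appears in the mount table. A minimal sketch with illustrative mount lists; on Linux the real list could be read from /proc/mounts:

```python
# Sketch: verify an "air gap" by confirming the removable backup target is no
# longer present in the mount table after the job ejects it.
def is_air_gapped(backup_mountpoint, mounted_paths):
    """True if the backup target is not currently mounted (i.e. offline)."""
    return backup_mountpoint not in mounted_paths

# Illustrative mount lists before and after the post-job eject:
mounts_during_job = ["/", "/boot", "/media/backup-usb"]
mounts_after_eject = ["/", "/boot"]

print(is_air_gapped("/media/backup-usb", mounts_during_job))   # -> False
print(is_air_gapped("/media/backup-usb", mounts_after_eject))  # -> True
```

A check like this, run after each job, catches the case where an eject silently fails and the backup target stays exposed to anything that compromises the host.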
Availability in the ransomware era is a never-ending practice of diligence and configuration review. Additionally, the arsenal of threats will always become more sophisticated to meet our new defenses.


This post was provided by our service partner: Veeam.


Cyber Security: Cyber-Threat Trends to Watch for in 2017

Faced with the volume and rapid evolution of cyber threats these days, technology solution providers (TSPs) may find offering cyber security to be a daunting task. But with the right knowledge to inform your security decisions, and the right solutions and mitigation strategies in place, organizations like yours can keep customers ahead of the rushing malware tide.

The Webroot team recently released the latest edition of their annual Threat Report, which gives crucial insight into the latest threat developments based on trends observed over the last year, the challenges they bring, and how to defeat them. Let’s review 2016’s Threat Report highlights.

The New Norm: Polymorphism

In the last few years, the biggest trend in malware and potentially unwanted applications (PUAs) observed by Webroot has been polymorphic executables. Polymorphic spyware, adware, and other attacks are generated by attackers so that each instance is unique in an effort to defeat traditional defense strategies.

Traditional cyber security relies on signatures, which detect one instance of malware delivered to a large number of people. That approach is virtually useless against a million unique malware instances, each delivered to a single victim. Signature-based approaches will never be fast enough to prevent polymorphic breaches.
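The weakness of signature matching against polymorphism is easy to demonstrate: changing even a single byte of a payload yields a completely different hash, so a signature derived from one instance never matches the next. (Harmless stand-in bytes below, not real malware.)

```python
import hashlib

payload_a = b"...same malicious logic..." + b"\x00"
payload_b = b"...same malicious logic..." + b"\x01"  # one-byte polymorphic tweak

sig_a = hashlib.sha256(payload_a).hexdigest()
sig_b = hashlib.sha256(payload_b).hexdigest()

# Behaviorally identical payloads, yet their signatures do not match at all:
print(sig_a == sig_b)  # -> False
```

This is why behavior-based and reputation-based detection have displaced pure signature matching against polymorphic families.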

During 2016, approximately 94% of the malware and PUA executables observed by Webroot were seen only once, demonstrating how prevalent polymorphism is. Notably, however, the volume of new malware and PUA executables has dropped significantly over the past three years, declining 23% and 81%, respectively.

While this decline in the volume of new malware encountered by Webroot customers is a decidedly positive trend, TSPs and their customers should continue to treat malware as a major threat. Approximately one in every 40 new executable file instances observed in 2016 was malware. These types of files are customized and often designed to target individuals, and cannot be stopped by traditional antimalware technologies.

Ransomware Continues to Rise

You’ve probably heard about at least one of the numerous ransomware attacks that have crippled hospitals and other institutions. According to the FBI, cyber criminals were expected to collect over $1 billion in ransoms during 2016.[1] It’s quite likely that actual losses suffered were even higher, given the disruption of productivity and business continuity, as well as a general reluctance to report successful ransomware attacks.

In 2017, Webroot anticipates that ransomware will become an even larger problem. According to the Webroot cyber security Threat Research team, the following are 3 ransomware trends to be aware of:

Locky, the most successful ransomware of 2016

In its first week in February 2016, Locky infected over 400,000 victims, and has been estimated to have earned over $1 million a day since then.[2] Throughout 2016, Locky evolved not only to use a wide variety of delivery methods, but also to camouflage itself to avoid detection and to make analysis more difficult for security researchers. Locky shows no signs of slowing down, and is likely to be equally prolific in the coming year.

Exploit Kits

The second important trend involves the frequent changes in the exploit kits ransomware authors use. As an example, most exploit kit ransomware in the first half of 2016 was distributed using Angler or Neutrino. By early June, Angler-based ransomware had virtually disappeared, as cybercriminals began switching to Neutrino. A few months later, Neutrino also disappeared. Toward the end of 2016, the most commonly used exploit kits were variants of Sundown and RIG, most of which support Locky.

Ransomware as a Service (RaaS)

Despite having emerged in 2015, ransomware-as-a-service (RaaS) didn’t find its place in the ransomware world until 2016. RaaS enables cybercriminals who have neither the resources nor the know-how to create their own ransomware to easily generate custom attacks. The original authors of the RaaS variant being used get a cut of any paid ransoms. RaaS functions similarly to legitimate software, with frequent updates and utilities to help distributors get the most out of their service. The availability and ease of RaaS likely means even greater growth in ransomware incidents.

Stay Informed

The best defense is knowing your enemy. Download the complete 2017 Webroot Threat Report to get in-depth information on the trends we’ve explored above, as well as other crucial insights into phishing, URL, and mobile threats.

[1] http://money.cnn.com/2016/04/15/technology/ransomware-cyber-security/index.html

[2] http://www.smartdatacollective.com/david-balaban/412688/locky-ransomware-statistics-geos-targeted-amounts-paid-spread-volumes-and-much-

The information above was provided by our service partner: Webroot.


Considerations when Picking a Managed VoIP PBX

Not all things are created equal, and when considering a new phone system for your business, keep in mind that not all cloud-based managed VoIP providers are the same. Before you sign a contract, understand that there can be huge differences among hosted VoIP providers.

Features – What features are most important to your business? Does your business need an auto attendant, voicemail sent to email, or mobile twinning (ringing a cell phone and desk phone at the same time)? Does the receptionist want to see who is on the phone? How about “hot desking,” the ability to log into anyone’s phone and have it appear as your own? This feature works great for medical practices that have rotating staff working the front desk. Don’t forget to ask office workers what features they could use.

Equipment – What about the brand of phones used? Is the equipment proprietary, or can it be used with other managed VoIP service providers? Should you purchase the equipment or rent each handset, and what are the advantages of each? Make sure you are getting quality VoIP phones from a quality manufacturer; the last thing you want is to find out the phones you bought are poorly made. Does each user on the system need a fancy phone with lots of features? Most employees only use two or three. Do you really need a cool-looking conference room phone, or will a basic handset do the trick? Many newer phones have excellent speakerphones, so a basic handset may work fine. A good provider should be able to offer multiple phone options as your business grows and expands.

Pricing – Many providers offer confusing or differing pricing options. Some offer unlimited plans that are simple to understand but make you pay for features you don’t need. Another consideration is whether to rent or buy phones. For some customers it makes sense to buy, but what happens when a phone breaks: who is responsible? The cost of renting phones has dropped dramatically, but pricing and features vary greatly. Make sure you understand how the company’s long-distance calling is priced; contrary to what many believe, hosted VoIP is not free phone service.

Call Quality – This is where customers get burned: poor VoIP call quality leads to disappointment. It is important to know the difference between BYOB (bring your own bandwidth) and “managed VoIP” delivered over a private MPLS data network. Some carriers provide an extra layer of call quality by using a managed router. Make sure you know the difference between managed and unmanaged services; it can make a big difference in call quality.

Vendor Experience – This is one of the most important considerations when evaluating a managed VoIP phone system. VoIP (Voice over Internet Protocol) has been around for many years, and many service providers are now selling hosted VoIP via the internet, out of car trunks, basements and garages. It would be disastrous for a business if its phone company went out of business while holding control of its phone numbers. Make sure you find out how long the hosted PBX provider has been in business, how many customers they support and the types of customers they serve.

Disaster Recovery – It is very important to understand the provider’s network and how many POPs (points of presence) they own and manage. Does the hosted PBX provider have built-in intelligence that can determine when a business’s on-site phones stop working and re-route calls to different numbers? How many network operations centers does the provider have: east and west coast only?

Summary – A managed VoIP PBX offers advanced features previously available only to much larger businesses, all at a great value. Compared to traditional solutions, hosted or cloud PBX phone service offers no-hassle service without ongoing maintenance, service contracts, costly hardware or onsite trip charges. While a hosted PBX offers customers ease of management, an onsite or premise-based PBX can still be a more cost-efficient solution.


Report Uncovers Cloud Security Concerns and Lack of Security Expertise Slows Cloud Adoption

Crowd Research Partners released the results of its 2017 Cloud Security Report on March 28, 2017, revealing that security concerns, a lack of qualified security staff and outdated security tools remain the top issues keeping cyber security professionals up at night, while data breaches are at an all-time high.

Based on a comprehensive online survey of over 1,900 cyber security professionals in the 350,000-member Information Security Community on LinkedIn, the report has been produced in conjunction with leading cloud security vendors AlienVault, Bitglass, CloudPassage, Cloudvisory, Dome9 Security, Eastwind Networks, Evident.io, (ISC)2, Quest, Skyhigh, and Tenable.

“While workloads continue to move rapidly into the cloud, security concerns remain very high,” said Holger Schulze, founder of the 350,000-member Information Security Community on LinkedIn. “With a third of organizations predicting cloud security budgets to increase, today’s cloud environments require more than ever security-trained, certified professionals and innovative security tools to address the concerns of unauthorized access, data and privacy loss, and compliance in the cloud.”

Key takeaways from the report include:

  • Cloud security concerns top the list of barriers to faster cloud adoption. Concerns include protection against data loss (57 percent), threats to data privacy (49 percent), and breaches of confidentiality (47 percent).
  • Lack of qualified security staff is the second biggest barrier to cloud adoption, and more than half of organizations (53 percent) are looking to train and certify their current IT staff to address the shortage, followed by partnering with a managed service provider (MSP) (30 percent), leveraging software solutions (27 percent), and hiring dedicated staff (26 percent).
  • As more workloads move to the cloud, organizations are realizing that traditional security tools are not designed for the unique challenges cloud adoption presents (78 percent). Instead, strong security management and control solutions designed specifically for the cloud are required to protect the new, agile paradigm.
  • Visibility into cloud infrastructure is the single biggest security management headache for 37 percent of respondents, moving up to the top spot from being the second ranking operational concern in the previous year.

Download the complete 2017 Cloud Security Report here.


The Importance of Linux Patch Management

In recent news, a number of serious vulnerabilities have been found in various Linux systems. Whilst OS vulnerabilities are a common occurrence, it’s the nature of these particular bugs that has garnered so much interest. Linux patch management should be considered a priority in ensuring the security of your systems.

The open-source Linux operating system is used by most of the servers on the internet as well as in smartphones, with an ever-growing desktop user base as well.

Open-source software is typically considered to increase the security of an operating system, since anyone can read, re-use and suggest modifications to the source code – the idea being that the more people involved, the better the chances of someone finding and fixing any bugs.

With that in mind, let's turn our sights on the bug known as Dirty Cow (CVE-2016-5195), found in October 2016 – so named because it exploits a kernel mechanism called "copy-on-write". It falls within the class of vulnerabilities known as privilege escalation: a local, unprivileged user can gain write access to memory mappings that should be read-only, effectively allowing an attacker to take control of the system.

What makes this particular vulnerability so concerning, however, isn't the fact that it's a privilege escalation bug, but rather that it was introduced into the kernel around nine years earlier. Working exploits were also found shortly after Phil Oester discovered the bug. This means that a reliable means of exploitation is readily available and, due to the bug's age, applicable to millions of systems.

Whilst Red Hat, Debian and Ubuntu have already released patches, millions of other devices remain vulnerable. Worse still, embedded versions of the operating system and older Android devices are either difficult to update or may never receive updates at all, leaving them exposed.
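As an illustration of the version-checking side of this, here is a minimal sketch in POSIX shell (GNU `sort -V` assumed). It compares the running kernel against 4.8.3, the first patched mainline stable release; note that distribution kernels backport fixes under older version numbers, so a "vulnerable" result here is only a prompt to consult your vendor's advisory, not proof of exposure:

```shell
#!/bin/sh
# Rough Dirty COW exposure check. Distribution kernels backport fixes,
# so treat the output as a hint, not a verdict.

kernel_older_than() {
    # true (exit 0) if version $1 sorts strictly before version $2
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ] \
        && [ "$1" != "$2" ]
}

# Strip the distro suffix, e.g. "4.4.0-97-generic" -> "4.4.0"
running=$(uname -r | cut -d- -f1)

if kernel_older_than "$running" "4.8.3"; then
    echo "Kernel $running predates the mainline fix - check vendor advisories"
else
    echo "Kernel $running includes the mainline Dirty COW fix"
fi
```

On managed distributions, the package manager's security metadata (e.g. `apt changelog linux-image-$(uname -r)`) is the more reliable place to confirm that CVE-2016-5195 has been addressed.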

Next, let’s have a look at a more recent vulnerability which was found in Cryptsetup (CVE-2016-4484), which is used to set up encrypted partitions on Linux using LUKS (Linux Unified Key Setup). It allows an attacker to obtain a root initramfs shell on affected systems. At this point, depending on the system in question, it could be used for a number of exploitation strategies, according to the researchers who discovered the bug, namely:

  • Privilege escalation: if the boot partition is not encrypted:
    — It can be used to store an executable file with the SetUID bit enabled, which a local user can later run to escalate privileges.
    — If the boot is not secured, then it would be possible to replace the kernel and the initrd image.
  • Information disclosure: It is possible to access all the disks. Although the system partition is encrypted, it can be copied to an external device and later brute-forced. Obviously, it is also possible to access non-encrypted information on other devices.
  • Denial of service: The attacker can delete the information on all the disks, causing downtime of the system in question.

Whilst many believe the severity and/or likely impact of this vulnerability has been exaggerated – exploiting it requires physical access or a remote console (which, admittedly, many cloud platforms provide these days) – what makes it so interesting is just how it is exploited.

All you need to do is repeatedly hit the Enter key at the LUKS password prompt until a shell appears (approximately 70 seconds later). The vulnerability results from incorrect handling of password retries once the user exceeds the maximum number of attempts (three by default).
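Until patched cryptsetup initramfs scripts reach your distribution, the researchers' advisory suggested a stop-gap: pass the `panic` kernel parameter so that exhausted password retries halt the boot rather than dropping to a shell. A sketch of the change, assuming GRUB on a Debian-style system (verify the exact parameter and procedure against your distribution's advisory before applying it):

```
# /etc/default/grub - add "panic=5" to the kernel command line,
# then regenerate the boot configuration with update-grub.
GRUB_CMDLINE_LINUX_DEFAULT="panic=5 quiet splash"
```

With `panic` set, the kernel reboots a few seconds after the retry limit is hit instead of handing the attacker an initramfs prompt.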

The researchers also made several notes regarding physical access, explaining why this and similar vulnerabilities remain a concern. It’s generally accepted that once an attacker has physical access to a computer, it’s pwned. However, they highlighted that with today’s range of devices, there are many levels of what can be referred to as physical access, namely:

  • Access to components within a computer – where an attacker can remove, replace or insert anything, including disks, RAM, etc., as with your own computer.
  • Access to all interfaces – where an attacker can plug in any device, including USB, Ethernet, FireWire, etc., such as computers used in public facilities like libraries and internet cafes.
  • Access to front interfaces – usually USB and the keyboard, such as systems used to print photos.
  • Access to a limited keyboard or other interface – like a smart doorbell, alarm, fridge, ATM etc.

Their point is that the risks are not limited to traditional computer systems, and that the growing adoption of IoT devices will increase the potential reach of similar attacks – look no further than our last article on DDoS attacks, since IoT devices like printers, IP cameras and routers have been used in some of the largest DDoS attacks ever recorded.

This brings us back around to the fact that now, more than ever, it’s critically important to keep an eye on your systems and ensure any vulnerabilities are patched accordingly and, more importantly, in a timely manner. Linux patch management should be a core consideration for all IT systems, whether servers or workstations, and indeed regardless of the operating system used.
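As one concrete example of keeping patches timely, Debian-style systems can apply security updates automatically via the `unattended-upgrades` package (RHEL-family systems offer the analogous `yum-cron`/`dnf-automatic`). A sketch of the relevant APT configuration, with file path and values assuming a standard Debian/Ubuntu install:

```
// /etc/apt/apt.conf.d/20auto-upgrades
// Refresh package lists and run unattended security upgrades daily.
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

Automation of this kind does not replace monitoring vendor advisories for out-of-band fixes like the ones above, but it closes the window on the routine majority of patches.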

This article was provided by our service partner ESET