Application Whitelisting Using Software Restriction Policies

Software Restriction Policies (SRP) allow administrators to control which applications are permitted to run on Microsoft Windows. SRP is a Windows feature that can be configured as a local computer policy or as a domain policy through Group Policy in Windows Server 2003 domains and above. Using SRP as a whitelisting technique improves the security posture of the domain by preventing malicious programs from running, since administrators control which software and applications are allowed to run on client PCs.

Blacklisting is a reactive technique that does not scale to the increasing number and variety of malware. Many attacks cannot be blocked by blacklisting because they exploit previously undiscovered vulnerabilities, known as zero-day vulnerabilities.

Application whitelisting, on the other hand, is a proactive technique in which only a limited set of approved programs is allowed to run and everything else is blocked by default. This makes it harder for attackers to get into the network, since they must either exploit one of the allowed programs on the user's computer or bypass the whitelisting mechanism to mount a successful attack. This approach should not be seen as a replacement for standard security software such as antivirus or firewalls; it is best used in conjunction with them.

Since Microsoft Windows operating systems have SRP functionality built in, administrators can readily configure an application whitelisting solution that allows only specific executable files to run. SRP can also restrict which application libraries are permitted to be used by executables.

The NSA Information Assurance Directorate's (IAD) Systems and Network Analysis Center (SNAC) recommends specific SRP settings. Test any configuration changes on a test network or on a small set of test computers to verify that the settings are correct before rolling the change out to the whole domain.

There are known issues on certain Windows versions to consider. For example, a minor usability issue: double-clicking a document may not open the associated viewer application. Another: software update methods that allow users to manually apply patches may not function correctly once SRP is enforced. Microsoft may address issues like these in a hotfix. Automatic updates are not affected by SRP whitelisting and will continue to function correctly. Because of such issues, SRP settings should be tested thoroughly to avoid causing widespread problems in your production environment.

Path-based SRP rules are recommended because extensive testing has shown they have negligible performance impact on hosts. Other rule types may provide greater security benefits, but they carry a higher performance cost. File hash rules, for example, are more difficult to manage and must be updated every time a file is installed or changed, while certificate rules are of limited use because not all applications' files are digitally signed by their software publishers.
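As a loose illustration of the default-deny logic behind path-based rules, the Python sketch below permits only executables that live under an approved directory. The directories and file paths here are purely illustrative, not recommended values:

```python
from pathlib import PureWindowsPath

# Illustrative whitelist of directories whose executables may run.
ALLOWED_DIRS = [
    r"C:\Windows",
    r"C:\Program Files",
]

def is_allowed(executable_path: str) -> bool:
    """Default-deny: permit an executable only if it sits under an allowed directory."""
    exe = PureWindowsPath(executable_path)
    # PureWindowsPath comparisons are case-insensitive, matching Windows semantics.
    return any(PureWindowsPath(d) in exe.parents for d in ALLOWED_DIRS)

print(is_allowed(r"C:\Windows\System32\notepad.exe"))       # True
print(is_allowed(r"C:\Users\alice\Downloads\dropper.exe"))  # False
```

Real SRP path rules also handle environment variables and wildcards; the point here is only the default-deny shape: anything that does not match an allowed path is blocked.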

Implementing SRP in an Active Directory domain involves the following steps:

1. Review the domain to find out which applications are operating on domain computers.

2. Configure SRP to work in white-listing approach.

3. Choose which applications must be permitted to run and make extra SRP rules as required.

4. Test the SRP rules and form additional rules as needed.

5. Deploy SRP to sequentially larger Organizational Units until it covers the entire network.

6. Monitor SRP continuously and adjust the rules as needed.

Configuring SRP as described above can drastically improve the security posture of a domain while still letting users run the applications they need to remain productive.

Security Awareness: A Tale of Two Challenges

The SANS Institute recently released the findings of its 'Securing The Human 2016' security awareness survey, which uncovered two key findings: first, security awareness teams are not getting the support they need; and second, experts in the field of security awareness often lack the soft skills to communicate their knowledge effectively.

This is the second annual security awareness report, and its main goal is to help security awareness officers make informed decisions about improving their security programs and to let them compare their organization's program against others in their industry.

The SANS Institute provides information security training all over the world. With over 25 years of experience, it is considered the most trusted and principal source of information security training. SANS Securing The Human is a division of the institute that gives organizations a complete and comprehensive security awareness solution to help them effectively manage their human cyber security risk.

Report Summary

This year's report tells a story through the data, whereas last year the results were presented in the order the survey was taken. As the authors worked through the data, a tale of two challenges began to emerge.

The survey asked security awareness officers about the biggest challenges they encountered, and the response was tremendous: over 100 different topics. Ingolf Becker of University College London grouped the responses into twelve categories, of which the top seven were: resources, adoption, support from management, end-user support, finding time to take part, content, and lack of staff awareness. The report focuses on these seven, which fall into two general groups: lacking the resources, time or support to execute, and failing to have an impact. Respondents are either limited in their ability to execute (46%) and/or fail to deliver the needed impact (47%). This begins the tale of two challenges, and the report is focused on understanding these challenges and identifying possible solutions.

Categorization of Biggest Challenge Awareness Programs Face

 

Similar to last year's report, the data showed that many awareness staff have insufficient resources, time and support to get the work done.

Resources, as defined by Becker, refers to the shortage of money or technical resources. Budget-wise, more than 50% of respondents stated that they either have a budget of $5,000 or less or do not know whether they have a budget at all, while only 25% reported a budget of $25,000 or more.

Estimated Budget for 2016

Fewer than 15% of respondents work full-time on awareness; while this is an improvement over last year's 10%, it is still considerably low. Even with that improvement, 65% of respondents say they spend only 25% or less of their time on awareness.

Even where people are getting support for security awareness, there are few or no metrics that demonstrate the human problem, its impact, or awareness itself. Most metrics focus on phishing, which is a common top human risk; that is a good start, but it is only one of many organizational human risks to deal with.

Communication was identified as the number one blocker in awareness programs. This is most evident in larger organizations with 1,000 employees or more. Even highly technical people reporting to highly technical departments named communication as their biggest blocker, despite the fact that their main job is to communicate with the organization.

Recommendations

The report recommends addressing communication, one of the most critical soft skills, through training; placing someone from the communications department on the awareness team; or hiring someone with the needed soft skills. As for engagement, people need to know why they should care about security awareness, so target them at an emotional level rather than just giving them statistics and numbers.

Patch Management

Patch Management – Best Practices

Why Does Patch Management Matter?

Simply put, patching is important because of IT governance. As a corporate IT department, you’re held responsible when viruses affect users or applications stop working. It becomes your problem to solve. Securing your organization’s end points against intrusion is your first line of defense. With an increasing number of users working while mobile, simply securing your network through firewalls doesn’t account for company data that’s been taken outside your network perimeter. Proper patching is the best start to securing those devices. Most IT professionals pay attention to security and patching their users’ systems, but how many have a well-honed patch management policy? Patch management is often seen as a trivial task by end users—simply click ‘update’. For administrators, there’s a lot more to it, and a proper policy is certainly not overkill. But what should a patch management policy include apart from deploying patches? Read on to learn how to implement patch management policies, processes and persistence.

1 – Policy

The first step in developing a patch management strategy is to develop a policy that outlines the who, what, how and when of patching your systems. This up-front planning enables you to be proactive instead of reactive. Proactive management anticipates problems in advance and develops policies to deal with them; reactive management adds layer upon layer of hastily thought-up solutions that get cobbled together using bits of string and glue. It’s easy to see which approach will unravel in the event of a crisis. The goal of a patch management policy is to effectively identify and fix vulnerabilities. Once you’re notified of a critical weakness, you should immediately know who will deal with it, how it will be deployed and how quickly it will be fixed. For example, a simple element of a patch management policy might be that critical or important patches should be applied first.

2 – Discovery

Information comes to you about a newly released patch meant to address a product defect or vulnerability. These notifications can originate from a number of places—LabTech, Automatic Updates, Microsoft’s Security Notification Service. It all depends on which tools you use to monitor and keep your systems up-to-date. In this section, we’ll talk about a number of proven tools you can use to manage patching notifications.

3 – Persistence

Policies are useless and processes are futile unless you persist in applying them consistently. Network security requires constant vigilance, not only because new vulnerabilities and patches appear almost daily, but because new processes and tools are constantly being developed to handle the growing problem of keeping systems patched. Effective patch management has become a necessity in today’s information technology environments.

Reasons for this necessity are:

• The ongoing discovery of vulnerabilities in existing operating systems and applications

• The continuing threat of hackers developing applications that exploit those vulnerabilities

• Vendor requirements to patch vulnerabilities via the release of patches.

These points illustrate the need to constantly apply patches to your IT environments. Such a large task is best accomplished following a series of repeatable, automated best practices. Therefore, it’s important to look at patch management as a closed-loop process. It is a series of best practices that have to be repeated regularly on your networks to ensure protection from exposed vulnerabilities.

Patch Management requires:

– Regular rediscovery of systems that may potentially be affected

– Scanning those systems for vulnerabilities

– Downloading patches and patch definition databases

– Deploying patches to systems that need them
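The four activities above form a loop that repeats on a schedule. The Python sketch below models that closed loop with toy data structures; every name here is a placeholder for real discovery, scanning and deployment tooling:

```python
# Toy inventory: which patches each host currently has installed.
INSTALLED = {"host-a": {"KB100"}, "host-b": {"KB100", "KB200"}}
# Patches the environment is required to have.
REQUIRED = {"KB100", "KB200", "KB300"}

def discover_systems():
    """Regular rediscovery of systems that may be affected."""
    return list(INSTALLED)

def scan_missing(host):
    """Scan one system for patches it lacks."""
    return REQUIRED - INSTALLED[host]

def deploy(host, patches):
    """Deploy patches to a system that needs them."""
    INSTALLED[host] |= patches

def patch_cycle():
    """One pass of the closed loop: discover, scan, download/deploy."""
    for host in discover_systems():
        missing = scan_missing(host)
        if missing:
            deploy(host, missing)

patch_cycle()
print(all(not scan_missing(h) for h in INSTALLED))  # True: every host is now current
```

The value of framing it this way is that each pass is idempotent: running the cycle again when nothing has changed deploys nothing, so it can safely run on a schedule.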

4 – Patching Resources

Microsoft updates arrive predictably on Patch Tuesday (the second Tuesday of every month), which means you can plan ahead for testing and deployment. You can get advance notice by subscribing to the security bulletin, which comes out three business days before the release and includes details of the updates. The following is a list of currently available resources you can use when augmenting your patch process, as well as some that can keep you informed of patch-related updates that fall outside the scope of Microsoft updates.
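Because Patch Tuesday is simply the second Tuesday of the month, the date can be computed ahead of time for planning. A small sketch using only the Python standard library:

```python
import calendar
from datetime import date

def patch_tuesday(year: int, month: int) -> date:
    """Return the second Tuesday of the given month (Microsoft's Patch Tuesday)."""
    weeks = calendar.monthcalendar(year, month)  # weeks as Mon..Sun lists, 0 = padding
    tuesdays = [w[calendar.TUESDAY] for w in weeks if w[calendar.TUESDAY] != 0]
    return date(year, month, tuesdays[1])

print(patch_tuesday(2016, 5))  # 2016-05-10
```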

Microsoft Security TechCenter – http://technet.microsoft.com/en-us/security/bb291012.aspx

SearchSecurity Patch News – http://searchsecurity.techtarget.com/resources/Security-Patch-Management

Oracle Critical Patch Updates and Security Alerts – http://www.oracle.com/technetwork/topics/security/alerts-086861.html

PatchManagement.org (Patch Mailing List) – http://www.patchmanagement.org/

Patch My PC (third-party, free patching) – http://www.patchmypc.net/

5 – Patching Tools

Client Management Platform

Approving and deploying patches on individual machines is simply not scalable. As your organization grows, it is important to utilize a tool that can automate your patch management process, so your technicians aren’t bogged down with the mundane task of individually patching each machine. A client management platform with built-in patch management capabilities can help. When searching for the right tool, remember to look for one that enables you to:

- Identify, approve, update or ignore patches and hotfixes for one or multiple devices at a group level

- Define patch install windows for an individual device or a group of devices

- Schedule patch installation times and patch reboot times

- Create tickets for all successful patch install jobs

- Provide detailed reports of patch install jobs to your management team

 

Third-Party Patching Tools

It is important to ensure timely installation of patches, so security holes remain closed not only in the Windows operating system, but also in software products that are used on desktops and servers. A third-party patching tool such as App-Care or Ninite can be used for obtaining, testing and deploying updates to third-party applications. Be sure to look for a third-party patching tool that integrates seamlessly with your client management platform for increased automation and efficiency.

 

Summary

Patch management is a critical process in protecting your systems from known vulnerabilities and exploits that could result in your organization’s systems being compromised. Viruses and malware are just two examples of aggressors that take advantage of these weaknesses and can be especially destructive and difficult to correct. Patches correct bugs, flaws and provide enhancements, which can prevent potential user impact, improve user experience and save your technicians time researching and repairing issues that could have already been resolved or prevented with an existing update. Users generally understand that their systems need to be patched, but they often do not have the expertise to comfortably approve and install patches without help. Developing best practices to manage the risks associated with the approval and deployment of patches is critical to your IT department’s service offering.

 


This article was provided by our partner Labtech

 

A note on Group Policy and gpupdate

When I first started learning about Active Directory, Group Policy always seemed very fickle. Sometimes I could run GPUpdate, other times I had to append the /force option.


As it turned out, Group Policy was always working –  I just didn’t understand it. So what’s the difference between GPUpdate and GPUpdate /force? Well –

GPUpdate: Applies any policies that are new or modified.

GPUpdate /force: Reapplies every policy, new and old.

So which one should I use? 99% of the time, you should only run gpupdate. If you just edited a GPO and want to see results immediately, running gpupdate will do the trick. In fact, running GPUpdate /force on a large number of computers could adversely affect network resources. This is because these machines will hit a domain controller and reevaluate every GPO applicable to them.
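One way to picture the difference is with a version check: a plain refresh reapplies only GPOs whose version number changed since the last refresh, while /force reapplies everything. The Python sketch below is a toy model of that behavior; the GPO names and version numbers are made up:

```python
# Versions the client applied at its last refresh vs. versions now on the DC.
applied_versions = {"Firewall": 3, "SRP": 7}
current_versions = {"Firewall": 3, "SRP": 8}

def gpupdate(force=False):
    """Return the list of GPOs reapplied by this refresh."""
    refreshed = []
    for gpo, version in current_versions.items():
        if force or applied_versions.get(gpo) != version:
            applied_versions[gpo] = version  # record the newly applied version
            refreshed.append(gpo)
    return refreshed

print(gpupdate())            # ['SRP']: only the modified GPO is reapplied
print(gpupdate(force=True))  # ['Firewall', 'SRP']: everything is reapplied
```

The model also shows why /force is expensive at scale: every client reevaluates every GPO instead of only the handful that actually changed.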

Notice the Group Policy Update option for OUs:


 

How Attackers Use a Flash Exploit to Distribute Malware

Adobe Flash is multimedia software that runs on more than 1 billion systems worldwide. Its long list of security vulnerabilities and huge market presence make it a ‘target-rich environment’ for attackers to exploit. According to Recorded Future, from January 1, 2015 to September 30, 2015, Adobe Flash Player comprised eight of the top 10 vulnerabilities leveraged by exploit kits.

Here is an illustration of just how quickly bad actors can deploy an exploit:

  • May 8, 2016: FireEye discovers a new exploit targeting an unknown vulnerability in Flash and reports it to Adobe.
  • May 10, 2016: Adobe announces a new critical vulnerability (CVE-2016-4117) that affects Windows, Macintosh, Linux, and Chrome OS.
  • May 12, 2016: Adobe issues a patch for the new vulnerability (APSB16-15).
  • May 25, 2016: Malwarebytes Labs documents a ‘malvertising’ gang using this exploit to compromise systems by distributing malware through well-known websites while avoiding detection.

The Malwarebytes blog is a good read, as it provides several examples of how sophisticated malware distribution schemes have become. For example, it breaks down the malicious elements of a rogue advertising banner that the Flash exploit allows attackers to use to push out malware. Among other things, it runs a series of checks to see if the targeted system is running packet analyzers and security technology, to ensure that it only directs legitimate vulnerable systems to the Angler Exploit Kit.

Impact on you

With over 1 billion systems running Adobe Flash, it is likely that one or more systems under your control are vulnerable to this exploit. Fortunately, there is a fix to patch the vulnerability. Unfortunately, according to Adobe, it takes 6 weeks for more than 400 million systems to update to a new version of Flash Player. Six weeks (or however long it takes you to patch Flash) is a long time to be at risk of being compromised by ransomware via the Angler EK.

What is Private Cloud Hosting?

A private cloud is a model of cloud computing in which a partitioned and secure cloud-based environment is provided for an individual client. As with any other cloud model, private clouds provide computing power as a service within a virtualized environment using a pool of physical computing resources. Under the private cloud model, however, this computing power is accessible only to a single organisation, which offers greater control and privacy.

Public and private cloud deployment models differ even in cost; compare, for example, the prices of Google and Amazon cloud offerings against dedicated servers. Public clouds, such as those from Amazon Web Services or Google Compute Engine, share a computing infrastructure across different users, business units or businesses. However, these shared computing environments aren’t suitable for all businesses, such as those with mission-critical workloads, security concerns, uptime requirements or management demands. Instead, these businesses can provision a portion of their existing data center as an on-premises — or private — cloud.

A private cloud provides the same basic benefits as a public cloud. These include self-service and scalability; multi-tenancy; the ability to provision machines; changing computing resources on demand; and creating multiple machines for complex computing jobs. Business units pay only for the resources they use.

In addition, a private cloud from Computers in the City offers hosted services to a limited number of people behind a firewall, which minimizes the security concerns some organizations have around the cloud. A private cloud also gives companies direct control over their data.

 

Service Desk vs Help Desk Services – What’s the Difference?

Service Desk vs. Help Desk. Hmmm. But…aren’t they the same thing?

If that’s your reaction, you’re not alone. It’s generally agreed there’s some gray area involved. So why make a big deal about it?

While strikingly similar at first glance, a closer and more practical look reveals differences that go beyond tomato-tomahto, potato-potahto wordplay. Because each represents a distinct strategic approach, determining whether you need one or both—and understanding why—can impact how your IT organization operates and satisfies customers. Our goal here is to help uncomplicate the topic with some break-it-down basics.

By Definition

In differentiating between the two, ITIL looks at IT processes from beginning to end, mapping how they should be integrated into the overall business strategy. The service desk is a key component of managing the overall process from a strategic, ‘big picture’, cross-organizational perspective. It reviews overall IT processes and functionality. The help desk feeds into the service desk with a tactical, day-to-day role in responding to end-user needs. An overview of specific functions helps clarify.

Service Desk Focus – Client Strategy

As the first point of contact in an organization for all IT questions, best-practice service desks are focused on process and company strategy. Their functions can be outlined in the five ITIL Core Service Lifecycle stages:

  • Service Strategy: Evaluate current services, modifying and implementing new as required
  • Service Design: Evaluate new services for introduction into business environment and ability to meet existing/future needs
  • Service Transition: Ensure minimal business interruption during transitions
  • Service Operation: Ongoing monitoring of service delivery
  • Continual Service Improvement: Analyze opportunities to improve IT processes/functions

Help Desk Services Ultimate Goal – First Contact Resolution

The help desk is a component of the service desk, most concerned with end-user functionality and providing incident management to ensure customers’ issues are resolved quickly. Tasks include:

  • Computer or software consultations
  • Change and configuration management
  • Problem escalation procedures
  • Problem resolution
  • Single point of contact (SPOC) for IT interruptions
  • Service level agreements
  • Tracking capabilities of all incoming problems

Do You Need Both?

A help desk is an absolute essential for providing actionable, technically skilled resources for problem resolution. Since a service desk generally takes a more proactive stance, addressing issues of a less urgent technical nature, some companies may not yet have need for its broader offerings.

The Bottom Line

Regardless of strategic and tactical differences, the bottom line is help desks and service desks share a common ‘reason for being.’ Their purpose is to meet the ever-heightening expectations of technology users—both internal and external to your organization—for the best possible service experience. If that goal is being successfully accomplished, you can most likely relax about sweating the semantics.


This article was provided by our partner Labtech

Mobile Device Management

Mobile Device Management: A Growing Trend

From smartphones to tablets, mobile devices in the workplace are here to stay. Employers are happy to let employees access company email and other corporate data from mobile devices, but they often underestimate the security risk to their IT network. Whether your clients embrace a Bring Your Own Device (BYOD) model or provide devices to their employees, it’s critical to protect their IT infrastructure against security breaches and safeguard the confidential information that can be accessed if a mobile device is lost or stolen. Keep your clients safe from mobile security threats with mobile device management (MDM).

IBM MaaS360

IBM MaaS360 is the trusted mobility management solution for thousands of customers worldwide—from small businesses to Fortune 500 companies. It makes working in a mobile world simple and safe by delivering comprehensive mobile security and management of emails, apps, content, Web access and mobile devices. This award-winning platform streamlines the way IT professionals manage and secure the proliferation of mobile devices in the workplace throughout their entire lifecycle.

IBM MaaS360 Integration

LabTech Software and IBM MaaS360 are jointly scoping the development of an integration, enabling partners to use a single pane of glass to manage users, desktops, servers, virtual systems, network devices and mobile devices.


5 Steps to a Stronger Backup Disaster Recovery Plan

Between catastrophic natural events and human error, data loss is a very real threat that no company is immune to. Businesses that experience data disaster, whether it’s due to a mistake or inclement weather, seldom recover from the event that caused the loss.

The saddest thing about the situation is that it’s possible to sidestep disaster completely, specifically when it comes to data loss. You just have to take the time to build out a solid backup disaster recovery (BDR) plan.

Things to consider when developing your BDR plan include: structural frameworks, conducting risk assessments and impact analysis, and creating policies that combine data retention requirements with regulatory and compliance needs.

If you already have a BDR plan in place (as you should), use this checklist to make sure you’ve looked at all the possible angles of a data disaster and are prepared to bounce back without missing a beat. Otherwise, these steps chart out the perfect place to start building a data recovery strategy.

 

1. Customize the Plan

Unfortunately, there’s no universal data recovery plan. As needs will vary per department, it’ll be up to you, and the decision makers on your team, to identify potential weaknesses in your current strategy, and decide on the best game plan for covering all of your bases moving forward.

2. Assign Ownership

Especially in the case of a real emergency, it’s important that everyone on your team knows and understands their role within your BDR plan. Discuss the plan with your team, and keep communication open. Don’t wait until the sky turns gray to have this conversation.

3. Conduct Fire Drills

The difference between proactive and reactive plans comes down to consistent checkups. Schedule regular reviews of endpoints, alert configurations and backup jobs. Test your plan’s effectiveness with simulated emergencies. Find out what works and what needs improvement, and act accordingly.

4. Centralize Documentation

You’ll appreciate having your offsite storage instructions, vendor contracts, training plans, and other important information in a centralized location. Don’t forget to keep track of frequency and maintenance of endpoint BDR! Which brings us to point 5.

5. Justify ROI

Explore your options. There are many BDR solutions available on the market. Once you’ve identified your business’ unique needs, and assembled a plan of action, do your research to find out what these solutions could do to add even more peace of mind to this effort.

Or, if you’re an employee hoping to get the green light from management to implement BDR at your company, providing documentation with metrics that justify ROI will dramatically increase your likelihood of getting decision-makers on board.

Outside of these 5 components, you should also think about your geographical location and common natural occurrences that happen there. Does it make more sense for you to store your data offsite, or would moving to the cloud yield bigger benefits?

One thing is certain: disaster could strike at any time. Come ready with a plan of action, and powerful tools that will help you avoid missing a beat when your business experiences data loss. At LabTech® by ConnectWise®, we believe in choice, and offer several different BDR solutions that natively integrate to help you mitigate threats and avoid costly mistakes.

This article was provided by our partner Labtech

 

Understanding what NetFlow can do for your network

Traffic on the network can provide valuable insight into many areas of business and technology that would generally go unnoticed unless reported on or analyzed. NetFlow is one very simple technology that can be used to see what is really happening on your network.

NetFlow can be used to analyze many things such as:

  • Email trend and spam analysis
  • Employee Internet usage
  • Suspicious network activity
  • Legal claims
  • Virus, worm, and spyware detection

…but that is not all. Essentially, anything you can build a query for using the parameters NetFlow tracks can be analyzed.

At NetCal, we primarily use this for things like tracking who is over-using an Internet connection, finding out where someone was going on the Internet at a particular time, or investigating what we think might be suspicious network activity.
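That kind of over-use tracking boils down to aggregating flow records. The Python sketch below sums bytes per source address over a few made-up flow tuples; a real deployment would read these records from a NetFlow collector instead:

```python
from collections import Counter

# Made-up flow records: (source IP, destination IP, destination port, bytes).
flows = [
    ("10.0.0.5", "93.184.216.34", 443, 1_200_000),
    ("10.0.0.5", "93.184.216.34", 443, 800_000),
    ("10.0.0.9", "198.51.100.7", 25, 5_000),
]

def top_talkers(records, n=5):
    """Sum bytes per source address and return the heaviest users first."""
    usage = Counter()
    for src, _dst, _port, nbytes in records:
        usage[src] += nbytes
    return usage.most_common(n)

print(top_talkers(flows))  # [('10.0.0.5', 2000000), ('10.0.0.9', 5000)]
```

Swapping the key from source address to destination port would answer the spam question instead (who is sending lots of traffic on port 25), which is why a single flow record format supports so many different analyses.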