Scam apps

How to protect yourself as the threat of scam apps grows

As the threat of bogus apps continues, what can we do to protect ourselves against these fraudulent practices?

There’s nothing new about advertisers and app developers using deceptive practices, but the Touch ID scam that Lukáš Štefanko wrote about recently is a significant twist in this ongoing story. Of course, iOS users are not alone in facing these dilemmas; as Lukáš wrote earlier this year, Android users are experiencing their own flood of predatory app tactics too.

What can we do to protect ourselves against these fraudulent practices?

Be aware of the limitations of app store review processes

The policies and review procedures of major app stores do keep out a large number of fraudulent apps. There is always more they could, and probably should, be doing to improve, but vetting apps at this scale is an ongoing learning process for everyone involved.

Because each major app store receives an enormous number of new apps and updates every day, much of the review of new submissions is automated. That means some of an app’s functionality may never be seen or specifically tested by a human reviewer. Even very well-known and more-or-less legitimate app vendors have been caught trying to evade review of certain functionality. This means it’s still crucial to do our own due diligence.

Read reviews

While most scam apps do in fact include numerous positive reviews, these often show signs of phoniness. Wording may be very vague, downright nonsensical, or exhibit repetitive patterns (including different reviews repeating the same phrases or having similar usernames, for example). It’s a good idea to re-order the ranking options on reviews to see a more balanced picture: depending on the particular app store, you can sort the reviews to see those that have been deemed “most helpful” or that are ranked “most critical” first.

Be patient

The best time to figure out whether an app is a scam is before you download it. While it may be hard to calm the fear of missing out, it’s best to wait a few days or weeks before downloading brand new apps, to let other people be the “guinea pigs”. This way you can read what other people have to say about the app’s functionality before making a decision.

Use apps by developers you know and trust

If at all possible, it’s a good idea to stick with reputable app developers. If you’re new to a platform, that may be easier said than done. In that case, it’s a good idea to do a little more research first, to get a better sense of whether a particular developer already has other well-reviewed and popular apps that are currently available for download.

Be aware of valid functionality

While it can be hard to keep up with the complete picture of what each new device can do, it’s a good idea to be at least somewhat aware of the functionality of your device. For example: fingerprint data are not accessible to apps; apps receive only a “yes” or “no” verdict about whether your fingerprint matches the one previously stored on your device. This is to say that apps cannot use a scan of your finger to give advice on calorie data, nutrition information, how much water you should drink, or to present ancestry analysis. (It’s worth noting that you couldn’t really get valid information on any of those things from a scan of your finger even if an app could access those data.)

If your phone has existing functionality like a QR reader or a flashlight app, it might not be a good idea to install an app that does that exact same thing, especially as many of these apps have a history of being problematic. If you’re looking to specifically try a different app than one your phone already has – like a mail reader or an internet browser – be sure to read some third party reviews first, to see which options are well-reviewed and popular.

Dig deeper

There are a variety of things you can look at to find information that might indicate a predatory app. Do the developers have other apps available already, and are they reviewed well? Do they have a website that appears professional, including contact information? What results are returned if you do an internet search for the name of the app or developer plus the word “scam”? Can you find more information on third-party sources regarding subscription rates or in-app purchase prices? (Apple may offer information about the latter within the app description.) Does the app purport to give you a free or discounted version of a more expensive for-fee app? (These scams often cost more than just money!)

Request a refund and report bad actors

If you’ve gotten as far as having already downloaded an app that turned out to be a scam, ask the app store or the bank attached to your payment card to refund the charge. If the purchase was in the form of a subscription, this may be more complicated, but it will likely be worth your time and effort to see the process through. You can also report fraudulent apps to the app stores themselves, as well as contribute reviews that describe your experience.

It’s time to push back against “dark patterns”

Many of us already vote with our wallets when it comes to sub-optimal software behavior, by choosing not to purchase from or support companies that fail to consider privacy or security, or that behave in ways that we consider too predatory or problematic. But there is another, more understated category of sketchy behavior that more people should be aware of.

“Dark patterns” describes user interfaces designed to intentionally trick or emotionally manipulate you into clicking where you otherwise might not. In the case of the Fitness Balance app, the scam takes advantage of the fact that the Home button on some iPhones and iPads serves two purposes: your finger is already resting on a fingerprint sensor in a way that can also be used to select an option on the screen. Newer versions of the iPhone require two distinct actions for these things; you must take your finger off the sensor for a moment after a fingerprint scan before it can be used to select an option.

Some dark patterns are much less obvious, because they take advantage of expectations that we may not be consciously aware that we have, or because they cause us to be more inattentive.

Here are a few examples of user interface expectations and behaviors that predatory app makers may try to manipulate:

  • we expect an “Accept” option to be the bigger or more obvious one
  • we may rush decisions if we’re overwhelmed or frustrated
  • we may be less cautious of what’s on our screen if we’re trying to brush away detritus
  • in many cultures, we expect red to mean “stop” and green to mean “go”
  • we expect a “close” button to appear in certain predictable locations
  • buttons may be labeled in ways that make their meaning unclear

In cases where emotional manipulation is in play, there may be a confirmation dialog that tries to guilt-trip or scare you into changing a selection. This is where things can get a little nebulous: when is it a legitimate warning, rather than unnecessary fearmongering? This can be something of a value judgment, which is subject to our own interpretation. Whatever you decide, you can let software vendors know that you value a clear and predictable user experience that does not rely on fear, uncertainty and doubt.


This article was provided by our service partner Eset

Webroot

What’s Next? Webroot’s 2019 Cybersecurity Predictions

At Webroot, we stay ahead of cybersecurity trends in order to keep our customers up-to-date and secure. As the end of the year approaches, our team of experts has gathered their top cybersecurity predictions for 2019. What threats and changes should you brace for?

General Data Protection Regulation Penalties

“A large US-based tech company will get hammered by the new GDPR fines.” – Megan Shields, Webroot Associate General Counsel

When the General Data Protection Regulation (GDPR) became law in the EU last May, many businesses scrambled to implement the required privacy protections. In anticipation of this challenge for businesses, it seemed as though the Data Protection Authorities (the governing organizations overseeing GDPR compliance) were giving them time to adjust to the new regulations. However, it appears that time has passed. European Data Protection Supervisor Giovanni Buttarelli spoke with Reuters in October and said the time for issuing penalties is near. With GDPR privacy protection responsibilities now incumbent upon large tech companies with millions—if not billions—of users, as well as small to medium-sized businesses, noncompliance could mean huge penalties.

GDPR fines will depend on the specifics of each infringement, but companies could face damages of up to 4% of their worldwide annual turnover, or up to 20 million Euros, whichever is greater. For example, if the GDPR had been in place during the 2013 Yahoo breach affecting 3 billion users, Yahoo could have faced anywhere from $80 million to $160 million in fines. It’s also important to note that Buttarelli specifically mentions the potential for bans on processing personal data, at Data Protection Authorities’ discretion, which would effectively suspend a company’s data flows inside the EU.
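To make the penalty math concrete, here is a minimal sketch of the “4% of worldwide turnover or 20 million Euros, whichever is greater” ceiling; the turnover figures below are purely illustrative, not taken from any real filing.

```python
# Sketch of the GDPR fine ceiling: the greater of 4% of worldwide
# annual turnover or 20 million euros. Turnover figures are made up.

def max_gdpr_fine(annual_turnover_eur: float) -> float:
    """Return the statutory ceiling for a GDPR fine, in euros."""
    return max(0.04 * annual_turnover_eur, 20_000_000)

print(max_gdpr_fine(4_000_000_000))  # 160000000.0 -- the 4% rule dominates
print(max_gdpr_fine(10_000_000))     # 20000000 -- the 20M floor applies
```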

AI Disruption

“Further adoption of AI leading to automation of professions involving low social intelligence and creativity. It will also give birth to more advanced social engineering attacks.” – Paul Barnes, Webroot Sr. Director of Product Strategy

The Fourth Industrial Revolution is here and the markets are beginning to feel it. Machine learning algorithms and applied artificial intelligence programs are already infiltrating and disrupting top industries. Several of the largest financial institutions in the world have integrated artificial intelligence into aspects of their businesses. Often these programs use natural language processing—giving them the ability to handle customer-facing roles more easily—to boost productivity.

From a risk perspective, new voice manipulation techniques and face mapping technologies, in conjunction with other AI disciplines, will usher in a new dawn of social engineering that could be used in advanced spear-phishing attacks to influence political campaigns or even policy makers directly.

AI Will Be Crucial to the Survival of Small Businesses

“AI and machine learning will continue to be the best way to respond to velocity and volume of malware attacks aimed at SMBs and MSP partners.” – George Anderson, Product Marketing Director

Our threat researchers don’t anticipate a decline in threat volume for small businesses in the coming year. Precise attacks, like those targeting RDP tools, have been on the rise and show no signs of tapering. Beyond that, the sheer volume of data handled by small businesses of all types raises the probability and likely severity of a breach.

If small and medium-sized businesses want to keep their IT teams from being inundated and overrun with alerts, false positives, and remediation requests, they’ll be forced to work AI and machine learning into their security solutions. Only machine learning can automate security intelligence accurately and effectively enough to enable categorization and proactive threat detection in near real time. By taking advantage of cloud computing platforms like Amazon Web Services, machine learning has the capability to scale with the increasing volume and complexity of modern attacks, while remaining within reach in terms of price.

Ransomware is Out, Cryptojacking is In

“We’ll see a continued decline in commodity ransomware prevalence. While ransomware won’t disappear, endpoint solutions are better geared to defend against suspicious ransom-esque actions and, as such, malware authors will turn to either more targeted attacks or more subtle cryptocurrency mining alternatives.” – Eric Klonowski, Webroot Principal Threat Research Analyst

Although we’re unlikely to see the true death of ransomware, it does seem to be in decline. This is due in large part to the success of cryptocurrency and the overwhelming demand for the large amounts of computing power required for cryptomining. Hackers have seized upon this as a less risky alternative to ransomware, leading to the emergence of cryptojacking.

Cryptojacking is the now too-common practice of injecting software into an unsuspecting system and using its latent processing power to mine for cryptocurrencies. This resource theft drags systems down, but is often stealthy enough to go undetected. We are beginning to feel the pinch of cryptojacking in critical systems, with a cryptomining operation recently being discovered on the network of a water utility system in Europe. This trend is on track to continue into the New Year, with detected attacks increasing by 141% in the first half of 2018 alone.

Targeted Attacks

“Attacks will become more targeted. In 2018, ransomware took a back seat to cryptominers and banking Trojans to an extent, and we will continue to see more targeted and calculated extortion of victims, as seen with the Dridex group. The balance between cryptominers and ransomware is dependent upon the price of cryptocurrency (most notably Bitcoin), but the money-making model of cryptominers favors its continued use.” – Jason Davison, Webroot Advanced Threat Research Analyst

The prominence of cryptojacking in cybercrime circles means that, when ransomware appears in the headlines, it will be for calculated, highly-targeted attacks. Cybercriminals are now researching systems ahead of time, often through backdoor access, enabling them to encrypt their ransomware to evade the specific antivirus applications put in place to detect it.

Government bodies and healthcare systems are prime candidates for targeted attacks, since they handle sensitive data from large swaths of the population. These attacks often have costs far beyond the ransom itself. The City of Atlanta is currently dealing with $17 million in post-breach costs. (Their perpetrators asked for $51,000 in Bitcoin, which the city refused to pay.)

The private sector won’t be spared from targeting, either. A recent Dharma Bip ransomware attack on a brewery involved attackers posting the brewery’s job listing on an international hiring website and submitting a resume attachment with a powerful ransomware payload.

Zero Day Vulnerabilities

“Because the cost of exploitation has risen so dramatically over the course of the last decade, we’ll continue to see a drop in the use of zero days in the wild (as well as associated private exploit leaks). Without a doubt, state actors will continue to hoard these for use on the highest-value targets, but expect to see a stop in Shadow Brokers-esque occurrences. Leaks probably served as a powerful wake-up call internally with regards to access to these utilities (or perhaps where they’re left behind).” – Eric Klonowski, Webroot Principal Threat Research Analyst

Though the cost of effective, zero-day exploits is rising and demand for these exploits has never been higher, we predict a decrease in high-profile breaches. Invariably, as large software systems become more adept at preventing exploitation, the amount of expertise required to identify valuable software vulnerabilities increases with it. Between organizations like the Zero Day Initiative working to keep these flaws out of the hands of hackers and governmental bodies and intelligence agencies stockpiling security flaws for cyber warfare purposes, we are likely to see fewer zero day exploits in the coming year.

However, with the average time between the initial private discovery and the public disclosure of a zero day vulnerability being about 6.9 years, we may just need to wait before we hear about it.

The take-home? Pay attention, stay focused, and keep an eye on this space for up-to-the-minute information about cybersecurity issues as they arise.


This article was provided by our service partner : Webroot

Cybersecurity Awareness

Reducing Risk with Ongoing Cybersecurity Awareness Training

Threat researchers and other cybersecurity industry analysts spend much of their time trying to anticipate the next major malware strain or exploit with the potential to cause millions of dollars in damage, disrupt global commerce, or put individuals at physical risk by targeting critical infrastructure.

However, a new Webroot survey of principals at 500 small to medium-sized businesses (SMBs) suggests that phishing attacks and other forms of social engineering actually represent the most real and immediate threat to the health of their business.

Twenty-four percent of SMBs consider phishing scams their most significant threat, the highest for any single method of attack, and ahead of ransomware at 19 percent.

Statistics released by the FBI this past summer in its 2017 Internet Crime Report reinforce the scope of the problem. Costing nearly $30 million in total losses last year, phishing and other social engineering attacks were the third leading crime by volume of complaints, behind only personal data breaches and non-payment/non-delivery of services. Verizon’s 2018 Data Breach Investigations Report, a thorough and well-researched annual study we cite often, blames 93 percent of successful breaches on phishing and pretexting, another social engineering tactic.

Cybersecurity Awareness Training as the Way Forward

So how are businesses responding? In short, not well.

24 percent of principals see phishing scams as the number one threat facing their business. Only 35 percent are doing something about it with cybersecurity awareness training.

One of the more insidious aspects of phishing as a method of attack is that even some otherwise strong email security gateways, network firewalls and endpoint security solutions are often unable to stop it. The tallest walls in the world won’t protect you when your users give away the keys to the castle. And that’s exactly what happens in a successful phishing scam.

Despite this, our survey found that 65 percent of SMBs reported having no employee training on cybersecurity best practices. So far in 2018, World Cup phishing scams, compromised MailChimp accounts, and opportunist GDPR hoaxers have all experienced some success, among many others.

So, can training change user behavior to stop handing over the keys to the castle? Yes! Cybersecurity awareness training, when it includes features like realistic phishing simulations and engaging, topical content, can elevate the security IQ of users, reducing user error and improving the organization’s security posture along the way.

The research and advisory firm Gartner maintains that applied examples of cybersecurity awareness training easily justify its costs. According to their data, untrained users click on 90 percent of the links within emails received from outside email addresses, causing 10,000 malware infections within a single year. By their calculations, these infections led to an overall productivity loss of 15,000 hours per year. Assuming an average wage of $85/hr, lost productivity costs reach $1,275,000, which does not account for other potential costs such as reputational damage, remediation costs, or fines associated with breaches.
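The Gartner arithmetic here is simple enough to check directly:

```python
# Reproducing the productivity-loss estimate cited above.
lost_hours_per_year = 15_000   # hours lost to infections and remediation
hourly_wage = 85               # assumed average wage, $/hr

lost_productivity = lost_hours_per_year * hourly_wage
print(f"${lost_productivity:,}")  # $1,275,000
```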

One premium managed IT firm conducted its first wave of phishing simulation tests and found its failure rate to be approximately 18 percent. But after two to three rounds of training, the rate dropped to a much healthier 3 percent.

And it’s not just phishing attacks users must be trained to identify. Only 20 percent of the SMBs in our survey enforced strong password management. Ransomware also remains a significant threat, and there are technological aspects to regulatory compliance that users are rarely fully trained on. Even the most basic educational courses on these threats would go a long way toward bolstering a user’s security IQ and the organization’s cybersecurity posture.

Finding after finding suggests that training on cybersecurity best practices produces results. When implemented as part of a layered cybersecurity strategy, cybersecurity awareness training improves SMB security by reducing the risks of end-user hacking and creating a workforce of cyber-savvy end users with the tools they need to defend themselves from threats.

All that remains to be seen is whether a business will act in time to protect against their next phishing attack and prevent a potentially catastrophic breach.

You can access the findings of our SMB Pulse Survey here.


This article was provided by our service partner: Webroot

Patch Management Practices

Patch Management Practices to Keep Your Clients Secure

Develop a Policy of Who, What, When, Why, and How for Patching Systems

The first step in your patch management strategy is to come up with a policy around the entire patching practice. Planning in advance enables you to go from reactive to proactive—anticipating problems and developing policies to handle them.

The right patch management policy will answer the who, what, when, why, and how for when you receive a notification of a critical vulnerability in a client’s software.

Create a Process for Patch Management

Now that you’ve figured out the overall patch management policy, you need to create a process for handling each patch as it’s released.

Your patch management policy should be explicit within your security policy, and you should consider Microsoft’s® six-step process when tailoring your own. The steps include:

Notification: You’re alerted about a new patch to eliminate a vulnerability. How you receive the notification depends on which tools you use to keep systems patched and up to date.

Assessment: Based on the patch rating and configuration of your systems, you need to decide which systems need the patch and how quickly they need to be patched to prevent an exploit.

Obtainment: Like the notification, how you receive the patch will depend on the tools you use. They could either be deployed manually or automatically based on your determined policy.

Testing: Before you deploy a patch, you need to test it on a test bed network that simulates your production network. All networks and configurations are different, and Microsoft can’t test for every combination, so you need to test and make sure all your clients’ networks can properly run the patch.

Deployment: Deployment of a patch should only be done after you’ve thoroughly tested it. Even after testing, be careful and don’t apply the patch to all your systems at once. Incrementally apply patches and test the production server after each one to make sure all applications still function properly.

Validation: This final step is often overlooked. Validating that the patch was applied is necessary so you can report on the status to your client and ensure agreed service levels are met.
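As a rough illustration only (the class and stage names below are hypothetical sketches, not part of any RMM product), the six steps above can be modeled as a simple pipeline that records each state transition:

```python
# Minimal sketch of the six-step patch workflow described above, with
# hypothetical stage names; a real RMM tool would supply actual hooks.

from dataclasses import dataclass, field

@dataclass
class Patch:
    name: str
    severity: str                 # vendor rating, e.g. "critical"
    status: str = "notified"      # workflow begins at Notification
    history: list = field(default_factory=list)

    def advance(self, stage: str):
        self.history.append(self.status)
        self.status = stage

def run_patch_workflow(patch: Patch) -> Patch:
    patch.advance("assessed")   # decide urgency from severity and config
    patch.advance("obtained")   # download manually or automatically
    patch.advance("tested")     # verify on a test bed, never production
    patch.advance("deployed")   # roll out incrementally, not all at once
    patch.advance("validated")  # confirm application; report against SLAs
    return patch

p = run_patch_workflow(Patch("KB5001234", "critical"))
print(p.status)   # validated
print(p.history)  # ['notified', 'assessed', 'obtained', 'tested', 'deployed']
```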

Be Persistent in Applying the Best Practices

For your patch management policies and processes to be effective, you need to be persistent in applying them consistently. With new vulnerabilities and patches appearing almost daily, you need to be vigilant to keep up with all the changes.

Patch management is an ongoing practice. To ensure you’re consistently applying patches, it’s best to follow a series of repeatable, automated practices. These practices include:

  • Regular rediscovery of systems that may potentially be affected
  • Scanning those systems for vulnerabilities
  • Downloading patches and patch definition databases
  • Deploying patches to systems that need them
Take Advantage of Patching Resources

Since the release of Windows 10, updates to the operating system are on a more fluid schedule. Updates and patches are now being released as needed and not on a consistent schedule. You’ll need to let your team know when an applicable update is released to ensure the patch can be tested and deployed as soon as possible.

As the number of vulnerabilities and patches rise, you’ll need to have as much information about them as you can get. There are a few available resources we recommend to augment your patch management process and keep you informed of updates that may fall outside of the scope of Microsoft updates.

Utilize Patching Tools

You don’t want your technicians spending most of their time approving and applying patches on individual machines, especially as your business grows and you take on more clients. To take the burden off your technicians, you’ll want to utilize a tool that can automate your patch management processes. This can be accomplished with a remote monitoring and management (RMM) platform, like ConnectWise Automate®. Add-ons can be purchased to manage third-party application patching to shore up all potential vulnerabilities.

Patch management is a fundamental service provided in most managed service provider (MSP) service plans. With these best practices, you’ll be able to develop a patch management strategy to best serve your clients and their specific needs.


This article was provided by our service partner : connectwise.com

Password Constraint Research

Password Constraints and Their Unintended Security Consequences

You’re probably familiar with some of the most common requirements for creating passwords. A mix of upper and lowercase letters is a simple example. These are known as password constraints. They’re rules for how you must construct a password. If your password must be at least eight characters long and contain lowercase, uppercase, number, and symbol characters, then you have one length constraint and four character-set constraints.

Password constraints eliminate a number of both good and bad passwords. I had never heard anyone ask “how many potential passwords, good and bad, are eliminated?” And so I began searching for the answer. The results were surprising. Calculating the precise number of possible 8-character passwords when all of the character sets must be used requires an inclusion-exclusion equation with a term for every combination of excluded character sets.

A serious limitation of that closed-form approach is that it tells you nothing about the effects of each constraint alone or relative to other constraints.
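For the curious, the precise count can be computed with a short inclusion-exclusion loop over the four character sets (my own sketch, not the original author’s code):

```python
# Count 8-character passwords over 95 printable characters that contain
# at least one character from each of the four sets, by inclusion-exclusion.

from itertools import combinations

SET_SIZES = [26, 26, 10, 33]  # lowercase, uppercase, digits, symbols
TOTAL, LENGTH = 95, 8

count = 0
for k in range(len(SET_SIZES) + 1):
    for excluded in combinations(SET_SIZES, k):
        # Strings avoiding every excluded set, signed by the parity of k.
        count += (-1) ** k * (TOTAL - sum(excluded)) ** LENGTH

print(count)           # valid 8-character passwords using all four sets
print(count / 95**8)   # ~0.456: barely half of all combinations qualify
```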

We chose to use a Monte Carlo simulation to analyze the mathematical impact of the various combinations of constraints. A Monte Carlo simulation uses a statistical sampling approach that provides a close approximation of the answer, while also providing the flexibility to quickly and easily measure the impact of each constraint and combination of constraints.
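A toy version of such a simulation (a sketch of the approach, not Webroot’s actual code) samples random strings and measures what fraction would survive the all-four-sets constraint; space is counted as a symbol so the alphabet matches the 95 characters used throughout this article:

```python
# Monte Carlo estimate of how many random 8-character strings satisfy
# the "all four character sets" constraint.

import random
import string

SYMBOLS = string.punctuation + " "                      # 33 symbols
CHARS = string.ascii_letters + string.digits + SYMBOLS  # 95 characters

def uses_all_four_sets(pw: str) -> bool:
    return (any(c.islower() for c in pw)
            and any(c.isupper() for c in pw)
            and any(c.isdigit() for c in pw)
            and any(c in SYMBOLS for c in pw))

random.seed(0)
trials = 200_000
hits = sum(uses_all_four_sets("".join(random.choices(CHARS, k=8)))
           for _ in range(trials))
print(hits / trials)  # close to the exact inclusion-exclusion answer (~0.46)
```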

A look at minimum length limits

To start, let’s look at the impact of an eight-character length constraint alone. With 26 uppercase letters + 26 lowercase letters + 10 numerals + 33 symbols, there are 95 possible characters, giving 95^8 possible 8-character passwords.

If a password must be at least 8 characters long, then there are also about 70.6 trillion otherwise viable passwords you are not allowed to use (95 + 95^2 + 95^3 + 95^4 + 95^5 + 95^6 + 95^7). That’s a good thing. It means you can’t use 95 one-character passwords, 9,025 two-character passwords, and so on. Almost 70 trillion of those passwords you cannot use are seven characters long. This is a great and wholly intended effect of a password length constraint.
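That 70.6 trillion figure is easy to verify:

```python
# All passwords shorter than eight characters over a 95-character alphabet.
shorter = sum(95**n for n in range(1, 8))
print(f"{shorter:,}")  # 70,576,641,626,495 -- about 70.6 trillion

# Seven-character passwords make up almost all of them:
print(f"{95**7:,}")    # 69,833,729,609,375
```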

The problem with a lack of constraints is that people will use a very small set of all possible passwords, which invariably includes passwords that are incredibly easy to guess. In an analysis of over one million leaked passwords, it was found that 30.8 percent of passwords eight to 11 characters long contained only lowercase letters, and 43.9 percent contained only lowercase letters and numbers. In fact, a primitive brute force attack against an eight-character password containing only lowercase letters needs to try only about 209 billion character combinations. That does not take a computer very long to crack. And, as we know from analyzing large numbers of passwords, it’s likely to be one of the ten thousand most popular passwords.

To beef up security, we begin to add character constraints. But, in doing so, we decrease the number of possible passwords; both good and bad.

Just by requiring both uppercase and lowercase letters, more than 15 percent of all possible 8-character combinations have been eliminated as possible passwords. This means that 1QV5#T&| cannot be a password because there are no lowercase letters. Compared to Darnrats, which meets the constraint requirements, 1QV5#T&| is a fantastic password. But you cannot use it. Superior passwords that cannot be used are acceptable collateral damage in the battle for better security. “Corndogs” is acceptable, but “fruit&veggies” is not. This clearly is not a battle for lower cholesterol.

As constraints pile up, possibilities shrink

If a password must be exactly eight characters long and contain at least one lower case letter, at least one uppercase letter and at least one symbol, we are getting close to one-in-five combinations of 8 characters that are not allowable as passwords. Still, the effect of constraints on 12 and 16 character passwords is negligible. But that is all about to change… you can count on it.

Are you required to use a password that is at least eight characters long, with lowercase and uppercase letters, numbers, and symbols? Just requiring a number to be part of a password removes over 40 percent of 8-character combinations from the pool of possible passwords. Even though you can use lowercase and uppercase letters, and you can use symbols, if one of the characters in your password must be a number then there are far fewer great passwords that you can use. If a 16-character password must have a number, then that one constraint makes 13 times more potential passwords illegal than the combined constraints of lowercase and uppercase letters and symbols do. More than one-in-four combinations of 12 characters can no longer become passwords either.
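Those percentages follow from one observation: a password with no digit draws from only 85 of the 95 characters. A quick sketch:

```python
# Fraction of length-n combinations eliminated by requiring a digit:
# strings with no digit draw from only 85 of the 95 characters.
def eliminated_by_digit_rule(length: int) -> float:
    return (85 / 95) ** length

for n in (8, 12, 16):
    print(n, round(eliminated_by_digit_rule(n), 3))
# 8  0.411  -> over 40% of 8-character combinations
# 12 0.263  -> more than one in four
# 16 0.169
```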

You might have noticed that there is little effect on the longer passwords. Frequently there is also very little value in imposing constraints on long passwords. This is because each additional character in a password grows the pool of passwords exponentially. There are 6.5 million times as many combinations of 16-character passwords using only lowercase letters as there are of eight-character passwords using all four character sets. That means that “toodlesmypoodles” is going to be a whole lot harder to crack than “I81B@gle”.

Long and simple is better than short and hard

People tend to be very predictable. There are more symbols (33) than there are in any other character set. Theoretically that means symbols should do the most to make a password strong, but 80 percent of the time the symbol chosen will be one of the five most frequently used symbols, and 95 percent of the time it will be one of the ten most frequently used symbols.

Analysis of two million compromised passwords showed that about one in 14 passwords starts with the number one; of those, 75 percent ended with a number as well.

The use of birthdays and names, for example, make it much easier to quickly crack many passwords.

Password strength: It’s length, not complexity that matters

As covered above, all four character sets (95 characters) in an eight-character password allow for about 6.634 quadrillion different password possibilities. But a 16-character password with only lowercase letters has about 43.6 sextillion possible passwords. That means that there are well over 6.5 million times more possible passwords for 16 consecutive lowercase letters than for any combination of eight characters, regardless of how complex the password is.
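A quick check of those two password spaces:

```python
# Comparing the two spaces discussed above.
complex_8 = 95 ** 8    # all four character sets, length 8
simple_16 = 26 ** 16   # lowercase only, length 16

print(f"{complex_8:,}")        # 6,634,204,312,890,625 (~6.6 quadrillion)
print(f"{simple_16:.3e}")      # ~4.361e+22 (about 43.6 sextillion)
print(simple_16 // complex_8)  # ~6.5 million times more possibilities
```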

My great password is “cats and hippos are friends!”, but I can’t use it because of constraints – and because I just told you what it is.

For years password experts have been advocating for the use of simple passphrases over complex passwords, because they are stronger and simpler to remember. I’d like to throw a bit of gasoline onto the fire and tell you that those 95^8 combinations of characters shrink to only about half that many when you tell me I have to use uppercase, lowercase, numbers, and symbols.

———————————————————————————————————————————-