Spectre, Meltdown, & the CLIMB Exploit: A Primer on Vulnerabilities, Exploits, & Payloads
In light of the publicity, panic, and lingering despair around Spectre and Meltdown, I thought this might be a good time to clear up the differences between vulnerabilities, exploits, and malware. Neither Spectre nor Meltdown is an exploit or malware; they are vulnerabilities. Vulnerabilities don’t hurt people; exploits and malware do. To understand this distinction, witness the CLIMB exploit:
The Vulnerability
Frequently, when a vulnerability is exploited, the payload is malware. But the payload can be benign, or there may be no payload delivered at all. I once discovered a windows vulnerability, exploited it, and was then able to deliver a payload. Here’s how that story goes:
It’s kind of embarrassing to admit, but one evening my wife and I went out to dinner, and upon returning, realized we had a problem. It wasn’t food poisoning. We were locked out of our house. The solution was to find a vulnerability, exploit it, and get into the house. The vulnerability I found was an insecure window on the ground floor.
With care, I was able to push the window inward and sideways to open it. From the outside, I was able to bypass the clasp that should have held the window closed. Of course, the window had been vulnerable for years, but nothing bad came of it. As long as nobody used (exploited) the vulnerability to gain unauthorized access to my home, there was no harm done. The vulnerability itself was not stealing things from my home. It was just there, inert. It’s not the vulnerability itself that hurts you; it’s the payload. Granted, the vulnerability is the enabler.
Nobody attacked me, and while the potential for attack was present, an attack (exploit) is not a vulnerability. The same is true of vulnerabilities in software. Opening the window is where the exploit comes in.
The Exploit
My actual exploit occurred in two stages. First, there was proof of concept (POC). After multiple attempts, I was able to prove that the vulnerable window could be opened, even when a security device was present. Next, I needed to execute the Covert Lift Intrusion Motivated Breach (CLIMB) exploit. Yeah, that means I climbed into the open window, a neat little exploit with no coding required. I suppose I could have broken the window, but I really didn’t want to brick my own house (another vulnerability?).
The Payload
Now we come to the payload. In this case, the payload was opening the door for my wife. You see, not all payloads are malicious. If a burglar had used the CLIMB exploit, they could have delivered a much more harmful payload. They could have washed the dishes (they wouldn’t, unless they were Sheldon Cooper), they could have stolen electronic items, or they could have planted incriminating evidence. The roof is the limit.
Not all vulnerabilities are equally easy to exploit. All of my second-floor windows had the same vulnerability, but exploiting them would have been more difficult. I’m sure happy that I found the vulnerability before a criminal did. Because I was forgetful that fateful night, I’m also happy the vulnerability was there when I found it. As I said, I really didn’t want to break my own window. By the way, I “patched” my window vulnerability by placing a wooden dowel between the window and the wall.
There you have it. Vulnerabilities, exploits, and payloads explained through the lens of the classic CLIMB exploit.
This article was provided by our service partner : Webroot
Veeam – Linux VM: A place to back up MySQL
What does it take to back up MySQL on a Linux VM? It’s a riddle we hear at Veeam from time to time: when running on a Linux VM, how does one quiesce MySQL databases? Fortunately, the answers are tried and tested.
The answers can be found in our popular white paper, Consistent protection of MySQL/MariaDB with Veeam, written by Solutions Architect Pascal Di Marco. The paper is available for download on our website and describes three methods for backing up MySQL/MariaDB on a Linux VM: two hot backup methods driven by pre- and post-snapshot scripts, and a cold backup method that shuts the database down. None of this is as straightforward as Microsoft SQL Server quiescence, because Linux has no VSS mechanism like Windows does.
All three methods use scripts local to the VM, triggered through VMware Tools: VMware can run a script before the snapshot is created (known as the pre-freeze script) and another after the snapshot is created (known as the post-thaw script).
Here’s a quick summary:
Option 1: Hot backup — Database online dump
The mysqldump command copies a database to storage accessible from the MySQL server, taking an online dump of each database without disrupting the MySQL service. This method gives you a transaction-consistent backup of the databases, but restoring requires extra steps. Note that the pre-freeze script will only run if VMware Tools are running in the guest.
Advantage: This allows for 100% uptime; the MySQL service does not stop and the dumped databases are in a transaction-consistent state.
Disadvantage: Depending on the size of your databases, the dump may take a considerable amount of time, and the second copy of the database requires extra storage space to maintain.
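To make Option 1 concrete, here’s a minimal pre-freeze sketch in Python. The white paper’s actual scripts are shell scripts; the dump directory and credentials file below are assumptions to adapt for your environment.

```python
#!/usr/bin/env python3
# Pre-freeze sketch for Option 1: dump every database before the VM snapshot.
# Hypothetical paths and credentials file; adapt to your environment.
import datetime
import subprocess

DUMP_DIR = "/backup/mysql-dumps"          # assumed dump location on a captured disk
DEFAULTS_FILE = "/etc/mysql/backup.cnf"   # [client] section holds user/password

dump_path = f"{DUMP_DIR}/all-databases-{datetime.date.today().isoformat()}.sql"

with open(dump_path, "w") as out:
    # --single-transaction yields a consistent dump of InnoDB tables
    # without locking writers while the dump runs.
    subprocess.run(
        ["mysqldump", f"--defaults-extra-file={DEFAULTS_FILE}",
         "--all-databases", "--single-transaction"],
        stdout=out,
        check=True,  # a non-zero exit fails the pre-freeze step and aborts the snapshot
    )
```

Writing the dump to a disk the backup job captures means the .sql file rides along inside the image backup itself.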
Option 2: Hot backup — Database freezing
In this method, the pre-freeze script flushes the tables and takes a global read lock, freezing the databases for the few moments it takes to create the snapshot; the post-thaw script, which will not run until the snapshot is created, releases the lock. Both scripts will only be able to run if you have VMware Tools running in your MySQL server.
Advantage: This is quick and simple, allowing you to take a transaction-consistent backup of all databases with no additional disk usage local to the MySQL server.
Disadvantage: Databases running on the MySQL server will briefly be unavailable for writes, so applications that need 100% uptime may not find this suitable.
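Here’s a sketch of the pre-freeze side of this method, again with assumed paths. The key subtlety is that FLUSH TABLES WITH READ LOCK only holds while the session that issued it stays open, so the script leaves a helper process running for the post-thaw script to terminate:

```python
#!/usr/bin/env python3
# Pre-freeze sketch for Option 2: freeze MySQL for the duration of the snapshot.
# FLUSH TABLES WITH READ LOCK releases as soon as the issuing session closes,
# so a helper process must outlive this script; the post-thaw script reads the
# PID file and kills the helper to release the lock. Paths are assumptions.
import subprocess

DEFAULTS_FILE = "/etc/mysql/backup.cnf"
PID_FILE = "/var/run/mysql-freeze.pid"

# SLEEP keeps the session (and therefore the lock) alive while the snapshot
# is taken; 60 seconds acts as a safety timeout if post-thaw never runs.
helper = subprocess.Popen(
    ["mysql", f"--defaults-extra-file={DEFAULTS_FILE}",
     "-e", "FLUSH TABLES WITH READ LOCK; SELECT SLEEP(60);"]
)

with open(PID_FILE, "w") as f:
    f.write(str(helper.pid))
# Post-thaw counterpart: read PID_FILE and os.kill(pid, signal.SIGTERM).
```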
Option 3: Cold Backup — Database shutdown
In this method, the database service is stopped during snapshot creation and restarted once the VM snapshot has been created. It requires permission to start and stop the service, but does not require MySQL user permissions. If your scripts do need to authenticate against MySQL, you can either use the MySQL default configuration file or hardcode the username and password in the script.
Advantage: This is easy to set up and doesn’t take extra space. It provides a short RTO, since no further action is required aside from booting the restored guest.
Disadvantage: The databases will be totally unavailable while the guest snapshot is created.
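A single script can serve as both hooks for this option. The sketch below assumes systemd and a service unit named mysqld; neither is guaranteed on your distribution:

```python
#!/usr/bin/env python3
# Cold-backup sketch for Option 3: one script serves both snapshot hooks.
# Call with "stop" from pre-freeze and "start" from post-thaw.
import subprocess
import sys

if len(sys.argv) != 2 or sys.argv[1] not in ("stop", "start"):
    sys.exit(f"usage: {sys.argv[0]} stop|start")

# check=True makes a failed stop abort the snapshot instead of
# capturing a half-running database.
subprocess.run(["systemctl", sys.argv[1], "mysqld"], check=True)
```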
Recovery
Guest recovery: The cold backup and freeze methods leave the database consistent and able to start up without additional operations, so restoring the VM from the backup files is the only step to perform. The guest recovery may benefit from Veeam’s Instant VM Recovery feature, which lets you boot the guest directly from the Veeam Backup Repository in minutes.
Additional dump restoration: If you used the database dump method and the issue is not limited to a database outage (that is, the entire VM must be recovered from the Veeam backup file), there is one extra task: injecting the dump file into the database using file redirection.
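File redirection here just means feeding the dump back through the mysql client. A minimal sketch, with a made-up dump file name:

```python
#!/usr/bin/env python3
# Restore sketch: feed a mysqldump file back into the server, the scripted
# equivalent of `mysql < dump.sql`. The dump file name is hypothetical.
import subprocess

DEFAULTS_FILE = "/etc/mysql/backup.cnf"
DUMP_FILE = "/backup/mysql-dumps/all-databases-2018-03-14.sql"

with open(DUMP_FILE) as dump:
    subprocess.run(
        ["mysql", f"--defaults-extra-file={DEFAULTS_FILE}"],
        stdin=dump,
        check=True,
    )
```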
Veeam U-AIR database restoration: Whether it is a granular or a full database restoration, the Veeam U-AIR wizard can be used in conjunction with any relevant database management tool, such as MySQL Workbench, to recover a database item.
Microsoft Releases More Patches for Meltdown & Spectre
Microsoft informed users on Tuesday that it released additional patches for the CPU vulnerabilities known as Meltdown and Spectre, and removed antivirus compatibility checks in Windows 10.
Meltdown and Spectre allow malicious applications to bypass memory isolation and access sensitive data. Meltdown attacks are possible due to CVE-2017-5754, while Spectre attacks are possible due to CVE-2017-5753 (Variant 1) and CVE-2017-5715 (Variant 2). Meltdown and Spectre Variant 1 can be resolved with software updates, but Spectre Variant 2 requires microcode patches.
In addition to software mitigations, Microsoft recently started providing microcode patches as well. It initially delivered Intel’s microcode updates to devices running Windows 10 Fall Creators Update and Windows Server 2016 (1709) with Skylake processors.
Now that Intel has developed and tested patches for many of its products, Microsoft has also expanded the list of processors covered by its Windows 10 and Windows Server 2016 updates. Devices with Skylake, Coffee Lake and Kaby Lake CPUs can now receive the microcode updates from Intel via the Microsoft Update Catalog.
Microsoft also informed customers on Tuesday that software patches for the Meltdown vulnerability are now available for x86 editions of Windows 7 and Windows 8.1.
The company has also decided to remove the antivirus compatibility checks in Windows 10. The decision to introduce these checks came after the tech giant noticed that some security products had created compatibility issues with the Meltdown patches. This resulted in users not receiving security updates unless their AV vendor made some changes.
Microsoft has determined that this is no longer an issue on Windows 10 so the checks have been removed. On other versions of the operating system, users will still not receive updates if their antivirus is incompatible.
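For reference, the compatibility check hinged on a registry flag that AV vendors set once their products were verified compatible. A quick Python sketch to see whether it’s present on an older machine, with the value name as Microsoft documented it at the time:

```python
# Sketch: check for the antivirus compatibility flag (Windows-only code).
# Microsoft asked AV vendors to set this registry value to signal that their
# product was safe alongside the Meltdown/Spectre patches.
import winreg

KEY_PATH = r"SOFTWARE\Microsoft\Windows\CurrentVersion\QualityCompat"
VALUE_NAME = "cadca5fe-87d3-4b96-b7fb-a231484277cc"

try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        value, _ = winreg.QueryValueEx(key, VALUE_NAME)
        print(f"Compatibility flag present (value {value}); updates should flow.")
except FileNotFoundError:
    print("Flag not set; on pre-Windows 10 systems, Windows Update may withhold the patches.")
```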
Microsoft’s Patch Tuesday updates for March 2018 fix over 70 flaws, including more than a dozen critical bugs affecting the company’s Edge and Internet Explorer web browsers.
Getting started with Veeam Explorer for Microsoft SQL Server
Believe it or not, I used to work a lot with Microsoft SQL Server. While I did not call myself a database administrator (DBA), I did know my way around a database or two. Since I’ve been at Veeam, I have always enjoyed telling the Veeam story around using SQL Server as a critical application that needs the best Availability options.
That’s why I took particular interest in Veeam Explorer for Microsoft SQL Server, which comes with Veeam Backup & Replication. Veeam Explorer for Microsoft SQL Server allows application-specific restores of SQL databases, as well as the contents of tables and objects such as stored procedures and views. You can also restore a database to a specific transaction.
This pairs the established application-aware image processing with a dedicated tool for database restores in Veeam Explorer for Microsoft SQL Server. Additionally, Veeam Backup & Replication and the Veeam Agent for Microsoft Windows provide an image backup of the entire system.
For those who are not DBAs, dealing with low-level SQL Server topics can be a bit overwhelming. To help with this, I created a few scripts to let individuals practice this type of interaction with SQL Server and put them up on the Veeam GitHub site. To use the scripts, only an S:\ drive is needed (the path can be changed); they create a sample database called SQLGREENDREAM and a SQL Server Agent job that automatically runs a few stored procedures to insert and delete random data.
Run the three scripts in order: create the database, implement the random number function, and set the schedule that creates the random data (2 records) and deletes 1 record. After the next incremental backup, the SQL Server transaction log backup will show the new database being backed up.
Once the SQL Server Agent job interval (12 minutes in the GitHub script) has elapsed and a Veeam backup or backup copy interval has passed over that window, the most granular restore option in Veeam Explorer for Microsoft SQL Server, restoring to a specific transaction, becomes available for the SQLGREENDREAM database. There you can see the records behaving just as scripted: two records added, then one record deleted, all by the SQL Server Agent job.
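If you’d like a feel for what such a data-churn job does, here is a rough Python stand-in. To be clear, this is not the Veeam GitHub script; the table name and connection string are invented for illustration, and it assumes pyodbc plus an existing SQLGREENDREAM database:

```python
# Rough stand-in for the scheduled data churn: two inserts, then one delete,
# so every backup interval contains visible transactions to restore against.
import random
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=SQLGREENDREAM;Trusted_Connection=yes"
)
cur = conn.cursor()
cur.execute("""IF OBJECT_ID('dbo.Churn') IS NULL
               CREATE TABLE dbo.Churn (Id INT IDENTITY PRIMARY KEY, Val INT)""")

for _ in range(2):  # two records added...
    cur.execute("INSERT INTO dbo.Churn (Val) VALUES (?)",
                (random.randint(1, 1_000_000),))
cur.execute("DELETE FROM dbo.Churn WHERE Id = (SELECT MIN(Id) FROM dbo.Churn)")  # ...one deleted
conn.commit()
```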
From there, you can practice restores with confidence and see exactly how SQL databases come back with Veeam. With the sample scripts in the GitHub repository, you can become more comfortable with these restore situations before venturing out of your normal comfort zone. If you are using Veeam Backup Free Edition and the SQL Server is a VM being backed up, you can still use Veeam Explorer for Microsoft SQL Server to restore the database to the time of the image-based backup, just without transaction rollback. You can also use the NFR program for a fully functional installation.
This article was reposted from Veeam.com
Best practices from Veeam support on using tape
When speaking about backup storage strategies, Veeam’s recommendation is to keep the initial backup chain short (7–14 restore points) and on general-purpose disk, which lets you recover data in the shortest amount of time. Long-term retention should come from secondary and tertiary storage, which typically boasts a much lower storage cost per TB; the trade-off is that restoring from such storage can take much longer.
Additionally, Veeam’s tape support now includes putting vSphere, Hyper-V, and Veeam Agent for Microsoft Windows and Linux backups on tape.
One of the most popular choices for backup archival is tape. It is cheap, reliable, and offers protection against crypto viruses and hacker attacks. Better still, tape is offline whenever it is not in a loader.
With Veeam, IT administrators can use flexible options to create copies of backups and store them on a different media, following the 3-2-1 Rule for the backup and disaster recovery. This blog post provides advice and considerations that will help you create a robust tape archival infrastructure.
How to deploy a tape library and use it with Veeam
When planning and implementing your deployment project, follow the recommendations below:
You will need a tape server, which performs most of the data transfer tasks during archiving to tape; make sure it meets Veeam’s documented prerequisites.
Also consider using GFS (grandfather-father-son) media pools with Veeam’s tape support. This feature makes longer-term retention easy to configure for tape backups.
If you plan to perform file-to-tape archiving for a large number of files (more than 500,000 per job), consider using a commercial edition of SQL Server for the Veeam configuration database. The configuration database stores information about every file backed up by Veeam Backup & Replication, and using SQL Server Express (with its 10 GB limit on database size) may lead to significant performance degradation. If the database size reaches 10 GB, all Veeam operations will stop.
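If you’re unsure how close your configuration database is to that cap, a quick query tells you. A sketch assuming the default database name, VeeamBackup, on a local instance:

```python
# Sketch: see how close the Veeam configuration database is to the Express cap.
# Assumes the default database name (VeeamBackup); requires `pip install pyodbc`.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=master;Trusted_Connection=yes"
)
row = conn.cursor().execute(
    # size is counted in 8 KB pages; type = 0 limits the sum to data files,
    # which are what the Express edition's 10 GB cap applies to.
    """SELECT CAST(SUM(size) * 8.0 / 1024 / 1024 AS DECIMAL(6, 2))
       FROM sys.master_files
       WHERE database_id = DB_ID('VeeamBackup') AND type = 0"""
).fetchone()

print(f"VeeamBackup data files: {row[0]} GB of the 10 GB Express limit")
```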
To load tapes into the library or remove them, use the import-export slots. If you need to perform these operations manually, remember to stop tape jobs, stop the tape server, perform the manual operation, start the server, rescan or run inventory for the library (so it recognizes the loaded tapes), and then restart the tape job.
What to consider before starting the upgrade
If you are upgrading your Veeam deployment, then you should first upgrade the Veeam backup server.
The tape server will be upgraded after that, using the automated steps of the Upgrade wizard that opens after the first launch of the Veeam Backup & Replication console. However, you can choose to upgrade it manually by starting the Upgrade wizard at any time from the main Veeam Backup & Replication menu.
If you are upgrading your tape library as well, that brings considerations of its own.
What to consider when planning for tape jobs
Before you start configuring Veeam jobs for tape archiving, estimate how many tapes your jobs and retention settings will consume.
Once you have that estimate, it is recommended that you double it when planning for resources.
Conclusion
In this blog post, we’ve talked mainly about tape infrastructure. We recognize that the learning curve for setting up tape jobs can be quite steep, so instead of explaining every concept, we chose a different approach: a set of settings paired with the well-defined results they achieve. You can use them as they are or as a basis for your own setup. Check out Veeam’s Secondary Copy Best Practices guide for more details.
This article was provided by Veeam.com
Disaster Recovery Planning
It seems like almost every day the news reports another major company outage, along with the massive operational, financial, and reputational consequences that follow, both short- and long-term. Widespread systemic outages first come to mind when considering disasters and threats to business and IT service continuity. But oftentimes it’s the overlooked, “smaller” threats that regularly occur: human error, equipment failure, power outages, malicious threats, and data corruption can each bring an entire organization to a complete standstill.
It’s hard to imagine that the organizations suffering the effects of an outage don’t have a disaster recovery plan in place; they most certainly do. So why do we hear of and experience failure so often?
Challenges with disaster recovery planning
Documenting
At the heart of any successful disaster recovery plan is comprehensive, up-to-date documentation. But with digital transformation placing more reliance on IT, environments are growing larger and more complex, with constant configuration changes. Manually capturing and documenting every facet of IT critical to business continuity is neither efficient nor scalable, which brings us to our first downfall.
Testing
Frequent, full-scale testing is also critical to the success of a thorough disaster recovery plan, again considering the aforementioned scale and complexity of modern environments, especially those spanning multiple sites. Add in the resources required and the potential end-user impact of regular testing, and the disaster recovery plan’s viability often goes unverified.
Executing
The likelihood of a successful failover, planned or unplanned, is slim if the disaster recovery plan remains underinvested in, quickly becoming out-of-date and untested. Mismatched dependencies, uncaptured changes, improper processes, unverified services and applications, and incorrect startup sequences are among the many difficulties when committing to a failover, whether for a single application or the entire data center.
Compliance
While it is the effects of an IT outage that first come to mind when considering disaster recovery, one aspect tends to be overlooked — compliance.
Disaster recovery has massive compliance implications — laws, regulations and standards set in place to ensure an organization’s responsibility to the reliability, integrity and Availability of its data. While what constitutes compliance varies from industry to industry, one thing holds true — non-compliance is not an option and brings with it significant financial and reputational risks.
Meltdown & Spectre: Where Are We at Now?
Meltdown and Spectre continue to dominate security news, and the more we delve into them, the better we understand the depth and breadth of what they mean for the future of the security landscape.
It turns out the three side-channel attack variants, one for Meltdown and two for Spectre, were discovered back in June of last year [2017] by researchers. The attacks abuse speculative execution, a technique in which processors execute code ahead of time and store the speculative results in cache to optimize and improve the performance of a device. What is important to note with Spectre is that it puts users at risk of information disclosure by exposing a weakness in the architecture of most processors on the market, and the breadth is vast: Intel, AMD, ARM, IBM (Power, mainframe Z series), and Fujitsu/Oracle SPARC implementations across PCs, physical and virtual servers, smartphones, tablets, networking equipment, and possibly IoT devices.
Currently there are no reported exploits in the wild.
Of the two, Meltdown is the easier to mitigate, requiring only operating system updates; AMD processors are not affected by it. Spectre is more complex to resolve because it is a new class of attack, and Intel, ARM, and AMD processors are all affected. Both Spectre variants can potentially do harm, such as stealing logins and other user data residing on the affected device. Recently, Microsoft released an emergency out-of-band update to disable Intel’s microcode fix for Spectre variant 2, after numerous reports that the original update caused reboots and general instability.
Things are still evolving around Spectre, and while operating system and browser updates are helping to patch it, some sources report that a true fix may require an update to the hardware (the processor) itself.
To clarify each vulnerability (summarizing a chart from SANS/Rendition Infosec):
Variant 1 (Spectre), CVE-2017-5753, bounds check bypass: mitigated with software updates to operating systems, browsers, and applications.
Variant 2 (Spectre), CVE-2017-5715, branch target injection: requires CPU microcode updates alongside operating system support.
Variant 3 (Meltdown), CVE-2017-5754, rogue data cache load: mitigated with operating system updates; AMD processors are not affected.
It will be important over the next few weeks to stay on top of any breaking news around Meltdown and Spectre. Mitigation efforts should be underway in your IT organization to prevent a future zero-day attack.
This article was provided by our service partner : Connectwise
Security : 3 Pitfalls Facing Privacy in 2018
Earlier this month, CES attendees got a taste of the future with dazzling displays of toy robots, smart assistants, and various AI/VR/8K gadgetry. But amid all the remarkable tech innovations on the horizon, one thing is left off the menu: user privacy. As we anticipate the rocky road ahead, there are three major pitfalls that have privacy experts concerned.
Bio hazard
Biometric authentication—using traits like fingerprints, iris, and voice to unlock devices—will prove to be a significant threat to user privacy in 2018 and beyond. From a user’s perspective, this technology streamlines the authentication process. Convenience, after all, is the primary commodity exchanged for privacy.
Mainstream consumer adoption of biometric tech has grown leaps and bounds recently, with features such as fingerprint readers becoming a mainstay on modern smartphones. Last fall, Apple revealed its Face ID technology, causing some alarm among privacy experts. A key risk in biometric authentication lies in its potential as a single method for accessing multiple devices or facilities. You can’t change your fingerprints, after all. Biometric access is essentially akin to using the same password across multiple accounts.
“Imagine a scenario where an attacker gains access to a database containing biometric data,” said Webroot Sr. Advanced Threat Research Analyst Eric Klonowski. “That attacker can then potentially replay the attack against a variety of other authenticators.”
That’s not to say that biometrics are dead on arrival. Privacy enthusiasts can find solace in using biometrics in situations such as a two-factor authentication supplement. And forward-thinking efforts within the tech industry, such as partnerships forged by the FIDO Alliance, can help cement authentication standards that truly protect users. For the foreseeable future, however, this new tech has the potential to introduce privacy risks, particularly when it comes to safely storing biometric data.
Big data, big breaches
2017 was kind of a big year for data breaches. Equifax, of course, reigned supreme, exposing the personal information (including Social Security numbers) of some 140 million people in a spectacular display of sheer incompetence. The Equifax breach was so massive that it overshadowed other big-data breaches from the likes of Whole Foods, Uber, and the Republican National Committee.
It seems no one—including the government agencies we trust to guard against the most dangerous online threats—was spared the wrath of serious data leaks. Unfortunately, there is no easy remedy in sight, and the ongoing global invasion of user privacy is forcing new regulatory oversight, such as the upcoming GDPR to protect EU citizens. The accelerated growth of technology, while connecting our world in ways never thought possible, has also completely upended traditional notions surrounding privacy.
The months ahead raise the question: What magnitude of breach will it take to trigger a sea change in our collective expectation of privacy?
Talent vacuum
The third big issue that will continue to impact privacy across the board is the current lack of young talent in the cybersecurity industry. This shortfall is a real and present danger. According to a report by Frost & Sullivan, the information security workforce will face a worldwide talent shortage of 1.5 million by 2020.
Part of this shortfall can be blamed on HR teams that fail to fully understand what to look for when assessing job candidates. The reality is that the field as a whole is still relatively new and constantly evolving. Cybersecurity leaders looking to build out diverse teams are wise to search beyond the traditional computer science background. Webroot Vice President and CISO Gary Hayslip explained that a computer science degree is not something on his radar when recruiting top talent for his teams.
“In cyber today, it’s about having the drive to continually educate yourself on the field, technologies, threats and innovations,” said Hayslip. “It’s about being able to work in teams, manage the resources given to you, and think proactively to protect your organization and reduce the risk exposure to business operations.”
Beyond shoring up recruiting practices for information security roles, organizations of all types should consider other tactics, such as providing continual education opportunities, advocating in local and online communities, and inevitably replacing some of that human talent with automation.
This article was provided by our service partner : webroot.com
Internet Security : How to Avoid Phishing on Social Media
From Facebook to LinkedIn, social media is flat-out rife with phishing attacks. You’ve probably encountered one before… Do fake Oakley sunglasses sales ring a bell?
Phishing attacks attempt to steal your most private information, posing major risks to your online safety. It’s more pressing than ever to have a trained eye to spot and avoid even the most cunning phishing attacks on social media.
Troubled waters
Spammers on social media are masters of their craft, and their tactics are demonstrably more effective than those of their email-based counterparts. According to a report by ZeroFOX, up to 66 percent of spear phishing attacks on social media sites are opened by their targets, compared to a roughly 30 percent success rate for spear phishing emails, based on findings by Verizon.
Facebook has warned of cybercriminals targeting personal accounts in order to steal information that can be used to launch more effective spear phishing attacks. The social network is taking steps to protect users’ accounts from hostile data collection, including more customizable security and privacy features such as two-factor authentication, and has been actively encouraging users to adopt these enhanced security features through in-app messages.
Types of social phishing attacks
Fake customer support accounts
The rise of social media has changed the way customers seek support from brands, with many people turning to Twitter or Facebook over traditional customer support channels. Scammers are taking advantage of this by impersonating the support accounts of major brands such as Amazon, PayPal, and Samsung. This tactic, dubbed ‘angler phishing’ for its deepened deception, is rather prevalent. A 2016 study by Proofpoint found that 19% of social media accounts appearing to represent top brands were fake.
To avoid angler phishing, watch out for slight misspellings or variations in account handles. For example, the Twitter handle @Amazon_Help might be used to impersonate the real support account @AmazonHelp. Also, the blue checkmark badges next to account names on Twitter, Facebook, and Instagram let you know those accounts are verified as being authentic.
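That advice is mechanical enough to automate. A toy Python sketch of the idea, using made-up handles:

```python
# Toy sketch: flag handles that look like, but aren't, the official account.
# The candidate handles below are invented for illustration.
import difflib

OFFICIAL = "AmazonHelp"
candidates = ["AmazonHelp", "Amazon_Help", "AmazonHeIp"]  # last swaps l for capital I

for handle in candidates:
    ratio = difflib.SequenceMatcher(None, handle.lower(), OFFICIAL.lower()).ratio()
    if handle != OFFICIAL and ratio > 0.8:
        print(f"@{handle} is suspiciously close to @{OFFICIAL} (similarity {ratio:.2f})")
```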
Spambot comments
Trending content such as Facebook Live streams is often plagued with spammy comments from accounts that are typically part of an intricate botnet. These spam comments contain URLs linking to phishing sites that try to trick you into entering personal information, such as the username and password to an online account.
It is best to avoid clicking any links on social media from accounts you are unfamiliar with or otherwise can’t trust. You can also take advantage of security software features such as real-time anti-phishing protection to automatically block fake sites if you accidentally visit them.
Dangerous DMs
Yes, phishing happens within direct messages, too. It is often seen from the accounts of friends or family members that have been compromised: hacked social media accounts can be used to send phishing links through direct messages, exploiting trust and familiarity to fool you into visiting malicious websites or downloading file attachments.
For example, a friend’s compromised Twitter account might send you a direct message with a fake link to connect with them on LinkedIn. This link could lead to a phishing site built to capture your LinkedIn login.
Such a site may look exactly like the real LinkedIn sign-on page, but the URL in the browser address bar gives it away as a fake.
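Checking the address bar is also something you can do programmatically. A rough Python heuristic, with an invented lookalike URL; real anti-phishing tools do far more than this:

```python
# Rough heuristic: does a link's registered domain match the brand it claims?
from urllib.parse import urlparse

EXPECTED = "linkedin.com"
url = "http://linkedin.com.secure-login.example.net/uas/login"

host = urlparse(url).hostname or ""
# Take the last two labels as the registered domain; a lookalike such as
# "linkedin.com.secure-login.example.net" fails this check. (Two-label TLDs
# like .co.uk would need a proper public-suffix list.)
registered = ".".join(host.split(".")[-2:])
print("Looks legitimate" if registered == EXPECTED
      else f"Warning: this actually belongs to {registered}")
```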
Phony promotions & contests
Fraudsters are also known to impersonate brands on social media in order to advertise nonexistent promotions. Oftentimes, these phishing attacks will coerce victims into giving up their private information in order to redeem some type of discount or enter a contest. Know the common signs of these scams such as low follower counts, poor grammar and spelling, or a form asking you to give up personal information or make a purchase.
The best way to make sure you are interacting with a brand’s official page on social media is to navigate to their social pages directly from the company’s website. This way you can verify the account is legitimate and you can follow the page from there.
Internet Security : Why is ransomware still so successful?
There’s no end to ransomware in sight. It’s a simple enough attack: install malware, encrypt data or systems, and ask for the ransom. So why aren’t we stopping it? Security vendors are keenly aware of the issue, as well as the attack vectors and methods, but can’t seem to stay a step ahead, allowing ransomware to grow from $1 billion in damages in 2016 to an estimated $5 billion in 2017. There are two basic reasons ransomware continues to be a “success” for cyber criminals.
Reason 1: Malware authors are getting better at their craft
Just when we think we’re getting on top of the ransomware problem, our adversaries alter their tactics or produce new techniques to replicate and cause damage and misery. We’ve recently seen ransomware like WannaCry take advantage of unpatched vulnerabilities in the Windows SMB service to propagate around networks, especially those with SMB open to the internet, a trick reminiscent of earlier self-propagating Windows worms such as Sasser. We’ve also seen malware writers develop new techniques for installing malicious code onto computers via Microsoft Office. While the threat posed by malicious macros in Office documents has existed for a number of years, we’re now seeing a Microsoft protocol called Dynamic Data Exchange (DDE) used to run malicious code. Unlike macro-based attacks, a DDE attack gives the user no pop-up, prompt, or warning, so exploitation is far more effective and successful.
The technological advances made by malware authors are significant, but their soft skills, like social engineering, keep getting better too. Improved writing, more realistic email presentation, and solid social engineering tactics all contribute to their rising success rate.
And if you’re good at what you do, why not make it a service and profit from those who share your interests but lack your skills? Thus “crime-as-a-service” and “malware-as-a-service” now exist, further perpetuating the ransomware problem. The availability and ease of use of these platforms mean anyone can turn to cybercrime and ransomware with little or no coding or malware experience. These platforms and networks are run by organized cybercrime gangs for vast profits, so we won’t see them going away any time soon.
Reason 2: We’re causing our own problems
Of course, there’s still one large problem many of us have not dealt with yet: the weaknesses we ourselves create that become the entry point for cybercriminals. WannaCry was so successful because it leveraged an unpatched Windows vulnerability, and NotPetya did the same. So what are the weaknesses?
Doing something about the ransomware problem
What should you do to stop ransomware being so successful? Hide? Run away? Unplug the internet? None of those is likely to solve the problem, although out of sight and all that. The idea worth borrowing is many thin layers of defense: ‘defense in depth’ might seem a little old school, having seemingly gone extinct when we lost control of the network perimeter, but its core ideas still apply.
Get those layers right, and you’ll be a lot closer to stopping the rain of ransomware from ruining your day, night, or weekend.
This article was provided by our service partner : Veeam.com