802.11AC Wave2

When 802.11AC was introduced, we found it to be the greatest thing since sliced bread.  It was dramatically faster than 802.11N, backwards compatible, and mostly interference free.  Today, we will discuss what 802.11AC-Wave2 brings to the table and how we feel about it.

802.11AC-Wave2 – Features and Enhancements

  • Supports speeds up to 2.34 Gbps (more spatial streams) 

Pros:  Compared to 802.11AC-Wave1, which has a capacity of 1.3 Gbps, the new standard has much higher throughput capacity.  This is due to an increase in spatial streams: with 802.11AC-Wave2, we increase from 3 spatial streams to 4, which translates to a 33% increase in throughput.

Cons:  As with 802.11AC-Wave1, most client devices implement only one or two spatial streams in order to save on power and the space required for additional antennas.  Additionally, upgrades to the current network switching infrastructure are required to fully take advantage of the ~2 Gbps throughput.  A high signal-to-noise ratio and line of sight are usually also required.
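As a rough sanity check, the stream arithmetic works out as below (a back-of-envelope sketch; the figures are the nominal ones quoted above, not measured throughput):

```python
# Nominal 802.11ac per-stream arithmetic (headline figures, not real-world).
wave1_streams, wave1_capacity_gbps = 3, 1.3
wave2_streams = 4

per_stream = wave1_capacity_gbps / wave1_streams      # ~0.433 Gbps per stream
wave2_capacity = per_stream * wave2_streams           # ~1.73 Gbps at the same channel width
gain_pct = (wave2_streams / wave1_streams - 1) * 100  # the 33% quoted above

print(f"{wave2_capacity:.2f} Gbps (+{gain_pct:.0f}%)")
```

Note that the 2.34 Gbps headline figure also depends on channel width and modulation; the extra stream by itself accounts only for the 33% gain.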

  • Supports multiuser multiple input, multiple output (MU-MIMO)

Pros:  First, we must understand that with each additional spatial stream, we gain additional throughput.  Unfortunately, 802.11AC-Wave1 spatial streams are transmitted over multiple antennas to only ONE client at a time. Let’s imagine a freeway with multiple lanes, which can accommodate all sorts of cars, even the “wide load” ones that carry mobile homes and big construction equipment.  Now, imagine only 1 car can move at a time on this freeway.  Most cars (tablets, phones, etc.) only require the use of a single lane, yet they have all the lanes available to them.  As you can see, this isn’t very efficient because the other lanes essentially go to waste.  MU-MIMO fixes this: it allows each stream (lane) to be directed to a different one-stream client simultaneously, so potentially three clients get serviced in the time it previously took to service one.  Qualcomm claims a 2x-2.5x performance improvement.

Cons:  Unfortunately, this feature is only available in the downstream direction.  Using the car analogy, imagine the efficient freeway is only available northbound, not southbound.  This is also a fairly complex and new technology, and its stability in a real-world environment has not been verified yet.
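The freeway analogy reduces to simple airtime arithmetic. Here is an idealized sketch (no contention, sounding, or protocol overhead modeled) of serving three single-stream clients with and without MU-MIMO:

```python
# Idealized airtime comparison: 3-stream AP, three 1-stream clients,
# each needing one "unit" of data.
streams, clients = 3, 3

# SU-MIMO (Wave 1): clients served one at a time, each using a single
# stream, so two of the three streams sit idle during every transmission.
su_airtime = clients * 1.0

# MU-MIMO (Wave 2): one stream steered to each client simultaneously.
mu_airtime = 1.0

print(su_airtime / mu_airtime)  # 3.0x in the ideal case
```

Real-world overhead (channel sounding, imperfect beamforming) is why vendors claim 2x-2.5x rather than the ideal 3x.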

  • Offers the option of using 160-MHz-wide channels for greater performance

Pros:  Channel bonding is the single biggest performance multiplier, and it is the foundation for vendors’ claims of 1.3 Gbps speeds for Wave 1 and of 2.3 Gbps up to 6.7 Gbps for Wave 2.  To accomplish this, 802.11AC-Wave1 allows FOUR 20 MHz channels to be bonded into a single 80 MHz channel. 802.11AC-Wave2 builds on this to provide up to 160 MHz of bandwidth (either contiguous channels or a non-contiguous 80 + 80 configuration).

Cons:  With the availability of just a SINGLE contiguous 160 MHz channel, this capability is more useful in a point-to-point configuration than in a corporate wireless network.  Corporate networks require a dense configuration, which would cause performance-degrading co-channel interference.  In practice, this means that when a nearby cell is using the channel, the channel becomes busy for other nearby cells on the same channel. Additionally, “nearby” does not only mean neighboring cells; due to the nature of the Wi-Fi channel access method, CSMA/CA, cells at a 1-3 cell distance may also keep the channel reserved.
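The bonding arithmetic behind those headline numbers can be sketched as follows, assuming as a first approximation that the peak PHY rate scales linearly with channel width; the ~433 Mbps base is the nominal one-stream, 80 MHz rate:

```python
# Approximate 802.11ac peak PHY rates from channel width and spatial streams.
BASE_80MHZ_1SS_MBPS = 433  # nominal 1-stream rate on an 80 MHz channel

def peak_rate_mbps(width_mhz, streams):
    """First-order estimate: rate scales linearly with width and streams."""
    return BASE_80MHZ_1SS_MBPS * (width_mhz / 80) * streams

print(round(peak_rate_mbps(80, 3)))   # ~1300: the Wave 1 claim
print(round(peak_rate_mbps(160, 4)))  # ~3460: a 4-stream Wave 2 device
print(round(peak_rate_mbps(160, 8)))  # ~6930: the 6.7+ Gbps spec ceiling
```

The true scaling is slightly better than linear (wider channels waste fewer subcarriers on guard bands), but this first-order model reproduces the vendor figures closely enough for planning discussions.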

So, when all is said and done, what is NetCal’s stance on 802.11AC-Wave2?  To keep things simple again, we recommend upgrading to 802.11AC-Wave1 90% of the time to save money on client device, equipment, and infrastructure upgrade costs.  As you can guess from the above information, the improvements can only be seen in very limited circumstances (backhauling, standalone, close range, line of sight, mesh nodes, point-to-point/point-to-multipoint bridges).

The Rise in Crypto Ransomware

In recent years, we have seen significant growth in malware.  With enablers such as Bitcoin, RSA 2048-bit encryption, and the TOR network, NetCal predicts there will continue to be a significant rise in Crypto Ransomware.  The purpose of these malicious applications is morphing as we speak.  Originally, they were designed to gain access to computers and steal data (i.e., spying/snooping).  Then it was ad clicks from popups.  Now, malware has taken on the purpose of extorting money directly from the users themselves.  Although this shouldn’t be a surprise to anyone, the tools mentioned above make it a lot easier to achieve success.

Most Crypto Ransomware use the following tactics:

  1. Use social engineering to induce a user to run an application script
  2. Avoid detection
    1. Encrypting/encoding the payload (e.g., Base64)
    2. Using a domain generation algorithm (DGA)
    3. Using the Tor network
    4. Using Bitcoin and a money laundering network
  3. Use the Registry to reinfect after reboot
    1. 0x06 and 0x08 byte subkeys (hidden from regedit)
  4. Disable System Restore or VSS-type services
  5. Encrypt all user-created files by extension, shares, or folders
  6. Use an existing OR 0-day exploit/vulnerability
    1. Hijack CLSIDs
      For example, {AB8902B4-09CA-4BB6-B78D-A8F59079A8D5} causes any file in the LocalServer32 subkey to be run any time a folder is opened. By hijacking this CLSID, Poweliks is able to ensure that its registry entry will be launched any time a folder is opened or new thumbnails are created, even if the Watchdog process has been terminated.
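Of the evasion tactics above, the domain generation algorithm deserves a closer look. The sketch below is a toy illustration only (real malware families use their own seeds and algorithms): hashing a shared seed together with the current date yields a daily list of rendezvous domains that the malware and its operators can each compute independently, without any hardcoded addresses for defenders to blocklist.

```python
import hashlib
from datetime import date

def daily_domains(seed, day, count=3):
    """Toy DGA: derive a deterministic daily domain list from a shared seed."""
    domains = []
    for i in range(count):
        digest = hashlib.sha256(f"{seed}:{day.isoformat()}:{i}".encode()).hexdigest()
        domains.append(digest[:12] + ".example")  # .example is a reserved TLD
    return domains

# Malware and operator compute the same list without ever communicating.
print(daily_domains("toy-seed", date(2016, 3, 1)))
```

Defenders counter DGAs by reversing the algorithm and pre-registering or sinkholing the predicted domains, which is why these families rotate seeds and algorithms so frequently.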

10 Prevention Tips:

  1. Back up your data
  2. Patch and keep software up to date
  3. Run a reputable AV solution (Webroot, ESET, etc.)
  4. Train your users
  5. Filter executable attachments at the email gateway
  6. Disable files running from AppData/LocalAppData folders (Group Policy)
  7. Do not give users local admin privileges
  8. Limit end-user access to mapped drives
  9. Use a popup blocker
  10. Show hidden file extensions


Windows 10 – To Upgrade or Not to Upgrade

Since last year, we’ve been telling our clients to hold off on upgrading.  We even used Group Policy and our management agents to disable the upgrade patch.  It’s been a long and treacherous journey, but we finally believe Windows 10 is ready for prime time.  We’ve even seen it increase performance on some older machines.  We are now recommending that our clients upgrade to Windows 10 to take advantage of the free licensing and extended support for the OS.  With all the major bugs fixed, we’re confident you will find it stable and useful.  Applications are also compatible more often than not.  In fact, all of NetCal’s employees are now on Windows 10.  We did all the testing so our clients don’t have to worry.

Contact us so we can evaluate your environment.

 

Q: If I upgrade, can I use Windows 7/8/8.1 again?

A: You can always reinstall using existing media or downgrade using the built-in Windows 10 recovery process (this only works for 1 month after the upgrade).

Q: What if I don’t upgrade in time?  How much would a Windows 10 license cost then?

A: Although Microsoft has been rather vague thus far, the general consensus is that a license will cost $120 for Win10Home and $200 for Win10Pro.

Q: How would I upgrade after the expiration date?

A: For those who fail to upgrade in time or simply choose not to, Windows 10 can be purchased via the Microsoft Store or through retail partners.

Q: If I need to reinstall Windows 10, what key can I use?

A: All Windows 7 and Windows 8/8.1 keys will work with the latest Windows 10 installation media.

Q: If I upgrade, will I be charged a subscription service fee after that?

A: According to Microsoft, if you upgrade before July 29th, Windows 10 will continue to be free and supported for the rest of the life of the device.  This is similar to how your OEM Windows licenses work.


Save backup storage using Veeam Backup & Replication BitLooker

Introduction

When you need to back up large amounts of data, you want to use up as little disk space as possible in order to minimize backup storage costs. However, with host-based image-level backups, traditional technologies force you to back up the entire virtual machine (VM) image, which presents multiple challenges that were never problems for classic agent-based backups.

For example, during backup analysis using Veeam ONE, you might notice that some VM backups are larger than the actual disk space usage in the guest OS, resulting in higher-than-planned backup repository consumption. Most commonly, this phenomenon can be observed with file servers or other systems where a lot of data is deleted without being replaced with new data.

Another big consumer of repository disk space is useless files. Even when you don’t need to back up the data stored in certain files or directories in the first place, image-level backups force you to do so.

“Deleted” does not necessarily mean actually deleted

It is widely known that in the vast majority of modern file systems, deleted files do not disappear from the hard drive completely. The file is only flagged as deleted in the file system’s allocation table (e.g., the master file table (MFT) in the case of NTFS). However, the file’s data continues to exist on the hard drive until it is overwritten by a new file. This is exactly what makes tools like Undelete possible. In order to reset the content of those blocks, you have to use tools like SDelete from Windows Sysinternals, which effectively overwrites the content of blocks belonging to deleted files with zeroes. Most backup solutions will then dedupe and/or compress these zeroed blocks so they do not take any extra disk space in the backup. However, running SDelete periodically on all your VMs is time consuming and hardly doable when you have hundreds of VMs, so most users simply don’t do this and allow blocks belonging to the deleted files to remain in the backup.
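The reason zeroed blocks cost nothing in the backup is easy to demonstrate: any general-purpose compressor collapses long runs of zeroes to almost nothing, while already-random (or encrypted) data barely shrinks. A quick illustration using Python’s zlib as a stand-in for the backup engine’s compression:

```python
import os
import zlib

MiB = 1024 * 1024
random_block = os.urandom(MiB)  # stands in for live, incompressible data
zeroed_block = bytes(MiB)       # what SDelete leaves behind

print(len(zlib.compress(random_block)))  # ~1 MiB: no savings
print(len(zlib.compress(zeroed_block)))  # ~1 KiB: effectively free to store
```

Deduplication is even more effective here, since every zeroed block is identical and needs to be stored only once.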

Another drawback of using SDelete is that it will inflate thin-provisioned virtual disks and will require you to use technologies such as VMware Storage vMotion to deflate them after SDelete processing. See VMware KB 2004155 for more information.

Finally, these tools must be used with caution. Because SDelete creates a very big zeroed file, you have to be careful not to affect other production applications on the processed server because that file is temporarily consuming all available free disk space on the volume.

Not backing up useless files in the first place

It goes without saying that there are certain files and directories that you don’t want to back up at all (e.g., application logs, application caches, temporary export files or user directories with personal files). There might also be data protection regulations in place that actually require you to exclude specific objects from backup. However, until today, the only way for most VM backup solutions to filter out useless data was to manually move it on every VM to dedicated virtual drives (VMDK/VHDX) and exclude those virtual drives from processing. Again, because it’s simply not feasible to maintain this approach in large environments with dozens of new VMs appearing daily, most users simply accepted the need to back up useless data with image-based backups as a fact of life.

Meet Veeam BitLooker

Veeam BitLooker is the patent-pending data reduction technology from Veeam that allows the efficient and fully automated exclusion of deleted file blocks and useless files, thus enabling you to save a considerable amount of backup storage and network bandwidth and further reduce costs.

The first part of BitLooker was introduced in Veeam Backup & Replication a few years ago and enabled the exclusion of swap file blocks from processing. Considering that each VM creates a swap file, which is usually at least 2 GB in size and changes daily, this is a considerable amount of data that noticeably affects full and incremental backup size. BitLooker automatically detects the swap file location and determines the blocks backing it in the corresponding VMDK. These blocks are then automatically excluded from processing, replaced with zeroed blocks in the target image, and are not stored in a backup file or transferred to a replica image. The resulting savings are easy to see!

BitLooker in v9

In Veeam Backup & Replication v9, BitLooker’s capabilities have been extended considerably in order to further improve data reduction ratios. BitLooker now has three distinct capabilities:

  • Excluding swap and hibernation file blocks
  • Excluding deleted file blocks
  • Excluding user-specified files and folders

In v9, BitLooker supports NTFS-formatted volumes only. Most of BitLooker is available right in the Veeam Backup & Replication Standard edition. However, excluding user-specified files and folders requires at least Enterprise edition.

Configuring BitLooker

There are a few options for controlling BitLooker in v9. You can find the first two in the advanced settings of each backup and replication job.

Note that the option to exclude swap file blocks was available in previous product versions, but it was enhanced in v9 to also exclude hibernation files.

Now, there is the new option that enables the exclusion of deleted file blocks:

Users upgrading from previous versions will note that, by default, deleted file blocks exclusion remains disabled for existing jobs after upgrading so it doesn’t alter their existing behavior. You can enable it manually for individual jobs or automatically for all existing jobs with this PowerShell script.

In most cases, you should only expect to see minor backup file size reduction after enabling deleted file blocks exclusion. This is because in the majority of server workloads, data is never simply deleted, but rather always overwritten with new data. More often than not, it is replaced with more data than what was deleted, which is the very reason the world’s data almost doubles every 2 years. However, in certain scenarios (such as those involving data migrations), the gains can be quite dramatic.

Finally, in v9, BitLooker also allows you to configure the exclusion of specific files and folders for each backup job. Unlike the previous options, this functionality is part of the application-aware guest processing logic, and exclusions can only be performed on a running VM. Correspondingly, you can find the file exclusion settings in the advanced settings of the guest processing step of the job wizard. You have the option to either exclude specific file system objects or, conversely, back up nothing but specific objects:

When using this functionality, keep in mind that it increases both VM processing time and memory consumption by the data mover, depending on the amount of excluded files. For example, if processing exclusions for 10,000 files takes less than 10 seconds and requires just 50MB of extra RAM, then excluding 100,000 files takes 2 minutes and requires almost 400MB of extra RAM.
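Taking those two published data points at face value (10,000 files: ~10 s and ~50 MB; 100,000 files: ~120 s and ~400 MB) and interpolating linearly between them gives a rough planning estimate; the real curve in your environment may differ:

```python
# Linear interpolation between the two overhead figures quoted above:
# 10_000 files -> (10 s, 50 MB); 100_000 files -> (120 s, 400 MB).
def exclusion_overhead(num_files):
    span = 100_000 - 10_000
    secs = 10 + (num_files - 10_000) * (120 - 10) / span
    ram_mb = 50 + (num_files - 10_000) * (400 - 50) / span
    return secs, ram_mb

print(exclusion_overhead(50_000))  # estimate for a mid-sized exclusion set
```

The per-file cost grows with scale, so keep exclusion lists targeted rather than sweeping.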

Summary

Veeam BitLooker offers users the possibility to further reduce backup storage and network bandwidth consumption without incurring additional costs. Enabling this functionality takes just a few clicks, and the data reduction benefits can be enjoyed in the immediate backup or replication job run.

What results are you seeing after enabling BitLooker in v9? Please share your numbers in the comments!

 

 

Re-posted from: https://www.veeam.com/blog/save-backup-storage-using-veeam-backup-replication-bitlooker.html

Microsoft on Upcoming SQL Server 2016; Goes After Oracle

Data professionals might have been expecting a launch date for SQL Server 2016 at the Data Driven event held today in New York City, but what they got was a recap of the flagship database system’s capabilities and a full-out assault on rival Oracle Corp.

Exec Judson Althoff detailed a SQL Server 2016/Oracle comparison involving a scenario where various capabilities built into SQL Server 2016 were matched up against the Oracle database. “When we say everything’s built in, everything’s built in,” he said. When the built-in capabilities were pitted against similar functionality offered by Oracle products, “Oracle is nearly 12 times more expensive,” he said.

That specific scenario was envisioned with a project starting from scratch. Althoff said not everybody does that, as they have invested in “other technologies.”

Free Licenses for Oracle Switchers
“So if you are willing to migrate off of Oracle, we will actually give you free SQL Server licenses to do so,” Althoff said in his presentation. “For every instance of Oracle you have, free SQL Server licenses. All you have to do is have a Software Assurance agreement with Microsoft. If you’re willing to take this journey with us before the end of June, we’ll actually help and invest in the migration costs, put engineers on the ground to help you migrate off of Oracle.”

 He noted that in the wake of some newspaper ads about the offer, he received e-mails asking just who was eligible. “Everyone is eligible for this,” Althoff said. “We’re super excited to help you migrate off of Oracle technology, lower your overall data processing costs and actually really be enabled and empowered to build the data estate that we’ve been talking about.”

More details on the offer were unveiled in a “Break free from Oracle” page on the Microsoft site. “This offer includes support services to kick-start your migration, and access to our SQL Server Essentials for the Oracle Database Administrator training,” the site says. “Dive into key features of SQL Server through hands-on labs and instructor-led demos, and learn how to deploy your applications — on-premises or in the cloud.”

Microsoft also went after Oracle on the security front, citing information published by the National Institute of Standards and Technology that lists databases and their vulnerabilities. On average, over the past few years, exec Joseph Sirosh said in his presentation, SQL Server was found to have 1/10th the vulnerabilities of Oracle.

Always Encrypted
Sirosh also highlighted new security capabilities of SQL Server 2016. “In SQL Server 2016, for the first time, you will hear about a capability that we call Always Encrypted,” he said. “This is about securing data all the way from the client, into the database and keeping it secure even when query processing is being done. At the database site, the data is never decrypted, even in memory, and you can still do queries over it.”

He explained that data is encrypted at the client, and sent to the database in its encrypted form, in which it remains even during query processing. No one can decrypt credit card data, for example, while it’s in the database, not even a DBA. “That’s what you want,” Sirosh said of the functionality enabled by homomorphic encryption.

During today’s event, Microsoft CEO Satya Nadella and other presenters focused on a series of customer success videos and live presentations, reflecting Nadella’s belief that Microsoft “shouldn’t have launch events, but customer success events.”

Those success stories leveraged new ground-breaking capabilities of SQL Server 2016, including in-memory performance across all workloads, mission-critical high availability, business intelligence (BI) and advanced analytics tools.

“We are building this broad, deep, digital data platform,” Nadella said. “This platform is going to help every business become a software business, a data business, an intelligence business. That’s our vision.”

Exec Scott Guthrie took the stage to discuss the new support for in-memory advanced analytics and noted that for these kinds of workloads, data pros can use the R programming language, which he described as the leading open source data science language in the industry. Coincidentally, Microsoft yesterday announced R Tools for Visual Studio for machine learning scenarios.

SQL Server on Linux
Providing one of the few real news announcements during the presentation, Guthrie also noted that a private preview of SQL Server on Linux is available today, following up on surprising news earlier in the week that SQL Server was being ported to the open source Linux OS, a port expected to be completed in mid-2017. Guthrie said the unexpected move was part of the company’s strategy of bringing its products and services to a broader set of users and “to meet customers where they’re at.”

Another focus of the event was the new “Stretch Database” capability, exemplifying SQL Server 2016’s close connection to the Microsoft Azure cloud.

“SQL Server is also perhaps the world’s first cloud-bound database,” Sirosh said. “That means we build the features of SQL Server in the cloud first, ship them with Azure SQL DB, and customers have been experiencing it for six to nine months and a very large number of queries have been run against them.”

Sirosh expounded more on this notion in a companion blog post published during the event. “We built SQL Server 2016 for this new world, and to help businesses get ahead of today’s disruptions,” he said. “It supports hybrid transactional/analytical processing, advanced analytics and machine learning, mobile BI, data integration, always encrypted query processing capabilities and in-memory transactions with persistence. It is also perhaps the world’s only relational database to be ‘born cloud-first,’ with the majority of features first deployed and tested in Azure, across 22 global datacenters and billions of requests per day. It is customer tested and battle ready.”

Stretch Database
Features shipped with SQL Server, Sirosh said, “allow you to have wonderful hybrid capabilities, allowing your workload to span both on-premises and the cloud. So Stretch Database is one of them. Data in a SQL Server, cold data, can be seamlessly migrated into databases in the cloud. So you have in effect a database of very large capacity, but it’s always queryable. It’s not just a backup. The data that’s migrated over is living in a database in the cloud, and when you issue queries to the on-premises database, that query is just transported to the cloud and the data comes back — perhaps a little slower, but all your data is still queryable.”

The new capabilities for querying data of all kinds in various stages and forms were a focal point for Sirosh.

“We have brought the ability to analyze data at incredible speed into the transactional database so you can do not only mission-critical transactional processing, but mission-critical analytic processing as well,” Sirosh said. “It is the database for building mission-critical intelligent applications without extracting and moving the data, and all the slowness that comes with doing so. So you can now build real-time applications that have sophisticated analytical intelligence behind them. That is the one thing that I would love all of you to take away from this presentation.”

On-Demand Videos for More
At the Data Driven Web site, Microsoft has provided a comprehensive series of videos that explore various aspects of SQL Server, with topics ranging from “AlwaysOn Availability Groups enhancements in SQL Server 2016” to others on R services, in-memory OLTP, PolyBase, the Stretch Database, Always Encrypted and many more.

Still, some attendees — virtual or otherwise — were disappointed by the lack of truly significant news.

“Did this whole thing just finish without so much as a release date?” asked one viewer in a Tweet. “Sigh.”

 

 

Source: https://adtmag.com/Articles/2016/03/10/sql-server-2016.aspx

 

You, your network and the Locky virus

Last Monday, a particularly clever (and nasty) new piece of ransomware called Locky appeared on the internet.

The malicious file went undetected by most anti-virus software for a number of days, and even now, a couple of weeks since it appeared, antivirus products are still struggling to keep up, often taking up to 24 hours to include detection in their definition packages for each new daily iteration of the virus.

This has clearly left users and company networks exposed.

How it works:

It is initially spread through a Word doc embedded in an email. Here is an example of one of those emails:

Attached to this email is a Word document containing an alleged invoice.

If Office macros are enabled on this document, it unleashes an executable called ‘ladybi.exe’.

This loads itself into memory and then deletes itself. Whilst resident in memory, it encrypts your documents as [hash].locky files, changes the desktop wallpaper, creates a .bmp file and opens it, creates a .txt file and opens it, and deletes VSS snapshots. It can also reach out and encrypt files on your company network!

Once the files are encrypted, a ransom demand appears on the PC directing the user towards the ‘Deep Web’ to make a payment in Bitcoin to get the files decrypted.

Recovery

To recover your files, you need to rely on your backups. It is thought unlikely that any kind of tool will become available to break the encryption algorithms. We do not recommend paying ransoms.

Identifying infected network users

If you see .locky extension files appearing on your network shares, look up the file owner of the _Locky_recover_instructions.txt file in each folder. This will tell you the infected user. Lock their AD user and computer accounts immediately and boot them off the network — you will likely have to rebuild their PC from scratch.
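A simple sweep for the two on-disk artifacts can speed up this triage. Below is a sketch (the UNC path is a hypothetical placeholder; adjust for your environment) that walks a share and reports hits; pair each hit’s folder with the owner of the ransom note to identify the account:

```python
import os

ARTIFACT_NAME = "_Locky_recover_instructions.txt"

def find_locky_artifacts(root):
    """Yield paths of .locky files and Locky ransom notes under root."""
    for dirpath, _dirs, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".locky") or name == ARTIFACT_NAME:
                yield os.path.join(dirpath, name)

# Hypothetical share path; on Windows, check ownership of each hit
# (e.g., via the file's Security tab or `dir /q`) to find the account.
for hit in find_locky_artifacts(r"\\fileserver\share"):
    print(hit)
```

Run this read-only sweep from a clean admin workstation, not from a potentially infected machine.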

Prevention

User education – do not open emails from unknown sources!

Disable macros in Office documents – this can be done at the network level via Group Policy

Global spread

The deployment of Locky was a masterpiece of criminality — the infrastructure is highly developed, it was tested in the wild initially on a small scale (ransomware beta testing, basically), and the ransomware is translated into many languages. In short, this was well planned and expertly executed.

 

One hour of infection stats

Measuring the impact

Locky contains code to spread across network drives, allowing the potential to impact large enterprises outside of individual desktops.

Discussion of Locky generated over half a million Twitter impressions this week. It is thought many organisations are simply paying for the decrypter, which is basically paying your hostage takers for freedom. It’s also worth noting that many of the IP addresses getting hit by this are associated with addresses at large companies, many in the US; this clearly caught people out.

Sources:

https://medium.com
http://www.idigitaltimes.com

Microsoft Is Killing Support for Internet Explorer 8, 9 and 10 On January 12th

Microsoft is ending support for Internet Explorer 8, 9, and 10 on January 12th. This news has come as a breath of fresh air, as these versions were considered a bane by many web developers, thanks to the endless security holes in the software.

On Tuesday, a new “End of Life” patch will go live that will ping Internet Explorer users, asking them to upgrade their browsers. This End of Life patch means that these older Internet Explorer versions will no longer get regular technical support and security fixes.

This step also means that Internet Explorer 11 is the last version of Microsoft’s vintage browser that’ll be supported. This patch will be delivered as a cumulative security update for these versions:

On Windows 7 Service Pack 1 and Windows 7 Service Pack 1 x64 Edition

  • Internet Explorer 10
  • Internet Explorer 9
  • Internet Explorer 8

On Windows Server 2008 R2 Service Pack 1 and Windows Server 2008 R2 Service Pack 1 x64 Edition

  • Internet Explorer 10
  • Internet Explorer 9
  • Internet Explorer 8

However, if you want to disable this update notification, follow these steps mentioned on Microsoft’s support page.

It’s expected that millions of users will choose to avoid these upgrade notifications and thus will be prone to security risks. So, you are advised to either upgrade your browser or switch to another web browser altogether.

Wireless Myths

Myth #1: “The only interference problems are from other 802.11 networks.”

Summary: The unlicensed band is an experiment by the FCC in unregulated spectrum sharing. The experiment has been a great success so far, but there are significant challenges posed by RF interference that need to be given proper attention.

 

Myth #2: “My network seems to be working, so interference must not be a problem.”

Summary: Interference is out there. It’s just a silent killer thus far.

 

Myth #3: “I did an RF sweep before deployment. So I found all the interference sources.”

Summary: You can’t sweep away the interference problem. Microwave ovens, cordless phones, Bluetooth devices, wireless video cameras, outdoor microwave links, wireless game controllers, ZigBee devices, fluorescent lights, WiMAX devices, and even bad electrical connections: all of these can cause broad RF spectrum emissions. These non-802.11 types of interference typically don’t work cooperatively with 802.11 devices.

 

Myth #4: “My infrastructure equipment automatically detects interference.”

Summary: Simple, automated-response-to-interference products are helpful, but they aren’t a substitute for understanding the underlying problem.

 

Myth #5: “I can overcome interference by having a high density of access points.”

Summary: It’s reasonable to over-design your network for capacity, but a high density of access points is no panacea for interference.

 

Myth #6: “I can analyze interference problems with my packet sniffer.”

Summary: You need the right tool for analyzing interference. In the end, it’s critical that you be able to analyze the source of interference in order to determine the best course of action to handle the interference. In many cases, the best action will be removing the device from the premises.

 

Myth #7: “I have a wireless policy that doesn’t allow interfering devices into the premises.”

Summary: You have to expect that interfering devices will sneak onto your premises.

 

Myth #8: “There is no interference at 5 GHz.”

Summary: You can run, but you can’t hide.

 

Myth #9: “I’ll hire a consultant to solve any interference problems I run into.”

Summary: You can’t afford to rely on a third party to debug your network.

 

Myth #10: “I give up. RF is impossible to understand.”

Summary: The cavalry is here!

 

Myth #11: “Wi-Fi interference doesn’t happen very often.”

Summary: There’s no point burying your head in the sand: Wi-Fi interference happens.

 

Myth #12: “I should look for interference only after ruling out other problem sources.”

Summary: Avoid wasting your time. Fix your RF physical layer first.

 

Myth #13: “There’s nothing I can do about interference if I find it.”

Summary: There’s always a cure for interference, but you need to know what’s ailing you.

 

Myth #14: “There are just a few easy-to-find devices that can interfere with my Wi-Fi.”

Summary: You need the right tool to find interference fast, and it’s not a magnifying glass.

 

Myth #15: “When interference occurs, the impact on data is typically minor.”

Summary: Interference can really take the zip out of your Wi-Fi data throughput.

 

Myth #16: “Voice data rates are low, so the impact of interference on voice over Wi-Fi should be minimal.”

Summary: Can you hear me now? Voice over Wi-Fi and interference don’t mix.

 

Myth #17: “Interference is a performance problem, but not a security risk.”

Summary: RF security doesn’t stop with Wi-Fi. Do you know who is using your spectrum?

 

Myth #18: “802.11n and antenna systems will work around any interference issues.”

Summary: Antennas are a pain reliever, but far from a cure.

 

Myth #19: “My site survey tool can be used to find interference problems.”

Summary: Site survey tools measure coverage, but don’t solve your RF needs.

 

Windows 10 Major Update Highlights

  • Windows Update for Business enables control over the deployment of updates within organizations while ensuring devices are kept current and security needs are met, at reduced management cost. Features include setting up device groups with staggered deployments and scaling deployments with network optimizations.
  • Windows Store for Business provides a flexible way to find, acquire, manage and distribute both Windows Store apps and custom line of business apps to Windows 10 devices. Organizations can choose their preferred distribution method by directly assigning apps, publishing apps to a private store, or connecting with management solutions.
  • Mobile Device Management gives IT access to the full power of Enterprise Mobility Management to manage the entire family of Windows devices, including PCs, tablets, phones, and IoT. Windows 10 is the only platform that can manage BYOD scenarios from the device to the apps to the data on those devices – safely and securely. And of course, Windows 10 is fully compatible with the existing management infrastructure used with PCs, giving IT control over how they bridge between the two capabilities.
  • Azure Active Directory Join allows IT to maintain one directory, enabling people to have one login and securely roam their Windows settings and data across all of their Windows 10 devices. AAD Join also enables any machine to become enterprise-ready with a few simple clicks by anyone in the organization.

Windows 10 Upgrade Path

Now that Windows 10 Version 1511 (the first major update) is out, we can look at potential upgrade paths for the OS.  For those of you who didn’t know, this version allows the use of Windows 7/8 keys during the installation of Windows 10.
