Microsoft on Upcoming SQL Server 2016; Goes After Oracle
Data professionals might have been expecting a launch date for SQL Server 2016 at the Data Driven event held today in New York City, but what they got was a recap of the flagship database system’s capabilities and a full-out assault on rival Oracle Corp.
Exec Judson Althoff detailed a SQL Server 2016/Oracle comparison involving a scenario where various capabilities built into SQL Server 2016 were matched up against the Oracle database. “When we say everything’s built in, everything’s built in,” he said. When the built-in capabilities were pitted against similar functionality offered by Oracle products, “Oracle is nearly 12 times more expensive,” he said.
That specific scenario assumed a project starting from scratch, however, and Althoff acknowledged that not everybody starts fresh, having already invested in “other technologies.”
Free Licenses for Oracle Switchers
“So if you are willing to migrate off of Oracle, we will actually give you free SQL Server licenses to do so,” Althoff said in his presentation. “For every instance of Oracle you have, free SQL Server licenses. All you have to do is have a Software Assurance agreement with Microsoft. If you’re willing to take this journey with us before the end of June, we’ll actually help and invest in the migration costs, put engineers on the ground to help you migrate off of Oracle.”
He noted that in the wake of some newspaper ads about the offer, he received e-mails asking just who was eligible. “Everyone is eligible for this,” Althoff said. “We’re super excited to help you migrate off of Oracle technology, lower your overall data processing costs and actually really be enabled and empowered to build the data estate that we’ve been talking about.”
More details on the offer were unveiled in a “Break free from Oracle” page on the Microsoft site. “This offer includes support services to kick-start your migration, and access to our SQL Server Essentials for the Oracle Database Administrator training,” the site says. “Dive into key features of SQL Server through hands-on labs and instructor-led demos, and learn how to deploy your applications — on-premises or in the cloud.”
Microsoft also went after Oracle on the security front, citing information published by the National Institute of Standards and Technology that lists databases and their vulnerabilities. On average, over the past few years, exec Joseph Sirosh said in his presentation, SQL Server was found to have 1/10th the vulnerabilities of Oracle.
Always Encrypted
Sirosh also highlighted new security capabilities of SQL Server 2016. “In SQL Server 2016, for the first time, you will hear about a capability that we call Always Encrypted,” he said. “This is about securing data all the way from the client, into the database and keeping it secure even when query processing is being done. At the database site, the data is never decrypted, even in memory, and you can still do queries over it.”
He explained that data is encrypted at the client, and sent to the database in its encrypted form, in which it remains even during query processing. No one can decrypt credit card data, for example, while it’s in the database, not even a DBA. “That’s what you want,” Sirosh said of the functionality enabled by homomorphic encryption.
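For intuition, here is a minimal Python sketch of that client-side idea, assuming a deterministic encryption scheme so equality predicates still work on ciphertext. The key handling, table schema and names are invented for illustration; this is not Microsoft’s Always Encrypted implementation, which manages keys via column master keys in SQL Server.

```python
# Minimal sketch: values are encrypted before they leave the "client", and an
# equality query works because encryption is deterministic (same plaintext ->
# same ciphertext). Keys and schema below are invented for illustration.
import hmac, hashlib, sqlite3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

ENC_KEY = AESGCM.generate_key(bit_length=256)  # column encryption key, client-side only
MAC_KEY = b"0" * 32                            # key for deriving a deterministic nonce

def encrypt_deterministic(plaintext: str) -> bytes:
    # SIV-style trick: deriving the nonce from the plaintext makes the output
    # deterministic, so the server can match equality without seeing plaintext.
    nonce = hmac.new(MAC_KEY, plaintext.encode(), hashlib.sha256).digest()[:12]
    return nonce + AESGCM(ENC_KEY).encrypt(nonce, plaintext.encode(), None)

db = sqlite3.connect(":memory:")               # stands in for the remote database
db.execute("CREATE TABLE cards (holder TEXT, card_number BLOB)")
db.execute("INSERT INTO cards VALUES (?, ?)", ("alice", encrypt_deterministic("4111-1111")))

# The query parameter is encrypted on the client; the DB only compares ciphertexts.
row = db.execute("SELECT holder FROM cards WHERE card_number = ?",
                 (encrypt_deterministic("4111-1111"),)).fetchone()
print(row)  # ('alice',)
```

Note the trade-off this sketch makes visible: deterministic encryption is what enables equality queries, at the cost of revealing which rows share the same value.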
During today’s event, Microsoft CEO Satya Nadella and other presenters focused on a series of customer success videos and live presentations, reflecting Nadella’s belief that Microsoft “shouldn’t have launch events, but customer success events.”
Those success stories leveraged new ground-breaking capabilities of SQL Server 2016, including in-memory performance across all workloads, mission-critical high availability, business intelligence (BI) and advanced analytics tools.
“We are building this broad, deep, digital data platform,” Nadella said. “This platform is going to help every business become a software business, a data business, an intelligence business. That’s our vision.”
Exec Scott Guthrie took the stage to discuss the new support for in-memory advanced analytics and noted that for these kinds of workloads, data pros can use the R programming language, which he described as the leading open source data science language in the industry. Coincidentally, Microsoft yesterday announced R Tools for Visual Studio for machine learning scenarios.
SQL Server on Linux
Providing one of the few real news announcements during the presentation, Guthrie also noted that a private preview of SQL Server on Linux is available today, following up on surprising news earlier in the week that SQL Server was being ported to the open source Linux OS, a port expected to be completed in mid-2017. Guthrie said that unexpected move was part of the company’s strategy of bringing its products and services to a broader set of users and “to meet customers where they’re at.”
Another focus of the event was the new “Stretch Database” capability, exemplifying SQL Server 2016’s close connection to the Microsoft Azure cloud.
“SQL Server is also perhaps the world’s first cloud-bound database,” Sirosh said. “That means we build the features of SQL Server in the cloud first, ship them with Azure SQL DB, and customers have been experiencing it for six to nine months and a very large number of queries have been run against them.”
Sirosh expounded more on this notion in a companion blog post published during the event. “We built SQL Server 2016 for this new world, and to help businesses get ahead of today’s disruptions,” he said. “It supports hybrid transactional/analytical processing, advanced analytics and machine learning, mobile BI, data integration, always encrypted query processing capabilities and in-memory transactions with persistence. It is also perhaps the world’s only relational database to be ‘born cloud-first,’ with the majority of features first deployed and tested in Azure, across 22 global datacenters and billions of requests per day. It is customer tested and battle ready.”
Stretch Database
Features shipped with SQL Server, Sirosh said, “allow you to have wonderful hybrid capabilities, allowing your workload to span both on-premises and the cloud. So Stretch Database is one of them. Data in a SQL Server, cold data, can be seamlessly migrated into databases in the cloud. So you have in effect a database of very large capacity, but it’s always queryable. It’s not just a backup. The data that’s migrated over is living in a database in the cloud, and when you issue queries to the on-premises database, that query is just transported to the cloud and the data comes back — perhaps a little slower, but all your data is still queryable.”
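As a toy illustration of that hot/cold split, the sketch below fans one query out over a local “hot” store and a remote “cold” store and merges the results. The two in-memory databases, the table and the function name are stand-ins for illustration, not Stretch Database’s actual mechanism.

```python
# Toy sketch: the "on-premises" layer answers a query from its hot rows and
# transparently pulls cold rows from a "cloud" store. Names are invented.
import sqlite3

local = sqlite3.connect(":memory:")   # hot, on-premises data
cloud = sqlite3.connect(":memory:")   # cold data, migrated to the cloud
for db in (local, cloud):
    db.execute("CREATE TABLE orders (id INTEGER, created TEXT)")
local.execute("INSERT INTO orders VALUES (1, '2016-03-01')")
cloud.execute("INSERT INTO orders VALUES (2, '2014-07-19')")  # "stretched" cold row

def query_orders(where: str, args=()):
    # The caller issues one query; the stub fans it out and merges the results,
    # so cold data stays queryable (just a little slower, as the quote notes).
    sql = f"SELECT id, created FROM orders WHERE {where}"
    return local.execute(sql, args).fetchall() + cloud.execute(sql, args).fetchall()

print(query_orders("created < ?", ("2017-01-01",)))  # rows from both tiers
```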
The new capabilities for querying data of all kinds in various stages and forms were a focal point for Sirosh.
“We have brought the ability to analyze data at incredible speed into the transactional database so you can do not only mission-critical transactional processing, but mission-critical analytic processing as well,” Sirosh said. “It is the database for building mission-critical intelligent applications without extracting and moving the data, and all the slowness that comes with doing so. So you can now build real-time applications that have sophisticated analytical intelligence behind them. That is the one thing that I would love all of you to take away from this presentation.”
On-Demand Videos for More
At the Data Driven Web site, Microsoft has provided a comprehensive series of videos that explore various separate aspects of SQL Server, with topics ranging from “AlwaysOn Availability Groups enhancements in SQL Server 2016” to others on R services, in-memory OLTP, PolyBase, the Stretch Database, Always Encrypted and many more.
Still, some attendees — virtual or otherwise — were disappointed by the lack of significant news.
“Did this whole thing just finish without so much as a release date?” asked one viewer in a Tweet. “Sigh.”
Source: https://adtmag.com/Articles/2016/03/10/sql-server-2016.aspx
The Rise in Crypto Ransomware
In recent years, we have seen significant growth in malware. With enablers such as Bitcoin, RSA 2048-bit encryption, and the TOR network, NetCal predicts there will continue to be a significant rise in Crypto Ransomware. The use of these malicious applications is morphing as we speak. Originally, they were used to gain access to computers and steal data (i.e., spying/snooping). Then it was for ad clicks from popups. Now, malware has taken on the purpose of extorting money directly from users themselves. Although this shouldn’t be a surprise to anyone, the tools mentioned above make it a lot easier to achieve success.
Most Crypto Ransomware relies on a common set of tactics, such as hijacking registry keys to persist on the infected machine.
For example, {AB8902B4-09CA-4BB6-B78D-A8F59079A8D5} causes any file in the LocalServer32 subkey to be run any time a folder is opened. By hijacking this CLSID, Poweliks is able to ensure that its registry entry will be launched any time a folder is opened or new thumbnails are created, even if the Watchdog process has been terminated.
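For the curious, here is a read-only way to eyeball that registry location on a Windows machine. This uses only the standard library and is purely illustrative; it prints the value for manual inspection and is not a detection tool.

```python
# Inspect the LocalServer32 value of the CLSID the article says Poweliks
# hijacks. Read-only; Windows-only (uses the standard winreg module).
import winreg

CLSID = r"CLSID\{AB8902B4-09CA-4BB6-B78D-A8F59079A8D5}\LocalServer32"
try:
    with winreg.OpenKey(winreg.HKEY_CLASSES_ROOT, CLSID) as key:
        value, _type = winreg.QueryValueEx(key, "")  # the key's default value
        print("LocalServer32 default value:", value)
        # Anything other than a legitimate signed binary path deserves a close look.
except FileNotFoundError:
    print("Key not present on this system.")
```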
10 Prevention Tips:
Windows 10 – To Upgrade or Not to Upgrade
Since last year, we’ve been telling our clients to hold off on upgrading. We even used Group Policy and our Management Agents to disable the upgrade patch. It’s been a long and treacherous journey, but we finally believe Windows 10 is ready for prime time. We’ve even seen it increase performance on some older machines. We are now recommending that our clients upgrade to Windows 10 to take advantage of the free licensing and extended support for the OS. With all the major bugs fixed, we’re confident you will find it stable and useful. Applications are also compatible more often than not. In fact, all of NetCal’s employees are now on Windows 10. We did all the testing so our clients don’t have to worry.
Contact us so we can evaluate your environment.
Q: If I upgrade, can I use Windows 7/8/8.1 again?
A: You can always reinstall using existing media, or downgrade using the built-in Windows 10 recovery process (this only works for one month after the upgrade).
Q: What if I don’t upgrade in time? How much would a Windows 10 license cost then?
A: Although Microsoft has been rather vague thus far, the general consensus is that a license will cost $120 for Windows 10 Home and $200 for Windows 10 Pro.
Q: How would I upgrade after the expiration date?
A: For those that fail to upgrade in time or simply choose not to, Windows 10 can be purchased via the Microsoft Store or through Retail Partners.
Q: If I need to reinstall Windows 10, what key can I use?
A: All Windows 7 and Windows 8/8.1 keys will work with the latest Windows 10 installation media.
Q: If I upgrade, will I be charged a subscription service fee after that?
A: According to Microsoft, if you upgrade before July 29th, Windows 10 will continue to be free and supported for the rest of the life of the device. This is also similar to how your OEM Windows licenses work.
Save backup storage using Veeam Backup & Replication BitLooker
Introduction
When you need to back up large amounts of data, you want to use up as little disk space as possible in order to minimize backup storage costs. However, with host-based image-level backups, traditional technologies force you to back up the entire virtual machine (VM) image, which presents multiple challenges that were never problems for classic agent-based backups.
For example, during backup analysis using Veeam ONE, you might notice that some VM backups are larger than the actual disk space usage in the guest OS, resulting in higher-than-planned backup repository consumption. Most commonly, this phenomenon can be observed with file servers or other systems where a lot of data is deleted without being replaced with new data.
Another big consumer of repository disk space is useless files. While you might not need to back up data stored in certain files or directories in the first place, image-level backups force you to do so.
“Deleted” does not necessarily mean actually deleted
It is widely known that on the vast majority of modern file systems, deleted files do not disappear from the hard drive completely. The file is only flagged as deleted in the file allocation table (FAT) of the file system (e.g., the master file table (MFT) in the case of NTFS). However, the file’s data continues to exist on the hard drive until it is overwritten by a new file. This is exactly what makes tools like Undelete possible. In order to reset the content of those blocks, you have to use tools like SDelete by Windows Sysinternals. This tool effectively overwrites the content of blocks belonging to deleted files with zeroes. Most backup solutions will then dedupe and/or compress these zeroed blocks so they do not take any extra disk space in the backup. However, running SDelete periodically on all your VMs is time consuming and hardly doable when you have hundreds of VMs, so most users simply don’t do this and allow blocks belonging to deleted files to remain in the backup.
Another drawback of using SDelete is that it will inflate thin-provisioned virtual disks and will require you to use technologies such as VMware Storage vMotion to deflate them after SDelete processing. See VMware KB 2004155 for more information.
Finally, these tools must be used with caution. Because SDelete creates a very big zeroed file, you have to be careful not to affect other production applications on the processed server because that file is temporarily consuming all available free disk space on the volume.
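For illustration, the core of that free-space zeroing trick fits in a few lines of Python. This is a rough sketch of the idea, not a replacement for SDelete, and it deliberately fills the volume just as described above, so only try it on a test volume.

```python
# Bare-bones sketch of free-space cleaning: fill free space with zeros so
# deleted-file blocks compress/dedupe away in the backup, then remove the
# filler file. As warned above, the temporary file eats all free space.
import os

def zero_free_space(volume_path: str, chunk_mb: int = 64) -> None:
    filler = os.path.join(volume_path, "zerofill.tmp")
    chunk = b"\x00" * (chunk_mb * 1024 * 1024)
    try:
        with open(filler, "wb") as f:
            while True:
                f.write(chunk)        # keep writing zeros...
    except OSError:
        pass                          # ...until the volume reports it is full
    finally:
        if os.path.exists(filler):
            os.remove(filler)         # hand the space back immediately

# zero_free_space("D:\\")   # example invocation on a test volume
```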
Not backing up useless files in the first place
It goes without saying that there are certain files and directories that you don’t want to back up at all (e.g., application logs, application caches, temporary export files or user directories with personal files). There might also be data protection regulations in place that actually require you to exclude specific objects from backups. However, until today, the only way for most VM backup solutions to filter out useless data was to manually move it on every VM to dedicated virtual drives (VMDK/VHDX) and exclude those virtual drives from processing. Again, because it’s simply not feasible to maintain this approach in large environments with dozens of new VMs appearing daily, most users simply accepted the need to back up useless data with image-based backups as a fact of life.
Meet Veeam BitLooker
Veeam BitLooker is the patent-pending data reduction technology from Veeam that allows the efficient and fully automated exclusion of deleted file blocks and useless files, enabling you to save a considerable amount of backup storage and network bandwidth and further reduce costs.
The first part of BitLooker was introduced in Veeam Backup & Replication a few years ago and enabled the exclusion of swap file blocks from processing. Considering that each VM creates a swap file, which is usually at least 2 GB in size and changes daily, this is a considerable amount of data that noticeably affects full and incremental backup size. BitLooker automatically detects the swap file location and determines the blocks backing it in the corresponding VMDK. These blocks are then automatically excluded from processing, replaced with zeroed blocks in the target image, and are not stored in a backup file or transferred to a replica image. The resulting savings are easy to see!
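To make the mechanics concrete, here is a minimal sketch of what “replacing known blocks with zeroed blocks in the target image” amounts to once the swap file’s extents are known. This is not Veeam’s implementation; the image file name and extent offsets are invented, and the hard part (parsing the guest file system to find the extents) is deliberately skipped.

```python
# Given the (offset, length) extents of a swap file inside a raw disk image,
# overwrite them with zeros so they compress/dedupe to nothing in the backup.
def zero_extents(image_path: str, extents: list[tuple[int, int]]) -> None:
    with open(image_path, "r+b") as img:
        for offset, length in extents:
            img.seek(offset)
            img.write(b"\x00" * length)   # replace swap-file blocks with zeros

# e.g., pretend the guest's pagefile.sys occupies these (offset, length) runs:
# zero_extents("vm-disk.raw", [(1048576, 65536), (8388608, 131072)])
```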
BitLooker in v9
In Veeam Backup & Replication v9, BitLooker’s capabilities have been extended considerably to further improve data reduction ratios. BitLooker now has three distinct capabilities:
- Excluding swap and hibernation file blocks
- Excluding deleted file blocks
- Excluding user-specified files and folders
In v9, BitLooker supports NTFS-formatted volumes only. Most of BitLooker is available right in the Veeam Backup & Replication Standard edition. However, excluding user-specified files and folders requires at least the Enterprise edition.
Configuring BitLooker
There are a few options for controlling BitLooker in v9. You can find the first two in the advanced settings of each backup and replication job.
Note that the option to exclude swap file blocks was available in previous product versions, but it was enhanced in v9 to also exclude hibernation files.
Next, there is a new option that enables the exclusion of deleted file blocks.
Users upgrading from previous versions will note that, by default, deleted file blocks exclusion remains disabled for existing jobs after upgrading, so it does not alter their existing behavior. You can enable it manually for individual jobs or automatically for all existing jobs with a PowerShell script.
In most cases, you should only expect to see minor backup file size reduction after enabling deleted file blocks exclusion. This is because in the majority of server workloads, data is never simply deleted, but rather always overwritten with new data. More often than not, it is replaced with more data than what was deleted, which is the very reason the world’s data almost doubles every 2 years. However, in certain scenarios (such as those involving data migrations), the gains can be quite dramatic.
Finally, in v9, BitLooker also allows you to configure the exclusion of specific files and folders for each backup job. Unlike the previous options, this functionality is part of the application-aware guest processing logic, and exclusions can only be performed on a running VM. Correspondingly, you can find the file exclusion settings in the advanced settings of the guest processing step of the job wizard. You have the option to either exclude specific file system objects or, conversely, back up nothing but specific objects.
When using this functionality, keep in mind that it increases both VM processing time and memory consumption by the data mover, depending on the number of excluded files. For example, if processing exclusions for 10,000 files takes less than 10 seconds and requires just 50 MB of extra RAM, then excluding 100,000 files takes about 2 minutes and requires almost 400 MB of extra RAM.
Summary
Veeam BitLooker offers users the possibility to further reduce backup storage and network bandwidth consumption without incurring additional costs. Enabling this functionality takes just a few clicks, and the data reduction benefits can be enjoyed in the immediate backup or replication job run.
What results are you seeing after enabling BitLooker in v9? Please share your numbers in the comments!
Re-posted from: https://www.veeam.com/blog/save-backup-storage-using-veeam-backup-replication-bitlooker.html
You, your network and the Locky virus
Last Monday, a particularly clever (and nasty) new piece of ransomware called Locky appeared on the internet.
The malicious file went undetected by most anti-virus software for a number of days, and even now, a couple of weeks after it appeared, antivirus products are still struggling to keep up, often taking up to 24 hours to add detection for each new daily iteration of the virus to their definition packages.
This has clearly left users and company networks exposed.
How it works:
It is initially spread through a Word document attached to an email. Here is an example of one of those emails:
Attached to this email is a Word document containing an alleged Invoice.
If Office macros are enabled on this document, it unleashes an executable called ‘ladybi.exe’.
This loads itself into memory and then deletes itself. While resident in memory, it encrypts your documents as hash.locky files, changes the desktop wallpaper, creates and opens a .bmp file, creates and opens a .txt file, and deletes VSS snapshots. It can also reach out and encrypt files on your company network!
Once the files are encrypted, a ransom demand appears on the PC directing the user towards the ‘Deep Web’ to make a payment in Bitcoin to get the files decrypted.
Recovery
To recover your files you need to rely on your backups. It is thought unlikely that any tool will become available to break the encryption algorithms. We do not recommend paying ransoms.
Identifying infected network users
If you see .locky extension files appearing on your network shares, check the file owner of the _Locky_recover_instructions.txt file in each folder. This will tell you which user is infected. Lock their AD user and computer accounts immediately and boot them off the network — you will likely have to rebuild their PC from scratch.
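If you need to automate that triage, a sketch along these lines can enumerate .locky files on a share and report their owners. It assumes Windows with the pywin32 package installed, and the share path is a placeholder.

```python
# Walk a share, find .locky files and report each file's owner (the infected
# account). Requires pywin32; the share path below is a placeholder.
import os
import win32security

def locky_owners(share_root: str):
    owners = set()
    for dirpath, _dirs, files in os.walk(share_root):
        for name in files:
            if name.lower().endswith(".locky"):
                path = os.path.join(dirpath, name)
                sd = win32security.GetFileSecurity(
                    path, win32security.OWNER_SECURITY_INFORMATION)
                sid = sd.GetSecurityDescriptorOwner()
                user, domain, _ = win32security.LookupAccountSid(None, sid)
                owners.add(f"{domain}\\{user}")
    return owners

print(locky_owners(r"\\fileserver\shared"))   # e.g. {'CORP\\jdoe'}
```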
Prevention
User education – do not open emails from unknown sources!
Disable macros in Office documents – this can be done at the network level via Group Policy (a sketch of the underlying registry setting follows below)
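As noted above, here is an illustrative sketch of the registry policy value such a Group Policy sets under the hood. The Office version key (“16.0”) and the per-user scope are assumptions, and in production you would deploy this through GPO rather than a script; VBAWarnings = 4 corresponds to “disable all macros without notification”.

```python
# Set the Word macro lockdown policy value for the current user.
# Windows-only; uses the standard winreg module. Illustration, not deployment.
import winreg

path = r"Software\Policies\Microsoft\Office\16.0\Word\Security"
key = winreg.CreateKey(winreg.HKEY_CURRENT_USER, path)
winreg.SetValueEx(key, "VBAWarnings", 0, winreg.REG_DWORD, 4)  # 4 = disable all, no prompt
winreg.CloseKey(key)
```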
Global spread
The deployment of Locky was a masterpiece of criminality — the infrastructure is highly developed, it was initially tested in the wild on a small scale (ransomware beta testing, basically), and the ransomware is translated into many languages. In short, this was well planned and expertly executed.
One hour of infection stats
Measuring the impact
Locky contains code to spread across network drives, giving it the potential to impact large enterprises, not just individual desktops.
Talk of Locky generated over half a million Twitter impressions this week. It is thought many organisations are simply paying for the decrypter, which is basically paying your hostage takers for freedom. It is also worth noting that many of the IP addresses getting hit are associated with large companies, many in the US; this clearly caught people out.
Sources:
https://medium.com
http://www.idigitaltimes.com
Microsoft Is Killing Support for Internet Explorer 8, 9 and 10 On January 12th
Microsoft is ending support for Internet Explorer 8, 9, and 10 on January 12th. This news has come as a breath of fresh air to many web developers, who considered these versions a bane thanks to their endless security holes.
On Tuesday, a new “End of Life” patch will go live that will ping the Internet Explorer users asking them to upgrade their browsers. This End of Life patch will mean that these older Internet Explorer versions will no longer get regular technical support and security fixes.
This step also means that Internet Explorer 11 is the last version of Microsoft’s vintage browser that’ll be supported. This patch will be delivered as a cumulative security update for these versions:
On Windows 7 Service Pack 1 and Windows 7 Service Pack 1 x64 Edition
On Windows Server 2008 R2 Service Pack 1 and Windows Server 2008 R2 Service Pack 1 x64 Edition
However, if you want to disable this update notification, follow the steps outlined on Microsoft’s support page.
It’s expected that millions of users will choose to ignore these upgrade notifications and thus remain exposed to security risks. So, you are advised to either upgrade your browser or switch to another web browser altogether.
Wireless Myths
Myth #1: “The only interference problems are from other 802.11 networks.”
Summary: The unlicensed band is an experiment by the FCC in unregulated spectrum sharing. The experiment has been a great success so far, but there are significant challenges posed by RF interference that need to be given proper attention.
Myth #2: “My network seems to be working, so interference must not be a problem.”
Summary: Interference is out there. It’s just a silent killer thus far.
Myth #3: “I did an RF sweep before deployment. So I found all the interference sources.”
Summary: You can’t sweep away the interference problem. Microwave ovens, cordless phones, Bluetooth devices, wireless video cameras, outdoor microwave links, wireless game controllers, Zigbee devices, fluorescent lights, WiMAX devices, and even bad electrical connections-all these things can cause broad RF spectrum emissions. These non-802.11 types of interference typically don’t work cooperatively with 802.11 devices.
Myth #4: “My infrastructure equipment automatically detects interference.”
Summary: Simple, automated-response-to-interference products are helpful, but they aren’t a substitute for understanding of the underlying problem.
Myth #5: “I can overcome interference by having a high density of access points.”
Summary: It’s reasonable to over-design your network for capacity, but a high density of access points is no panacea for interference.
Myth #6: “I can analyze interference problems with my packet sniffer.”
Summary: You need the right tool for analyzing interference. In the end, it’s critical that you be able to analyze the source of interference in order to determine the best course of action to handle the interference. In many cases, the best action will be removing the device from the premises.
Myth #7: “I have a wireless policy that doesn’t allow interfering devices into the premises.”
Summary: You have to expect that interfering devices will sneak onto your premises.
Myth #8: “There is no interference at 5 GHz.”
Summary: You can run, but you can’t hide.
Myth #9: “I’ll hire a consultant to solve any interference problems I run into.”
Myth #10: “I give up. RF is impossible to understand.”
Summary: The cavalry is here!
Myth #11: “Wi-Fi interference doesn’t happen very often.”
Myth #12: “I should look for interference only after ruling out other problem sources.”
Summary: Avoid wasting your time. Fix your RF physical layer first.
Myth #13: “There’s nothing I can do about interference if I find it.”
Summary: There’s always a cure for interference, but you need to know what’s ailing you.
Myth #14: “There are just a few easy-to-find devices that can interfere with my Wi-Fi.”
Summary: You need the right tool to find interference fast, and it’s not a magnifying glass.
Myth #15: “When interference occurs, the impact on data is typically minor.”
Summary: Interference can really take the zip out of your Wi-Fi data throughput.
Myth #16 “Voice data rates are low, so the impact of interference on voice over Wi-Fi should be minimal.”
Summary: Can you hear me now? Voice over Wi-Fi and interference don’t mix.
Myth #17: “Interference is a performance problem, but not a security risk.”
Summary: RF security doesn’t stop with Wi-Fi. Do you know who is using your spectrum?
Myth #18: “802.11n and antenna systems will work around any interference issues.”
Summary: Antennas are a pain reliever, but far from a cure.
Myth #19: “My site survey tool can be used to find interference problems.”
Summary: Site survey tools measure coverage, but don’t solve your RF needs.
Windows 10 Major Update Highlights
Windows 10 Upgrade Path
Now that Windows 10 Version 1511 (the first major update) is out, we can look at potential upgrade paths for the OS. For those of you who didn’t know, this version allows the use of Windows 7/8 keys during the installation of Windows 10.
Veeam v9 New Features
From: http://blog.mwpreston.net/2015/11/09/veeam-v9-what-we-know-so-far/
Unlimited Scale-out Backup Repository
This is perhaps one of the biggest features included within v9 – all too often we see environments over-provision the storage for their backup repositories – you never know when you might get a large delta or incremental, and the last thing we want is to run out of space and have to provision more. In the end we are left with a ton of unused and wasted capacity, and when we need more, instead of utilizing what we have, we simply buy more – not efficient in terms of capacity or budget management. This is the problem Veeam is looking to solve in v9 with its Unlimited Scale-out Backup Repository functionality. In a nutshell, the scale-out backup repo will take all of those individual backup repositories you have now and group them into a single entity or pool of storage. From there, we can simply select this global pool of storage as our target rather than an individual repository. Veeam can then choose the best location to place your backup files within the pool, depending on the capabilities and user-defined roles each member of the pool is assigned. In essence it’s a software-defined storage play, only targeted at backup repositories – gone are the days of worrying about which repository to assign to which job – everybody in the pool!
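To make the “everybody in the pool” idea concrete, here is a toy model of extent selection. The “most free space” policy and all names are invented for illustration; Veeam’s real placement logic also weighs per-extent roles and policies.

```python
# Toy model: individual repositories become extents of one pool, and the pool
# (not the operator) picks where a backup file lands.
from dataclasses import dataclass

@dataclass
class Extent:
    name: str
    free_gb: float

class ScaleOutRepo:
    def __init__(self, extents):
        self.extents = extents

    def place(self, backup_gb: float) -> str:
        # Keep only extents with room, then pick the one with the most free space.
        candidates = [e for e in self.extents if e.free_gb >= backup_gb]
        if not candidates:
            raise RuntimeError("pool exhausted -- add another extent")
        best = max(candidates, key=lambda e: e.free_gb)
        best.free_gb -= backup_gb
        return best.name

pool = ScaleOutRepo([Extent("repo-01", 500), Extent("repo-02", 1200)])
print(pool.place(300))   # -> repo-02 (most free space)
```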
More Snapshot/Repository Integration
Backup and restore from storage snapshots is no doubt a more efficient way to process your backups. Just as Veeam added support for HP 3PAR/StoreVirtual and NetApp, we are now seeing EMC thrown into that mix. As of v9 we are able to leverage storage snapshots on EMC VNX/VNXe arrays to process our backups and restores directly from Veeam Backup and Replication – minimizing the impact on our production storage, allowing us to keep more restore points and process them faster, and truly providing the ability to achieve a sub-15-minute RTPO.
On the repository end of things, we’ve seen integration provided for Data Domain and ExaGrid – as of v9 we can throw HP StoreOnce Catalyst into that mix. Tighter integration between Veeam and the StoreOnce deduplication appliance provides a number of performance enhancements for your backups and restores. First off, you will see efficiencies in copying data over slower links due to the source-side deduplication that StoreOnce provides. StoreOnce can also create synthetic full backups by performing only metadata operations, eliminating the need to actually copy the data during synthetic creation, which in turn brings efficiency to a very I/O-intensive operation. And of course, creating repositories for Veeam backups on StoreOnce Catalyst can be done directly from within Veeam Backup & Replication, without the need to jump into separate management tools or UIs.
Cloud connect replication
Last year Veeam announced the Cloud Connect program, which essentially allows partners to become something of a service provider for customers looking to ship their Veeam backups offsite. Well, it’s 2015 now, and the same cloud connect technology is now available for replication. Shipping backups offsite was a great feature, but honestly, being able to provide customers with a simple way to replicate their VMs offsite is ground breaking. Disaster Recovery is a process and technology that is simply out of reach for a lot of businesses – there isn’t budget set aside for a secondary site, let alone extra hardware sitting at that site essentially doing nothing. Now customers can simply leverage a Veeam Cloud/Service Provider and replicate their VMs to its data center on a subscription basis.
DirectNFS
When VMware introduced the VMware APIs for Data Protection (VADP), it was groundbreaking in terms of what it allowed vendors such as Veeam to do for backup. VADP is the basis for how Veeam accesses data in its Direct SAN transport mode, allowing data to be transferred directly from the SAN to the Veeam Backup and Replication console. That said, VADP is only supported on block transports, limiting Direct SAN to just iSCSI and Fibre Channel. In true Veeam fashion, when they see an opportunity to innovate and develop functionality where it may be lacking, they do so. As of v9 we will now be able to leverage Direct SAN mode on our NFS arrays using a technology called DirectNFS. DirectNFS allows the VBR console server to directly mount our NFS exports, letting Veeam process the data directly from the SAN and leaving the ESXi hosts to do what they do best – run production!
On-Demand Sandbox for Storage Snapshots
The opportunities that vPower and Virtual Labs have brought to organizations have been endless. Having the ability to spin up exact duplicates of our production environments, running them directly from our deduplicated backup files, has solved many issues around patch testing, application upgrades, etc. That said, up until now we could only use backup files as the basis for getting access to these VMs – starting with v9, we can now leverage storage snapshots on supported arrays (HP, EMC, NetApp) to create completely isolated copies of the data that resides on them. This is huge for organizations that leverage Virtual Labs frequently for code testing or training. Instead of waiting for backups to occur, we can have a completely isolated testing sandbox spun up from storage snapshots in, essentially, minutes. A very awesome feature in my opinion.
ROBO Enhancements
Customers who currently use Veeam and have multiple locations will be happy to hear about some of the v9 enhancements centering around Remote/Branch Offices. A typical Veeam deployment has a centralized console controlling the backups at all remote locations. In v8, even if you had a remote proxy and repository located at the remote office, all the guest interaction traffic was forced to traverse your WAN, as it was communicated directly from the centralized console. In v9 things have changed – a new Guest Interaction Proxy can be deployed to handle this type of traffic. When placed at the remote location, only simple commands are sent across the WAN from the centralized console to the new GIP, which in turn facilitates the backup of the remote VMs, thus saving on bandwidth and providing more room for, oh, I don’t know, this little thing called production.
When it comes to recovery, things have also drastically changed. In v8, when we performed a file-level recovery, the data actually had to traverse our WAN twice – once when the centralized backup console pulled the data, then again as it pushed it back out to its remote target – not ideal by any means. In v9 we can now designate a remote Windows server as a mount server for that remote location – when a file-level recovery is initiated, the mount server handles the processing of the files rather than the backup console, again saving on bandwidth and time.
Standalone Console
“Veeam Backup & Replication console is already running” <- Any true Veeam end user is sure to have seen this message at one time or another, forcing us to either find and kill the process or yell at someone to log off. As of v9, the Veeam Backup & Replication console has been broken out from the Veeam Backup & Replication server, meaning we can install a client on our laptops in order to access Veeam. This is not a technical change in nature, but honestly it is one of my favorite v9 features. I have a lot of VBR consoles and am just sick of having all those RDP sessions open – this alone is enough to get me to upgrade to VBR v9.
Per-VM backup files
The way Veeam stores our backup files gains another option in version 9. Instead of having one large backup file that contains multiple VMs, we can now enable what is called the “Per-VM backup file chain” option. This stores each VM’s restore points within the job in its own dedicated backup file. Some advantages to this? Think about writing multiple streams in parallel processing mode into our repositories – this should increase the performance of our backup jobs. That said, this sounds like an option you may only want to use if your repository provides deduplication, as you would lose the job-wide deduplication provided by Veeam if you enable it.
New and improved Explorers
The Veeam Explorers are awesome, allowing us to restore individual application objects from our backup files, depending on which application is inside. Well, with v9 we get one new Explorer as well as some great improvements to the existing ones.