Microsoft on Upcoming SQL Server 2016; Goes After Oracle

Data professionals might have been expecting a launch date for SQL Server 2016 at the Data Driven event held today in New York City, but what they got was a recap of the flagship database system’s capabilities and a full-out assault on rival Oracle Corp.

Exec Judson Althoff detailed a SQL Server 2016/Oracle comparison involving a scenario where various capabilities built into SQL Server 2016 were matched up against the Oracle database. “When we say everything’s built in, everything’s built in,” he said. When the built-in capabilities were pitted against similar functionality offered by Oracle products, “Oracle is nearly 12 times more expensive,” he said.

That specific scenario was envisioned with a project starting from scratch. Althoff said not everybody does that, as they have invested in “other technologies.”

Free Licenses for Oracle Switchers
“So if you are willing to migrate off of Oracle, we will actually give you free SQL Server licenses to do so,” Althoff said in his presentation. “For every instance of Oracle you have, free SQL Server licenses. All you have to do is have a Software Assurance agreement with Microsoft. If you’re willing to take this journey with us before the end of June, we’ll actually help and invest in the migration costs, put engineers on the ground to help you migrate off of Oracle.”

 He noted that in the wake of some newspaper ads about the offer, he received e-mails asking just who was eligible. “Everyone is eligible for this,” Althoff said. “We’re super excited to help you migrate off of Oracle technology, lower your overall data processing costs and actually really be enabled and empowered to build the data estate that we’ve been talking about.”

More details on the offer were unveiled in a “Break free from Oracle” page on the Microsoft site. “This offer includes support services to kick-start your migration, and access to our SQL Server Essentials for the Oracle Database Administrator training,” the site says. “Dive into key features of SQL Server through hands-on labs and instructor-led demos, and learn how to deploy your applications — on-premises or in the cloud.”

Microsoft also went after Oracle on the security front, citing information published by the National Institute of Standards and Technology that lists databases and their vulnerabilities. Exec Joseph Sirosh said in his presentation that, on average over the past few years, SQL Server was found to have one-tenth the vulnerabilities of Oracle.

Always Encrypted
Sirosh also highlighted new security capabilities of SQL Server 2016. “In SQL Server 2016, for the first time, you will hear about a capability that we call Always Encrypted,” he said. “This is about securing data all the way from the client, into the database and keeping it secure even when query processing is being done. On the database side, the data is never decrypted, even in memory, and you can still do queries over it.”

He explained that data is encrypted at the client, and sent to the database in its encrypted form, in which it remains even during query processing. No one can decrypt credit card data, for example, while it’s in the database, not even a DBA. “That’s what you want,” Sirosh said of the functionality enabled by homomorphic encryption.
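
The idea Sirosh describes can be sketched in miniature: if the client encrypts values deterministically, the server can match ciphertexts for equality without ever holding the key. The toy scheme below (a simple XOR keystream, purely illustrative and insecure; the real Always Encrypted feature uses AEAD-based deterministic encryption with client-held column encryption keys) shows the shape of the trick:

```python
import hashlib
from itertools import cycle

# Toy deterministic cipher: XOR the plaintext with a key-derived keystream.
# Same key + same plaintext always yields the same ciphertext, so equality
# comparisons work on ciphertext alone. NOT secure -- illustration only.
def encrypt(key: bytes, plaintext: bytes) -> bytes:
    keystream = hashlib.sha256(key).digest()
    return bytes(b ^ k for b, k in zip(plaintext, cycle(keystream)))

decrypt = encrypt  # XOR is its own inverse

key = b"client-held column key"              # never leaves the client
card = b"4111-1111-1111-1111"
stored = {encrypt(key, card): "order #1"}    # the server sees ciphertext only

probe = encrypt(key, card)                   # client encrypts the query value
assert probe in stored                       # equality match, no decryption
assert decrypt(key, probe) == card           # only the client can decrypt
```

Even a DBA inspecting `stored` on the server sees only ciphertext; the lookup succeeds because equal plaintexts produce equal ciphertexts.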

During today’s event, Microsoft CEO Satya Nadella and other presenters focused on a series of customer success videos and live presentations, reflecting Nadella’s belief that Microsoft “shouldn’t have launch events, but customer success events.”

Those success stories leveraged new ground-breaking capabilities of SQL Server 2016, including in-memory performance across all workloads, mission-critical high availability, business intelligence (BI) and advanced analytics tools.

“We are building this broad, deep, digital data platform,” Nadella said. “This platform is going to help every business become a software business, a data business, an intelligence business. That’s our vision.”

Exec Scott Guthrie took the stage to discuss the new support for in-memory advanced analytics and noted that for these kinds of workloads, data pros can use the R programming language, which he described as the leading open source data science language in the industry. Coincidentally, Microsoft yesterday announced R Tools for Visual Studio for machine learning scenarios.

SQL Server on Linux
Providing one of the few real news announcements during the presentation, Guthrie also noted that a private preview of SQL Server on Linux is available today, following up on surprising news earlier in the week that SQL Server was being ported to the open source Linux OS, a move expected to be completed in mid-2017. Guthrie said that unexpected move was part of the company’s strategy of bringing its products and services to a broader set of users and “to meet customers where they’re at.”

Another focus of the event was the new “Stretch Database” capability, exemplifying SQL Server 2016’s close connection to the Microsoft Azure cloud.

“SQL Server is also perhaps the world’s first cloud-bound database,” Sirosh said. “That means we build the features of SQL Server in the cloud first, ship them with Azure SQL DB, and customers have been experiencing it for six to nine months and a very large number of queries have been run against them.”

Sirosh expounded more on this notion in a companion blog post published during the event. “We built SQL Server 2016 for this new world, and to help businesses get ahead of today’s disruptions,” he said. “It supports hybrid transactional/analytical processing, advanced analytics and machine learning, mobile BI, data integration, always encrypted query processing capabilities and in-memory transactions with persistence. It is also perhaps the world’s only relational database to be ‘born cloud-first,’ with the majority of features first deployed and tested in Azure, across 22 global datacenters and billions of requests per day. It is customer tested and battle ready.”

Stretch Database
Features shipped with SQL Server, Sirosh said, “allow you to have wonderful hybrid capabilities, allowing your workload to span both on-premises and the cloud. So Stretch Database is one of them. Data in a SQL Server, cold data, can be seamlessly migrated into databases in the cloud. So you have in effect a database of very large capacity, but it’s always queryable. It’s not just a backup. The data that’s migrated over is living in a database in the cloud, and when you issue queries to the on-premises database, that query is just transported to the cloud and the data comes back — perhaps a little slower, but all your data is still queryable.”
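
The hybrid arrangement Sirosh describes can be sketched as one logical table backed by two stores, where the caller never sees the split. The rows and names below are invented for illustration, not SQL Server's actual mechanism:

```python
from datetime import date

# Hot rows stay local; cold rows have been "stretched" to a remote database.
local_rows  = [{"id": 1, "sold": date(2016, 3, 1), "amount": 250}]
remote_rows = [{"id": 2, "sold": date(2014, 6, 9), "amount": 120}]

def query(predicate):
    """One logical table: local rows answer fast, remote rows arrive
    after a slower round-trip, but both are always queryable."""
    hits = [r for r in local_rows if predicate(r)]
    hits += [r for r in remote_rows if predicate(r)]
    return hits

results = query(lambda r: r["amount"] > 100)
assert {r["id"] for r in results} == {1, 2}  # cold data still answers
```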

The new capabilities for querying data of all kinds in various stages and forms were a focal point for Sirosh.

“We have brought the ability to analyze data at incredible speed into the transactional database so you can do not only mission-critical transactional processing, but mission-critical analytic processing as well,” Sirosh said. “It is the database for building mission-critical intelligent applications without extracting and moving the data, and all the slowness that comes with doing so. So you can now build real-time applications that have sophisticated analytical intelligence behind them. That is the one thing that I would love all of you to take away from this presentation.”

On-Demand Videos for More
At the Data Driven Web site, Microsoft has provided a comprehensive series of videos exploring various aspects of SQL Server, with topics ranging from “AlwaysOn Availability Groups enhancements in SQL Server 2016” to R Services, in-memory OLTP, PolyBase, the Stretch Database, Always Encrypted and many more.

Still, some attendees — virtual or otherwise — were disappointed by the lack of significant news.

“Did this whole thing just finish without so much as a release date?” asked one viewer in a Tweet. “Sigh.”

 

 

Source: https://adtmag.com/Articles/2016/03/10/sql-server-2016.aspx

 

You, your network and the Locky virus

Last Monday, a particularly clever (and nasty) new piece of ransomware called Locky appeared on the internet.

The malicious file went undetected by most anti-virus software for a number of days, and even now, a couple of weeks after it appeared, antivirus products are still struggling to keep up, often taking up to 24 hours to add detection for each new daily iteration of the virus to their definition packages.

This has clearly left users and company networks exposed.

How it works:

It is initially spread through a Word doc attached to an email. Here is an example of one of those emails:

Attached to this email is a Word document containing an alleged invoice.

If Office macros are enabled on this document, it unleashes an executable called ‘ladybi.exe’.

This loads itself into memory, then deletes itself. Whilst resident in memory, it encrypts your documents into hash-named .locky files, changes the desktop wallpaper, creates and opens a .bmp file, creates and opens a .txt file, and deletes VSS snapshots. It can also reach out and encrypt files on your company network!

Once the files are encrypted, a ransom demand appears on the PC, directing the user to the ‘Deep Web’ to make a payment in Bitcoin to get the files decrypted.

Recovery

To recover your files you need to rely on your backups. It is thought unlikely that any tool will become available to break the encryption algorithms. We do not recommend paying ransoms.

Identifying infected network users

If you see .locky extension files appearing on your network shares, look up the file owner of the _Locky_recover_instructions.txt file in each folder. This will tell you the infected user. Lock their AD user and computer accounts immediately and boot them off the network — you will likely have to rebuild their PC from scratch.
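
Assuming you can walk the share from an admin workstation, that triage step might be sketched as follows (the function names are invented for illustration):

```python
import os

# A .locky file or the ransom note marks a folder as touched by the infection.
def is_locky_artifact(name: str) -> bool:
    return (name.lower().endswith(".locky")
            or name == "_Locky_recover_instructions.txt")

def find_locky(root: str) -> list:
    """Walk a share and return the paths of any Locky artifacts found,
    so the file owners (and thus infected users) can be looked up."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        hits += [os.path.join(dirpath, f) for f in files if is_locky_artifact(f)]
    return hits
```

On Windows the owner of each returned path can then be read from the file's security descriptor to identify the account to lock.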

Prevention

User education – do not open emails from unknown sources!

Disable macros in Office documents – this can be done at the network level via Group Policy
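
As a sketch of what that policy enforces, the snippet below emits a .reg fragment setting the Office 2013 (15.0) `VBAWarnings` value to 4, i.e. "disable all macros without notification". In practice you would push the same value through the Group Policy administrative templates rather than a .reg file, and the app list here is illustrative:

```python
APPS = ["Word", "Excel", "PowerPoint"]

def macro_lockdown_reg(office_version: str = "15.0") -> str:
    """Build a .reg fragment that disables all VBA macros without
    notification (VBAWarnings=4) under the Office policy keys."""
    lines = ["Windows Registry Editor Version 5.00", ""]
    for app in APPS:
        lines.append(
            rf"[HKEY_CURRENT_USER\Software\Policies\Microsoft\Office"
            rf"\{office_version}\{app}\Security]")
        lines.append('"VBAWarnings"=dword:00000004')
        lines.append("")
    return "\n".join(lines)

print(macro_lockdown_reg())
```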

Global spread

The deployment of Locky was a masterpiece of criminality — the infrastructure is highly developed, it was tested in the wild initially on a small scale (ransomware beta testing, basically), and the ransomware is translated into many languages. In short, this was well planned and expertly executed.

 

One hour of infection stats

Measuring the impact

Locky contains code to spread across network drives, allowing the potential to impact large enterprises outside of individual desktops.

Discussion of the outbreak generated over half a million Twitter impressions this week. It is thought many organisations are simply paying for the decrypter, which is basically paying your hostage takers for freedom. It’s also worth noting that many of the IP addresses getting hit by this are associated with large companies, many in the US; this clearly caught people out.

Sources:

https://medium.com
http://www.idigitaltimes.com

Microsoft Is Killing Support for Internet Explorer 8, 9 and 10 On January 12th

Microsoft is ending support for Internet Explorer 8, 9, and 10 on January 12th. This news has come as a breath of fresh air, as these older versions were considered a bane by many web developers thanks to their endless security holes.

On Tuesday, a new “End of Life” patch will go live that will prompt Internet Explorer users to upgrade their browsers. This End of Life patch means that these older Internet Explorer versions will no longer get regular technical support and security fixes.

This step also means that Internet Explorer 11 is the last version of Microsoft’s vintage browser that’ll be supported. This patch will be delivered as a cumulative security update for these versions:

On Windows 7 Service Pack 1 and Windows 7 Service Pack 1 x64 Edition

  • Internet Explorer 10
  • Internet Explorer 9
  • Internet Explorer 8

On Windows Server 2008 R2 Service Pack 1 and Windows Server 2008 R2 Service Pack 1 x64 Edition

  • Internet Explorer 10
  • Internet Explorer 9
  • Internet Explorer 8

However, if you want to disable this update notification, follow the steps on Microsoft’s support page.

It’s expected that millions of users will choose to ignore these upgrade notifications, and thus will be prone to security risks. So you are advised to either upgrade your browser or switch to another web browser altogether.

Wireless Myths

Myth #1: “The only interference problems are from other 802.11 networks.”

Summary: The unlicensed band is an experiment by the FCC in unregulated spectrum sharing. The experiment has been a great success so far, but there are significant challenges posed by RF interference that need to be given proper attention.

 

Myth #2: “My network seems to be working, so interference must not be a problem.”

Summary: Interference is out there. It’s just a silent killer thus far.

 

Myth #3: “I did an RF sweep before deployment. So I found all the interference sources.”

Summary: You can’t sweep away the interference problem. Microwave ovens, cordless phones, Bluetooth devices, wireless video cameras, outdoor microwave links, wireless game controllers, Zigbee devices, fluorescent lights, WiMAX devices, and even bad electrical connections: all these things can cause broad RF spectrum emissions. These non-802.11 types of interference typically don’t work cooperatively with 802.11 devices.

 

Myth #4: “My infrastructure equipment automatically detects interference.”

Summary: Simple, automated-response-to-interference products are helpful, but they aren’t a substitute for an understanding of the underlying problem.

 

Myth #5: “I can overcome interference by having a high density of access points.”

Summary: It’s reasonable to over-design your network for capacity, but a high density of access points is no panacea for interference.

 

Myth #6: “I can analyze interference problems with my packet sniffer.”

Summary: You need the right tool for analyzing interference. In the end, it’s critical that you be able to analyze the source of interference in order to determine the best course of action to handle the interference. In many cases, the best action will be removing the device from the premises.

 

Myth #7: “I have a wireless policy that doesn’t allow interfering devices into the premises.”

Summary: You have to expect that interfering devices will sneak onto your premises.

 

Myth #8: “There is no interference at 5 GHz.”

Summary: You can run, but you can’t hide.

 

Myth #9: “I’ll hire a consultant to solve any interference problems I run into.”

Summary: You can’t afford to rely on a third party to debug your network.

 

Myth #10: “I give up. RF is impossible to understand.”

Summary: The cavalry is here!

 

Myth #11: “Wi-Fi interference doesn’t happen very often.”

Summary: There’s no point burying your head in the sand: Wi-Fi interference happens.

 

Myth #12: “I should look for interference only after ruling out other problem sources.”

Summary: Avoid wasting your time. Fix your RF physical layer first.

 

Myth #13: “There’s nothing I can do about interference if I find it.”

Summary: There’s always a cure for interference, but you need to know what’s ailing you.

 

Myth #14: “There are just a few easy-to-find devices that can interfere with my Wi-Fi.”

Summary: You need the right tool to find interference fast, and it’s not a magnifying glass.

 

Myth #15: “When interference occurs, the impact on data is typically minor.”

Summary: Interference can really take the zip out of your Wi-Fi data throughput.

 

Myth #16: “Voice data rates are low, so the impact of interference on voice over Wi-Fi should be minimal.”

Summary: Can you hear me now? Voice over Wi-Fi and interference don’t mix.

 

Myth #17: “Interference is a performance problem, but not a security risk.”

Summary: RF security doesn’t stop with Wi-Fi. Do you know who is using your spectrum?

 

Myth #18: “802.11n and antenna systems will work around any interference issues.”

Summary: Antennas are a pain reliever, but far from a cure.

 

Myth #19: “My site survey tool can be used to find interference problems.”

Summary: Site survey tools measure coverage, but don’t solve your RF needs.

 

Windows 10 Major Update Highlights

  • Windows Update for Business enables control over the deployment of updates within organizations while ensuring devices are kept current and security needs are met, at reduced management cost. Features include setting up device groups with staggered deployments and scaling deployments with network optimizations.
  • Windows Store for Business provides a flexible way to find, acquire, manage and distribute both Windows Store apps and custom line of business apps to Windows 10 devices. Organizations can choose their preferred distribution method by directly assigning apps, publishing apps to a private store, or connecting with management solutions.
  • Mobile Device Management gives IT access to the full power of Enterprise Mobility Management to manage the entire family of Windows devices, including PCs, tablets, phones, and IoT. Windows 10 is the only platform that can manage BYOD scenarios from the device to the apps to the data on those devices – safely and securely. And of course, Windows 10 is fully compatible with the existing management infrastructure used with PCs, giving IT control over how they bridge between the two capabilities.
  • Azure Active Directory Join allows IT to maintain one directory, enabling people to have one login and securely roam their Windows settings and data across all of their Windows 10 devices. AAD Join also enables any machine to become enterprise-ready with a few simple clicks by anyone in the organization.

Windows 10 Upgrade Path

Now that Windows 10 Version 1511 (first major patch) is out, we can look at potential upgrade paths for the OS.  For those of you that didn’t know, this version allows for the use of keys from Windows 7/8 during the installation of Windows 10.


 

Veeam v9 New Features

From: http://blog.mwpreston.net/2015/11/09/veeam-v9-what-we-know-so-far/

Unlimited Scale-out Backup Repository

This is perhaps one of the biggest features included in v9. All too often we see environments over-provision the storage for their backup repositories – you never know when a large delta or incremental might arrive, and the last thing we want is to run out of space and have to provision more. In the end we are left with a ton of unused and wasted capacity, and when we need more, instead of utilizing what we have, we simply buy more – not efficient in terms of capacity or budget management. This is a problem Veeam is looking to solve in v9 with its Scale-out Backup Repository functionality. In a nutshell, the scale-out backup repo takes all of those individual backup repositories you have now and groups them into a single entity, or pool, of storage. From there, we simply select this global pool of storage as our target rather than an individual repository. Veeam can then choose the best location to place your backup files within the pool, depending on the capabilities and user-defined roles assigned to each member of the pool. In essence it’s a software-defined storage play, only targeted at backup repositories – gone are the days of worrying about which repository to assign to which job. Everybody in the pool!
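
The placement idea can be sketched as a simple selection pass over pool members. The repository names, roles, and "most free space wins" rule below are invented for illustration, not Veeam's actual algorithm:

```python
# Each pool member advertises free capacity and user-assigned roles.
repos = [
    {"name": "repo-a", "free_gb": 120, "roles": {"full"}},
    {"name": "repo-b", "free_gb": 800, "roles": {"full", "incremental"}},
    {"name": "repo-c", "free_gb": 300, "roles": {"incremental"}},
]

def place(file_type: str, size_gb: int):
    """Pick the eligible pool member with the most free space, or None
    if no member can take the file."""
    eligible = [r for r in repos
                if file_type in r["roles"] and r["free_gb"] >= size_gb]
    return max(eligible, key=lambda r: r["free_gb"])["name"] if eligible else None

assert place("full", 100) == "repo-b"          # roomiest "full" target
assert place("incremental", 500) == "repo-b"   # repo-c is too small
assert place("full", 1000) is None             # pool is out of space
```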

More Snapshot/Repository integration.

Backup and restore from storage snapshots is no doubt a more efficient way to process your backups. Just as Veeam has added support for HP 3PAR/StoreVirtual and NetApp, we now see EMC thrown into that mix. As of v9 we will be able to leverage storage snapshots on EMC VNX/VNXe arrays to process our backups and restores directly from Veeam Backup and Replication – minimizing the impact on our production storage and allowing us to keep more restore points, process them faster, and truly achieve an RTPO of under 15 minutes.

On the repository end of things we’ve already seen integration with Data Domain and ExaGrid – as of v9 we can throw HP StoreOnce Catalyst into that mix. Tighter integration between Veeam and the StoreOnce deduplication appliance brings a number of performance enhancements to your backups and restores. First, you will see efficiencies when copying data over slower links, thanks to the source-side deduplication StoreOnce provides. StoreOnce can also create synthetic full backups by performing only metadata operations, eliminating the need to actually copy data during synthetic creation, which in turn makes a very I/O-intensive operation far more efficient. And of course, repositories for Veeam backups on StoreOnce Catalyst can be created directly from within Veeam Backup & Replication, without the need to jump into separate management tools or UIs.

Cloud connect replication

Last year Veeam announced the Cloud Connect program, which essentially allows partners to act as a service provider for customers looking to ship their Veeam backups offsite. Well, it’s 2015 now, and the same cloud connect technology is now available for replication. Shipping backups offsite was a great feature, but honestly, being able to give customers a simple way to replicate their VMs offsite is ground-breaking. Disaster recovery is a process and technology that is simply out of reach for a lot of businesses – there isn’t the budget for a secondary site, let alone extra hardware sitting at that site essentially doing nothing. Now customers can simply leverage a Veeam Cloud/Service Provider and replicate their VMs, on a subscription basis, to the provider’s data center.

DirectNFS

When VMware introduced the VMware APIs for Data Protection (VADP), it was ground-breaking in terms of what it allowed vendors such as Veeam to do for backup. VADP is the basis for how Veeam accesses data in its Direct SAN transport mode, allowing data to be transferred directly from the SAN to the Veeam Backup and Replication console. That said, VADP is supported only on block transports, limiting Direct SAN to iSCSI and Fibre Channel. In true Veeam fashion, when the company sees an opportunity to innovate and fill a functional gap, it does so. As of v9 we will be able to leverage Direct SAN mode on our NFS arrays using a technology called DirectNFS. DirectNFS allows the VBR console server to mount our NFS exports directly, letting Veeam process the data straight from storage and leaving the ESXi hosts to do what they do best – run production!

On-Demand Sandbox for Storage Snapshots

The opportunities that vPower and Virtual Labs have brought to organizations have been endless. The ability to spin up exact duplicates of our production environments, running them directly from our deduplicated backup files, has solved many issues around patch testing, application upgrades, etc. That said, up until now we could only use backup files as the source for these VMs – starting with v9 we can leverage storage snapshots on supported arrays (HP, EMC, NetApp) to create completely isolated copies of the data that resides on them. This is huge for organizations that use Virtual Labs frequently to test code or run training. Instead of waiting for backups to complete, we can have a completely isolated testing sandbox spun up from storage snapshots in, essentially, minutes. A very awesome feature in my opinion.

ROBO Enhancements

Customers who currently use Veeam and have multiple locations will be happy to hear about some of the v9 enhancements centered on remote/branch offices. A typical Veeam deployment has a centralized console controlling the backups at all remote locations. In v8, even if you had a remote proxy and repository located at the remote office, all the guest-interaction traffic was forced to traverse your WAN, as it was communicated directly from the centralized console. In v9 things have changed – a new Guest Interaction Proxy can be deployed to handle this type of traffic. When placed at the remote location, only simple commands are sent across the WAN from the centralized console to the new GIP, which in turn facilitates the backup of the remote VMs, saving on bandwidth and providing more room for, oh, I don’t know, this little thing called production.

When it comes to recovery, things have also drastically changed. In v8, when we performed a file-level recovery, the data actually had to traverse our WAN twice – once when the centralized backup console pulled the data, then again as it pushed it back out to its remote target – not ideal by any means. In v9 we can designate a remote Windows server as a mount server for that location – when a file-level recovery is initiated, the mount server handles the processing of the files rather than the backup console, again saving bandwidth and time.

Standalone Console

“Veeam Backup & Replication console is already running” <- Any true Veeam end-user is sure to have seen this message at one time or another, forcing us either to find and kill the process or to yell at someone to log off. As of v9 the Veeam Backup & Replication console has been broken out from the Veeam Backup & Replication server, meaning we can install a client on our laptops to access Veeam. It’s not a technical change in nature, but honestly this is one of my favorite v9 features. I have a lot of VBR consoles and am just sick of having all those RDP sessions open – this alone is enough to make me upgrade to VBR v9.

Per-VM backup files

The way Veeam stores our backup files is getting another option in version 9. Instead of one large backup file that contains multiple VMs, we can now enable what is called a “per-VM backup file chain” option. What this does is store each VM’s restore points within the job in its own dedicated backup file. The advantage? Think about writing multiple streams in parallel-processing mode into our repositories – this should increase the performance of our backup jobs. That said, it sounds like an option you may only want to use if your repository provides deduplication, as you would lose the job-wide deduplication Veeam provides once this is enabled.
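
The deduplication trade-off can be illustrated with a toy block count (the block names and counts are invented for illustration):

```python
# Two VMs share an identical "os" block; "app1"/"app2" are unique to each.
vm_blocks = {"vm1": ["os", "app1"], "vm2": ["os", "app2"]}

# One job-wide file: identical blocks are stored once across all VMs.
job_wide = set()
for blocks in vm_blocks.values():
    job_wide.update(blocks)

# Per-VM chains: each file can deduplicate only within its own VM.
per_vm = sum(len(set(blocks)) for blocks in vm_blocks.values())

assert len(job_wide) == 3   # "os" stored once across the job
assert per_vm == 4          # "os" stored in both per-VM chains
```

A deduplicating repository recovers the duplicate "os" block at the storage layer, which is why per-VM chains pair naturally with such appliances.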

New and improved Explorers

The Veeam Explorers are awesome, allowing us to restore individual application objects from our backup files depending on what application is inside it.  Well, with v9 we have one new explorer as well as some great improvements to the existing ones.

  • Veeam Explorer for Oracle – new in v9 is explorer functionality for Oracle. Transaction-level recovery and transaction log backup and replay are just a couple of the innovative features we can now perform on our Oracle databases.
  • Veeam Explorer for MS Exchange – We can now get a detailed export report outlining exactly what has been exported from our Exchange servers – great for auditing and reporting purposes for sure! Another small but great feature – Veeam will now provide us with an estimate of the export size for the data contained in our search queries. At least we will have some idea how long it might take.
  • Veeam Explorer for Active Directory – Aside from users, groups, and the other AD objects we might want to restore, we can now process GPOs and AD-integrated DNS records. Oh, and if you know what you are doing, Veeam v9 can also restore configuration partition objects (I’ll stay away from this one).
  • Veeam Explorer for MS SQL – One big item that has been missing from the SQL explorer is table-level recovery – in v9 this is now possible. Also in v9 is the ability to process even more SQL objects, such as stored procedures, functions and views, as well as to utilize a remote SQL Server as a staging server for the restore.
  • Veeam Explorer for SharePoint – As much as I hate it, SharePoint is still widely used, therefore we are still seeing development on Veeam’s explorer. In v9 we can process and restore full sites as well as site collections. Also, list- and item-level permissions can now be restored as well.

Another layer of protection: Cryptolocker and other malware

Preventative Workstation protection:

This virus launches from a specific location on the workstation, so it’s recommended to add a Group Policy setting to block it on Windows Vista/7/8 and on XP.

Use software restriction policies as follows:

Windows 7:

You can use Software Restriction Policies to block executables from running when they are located in the %AppData% folder, or any other folder. The file path of the infection is: C:\Users\User\AppData\Roaming\{213D7F33-4942-1C20-3D56=8-1A0B31CDFFF3}.exe (Vista/7/8)
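
A quick way to reason about such a path rule is to test candidate paths against the blocked pattern. The sketch below uses shell-style matching as an approximation of how a path-based restriction applies; the pattern is illustrative, not the exact policy syntax:

```python
import fnmatch

# A path rule covering executables launched from %AppData%\Roaming.
BLOCKED_PATTERNS = [r"C:\Users\*\AppData\Roaming\*.exe"]

def is_blocked(path: str) -> bool:
    """True if the path falls under a blocked launch location."""
    return any(fnmatch.fnmatch(path, pat) for pat in BLOCKED_PATTERNS)

assert is_blocked(r"C:\Users\User\AppData\Roaming\bad.exe")
assert not is_blocked(r"C:\Program Files\App\good.exe")
```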

Office 2013 Activation error of death solved!

O365 Office 2013 Activation error code 0x8004FC12

This is something that has been annoying me for a while. It only happens on my home computer and will not go away. I’ve tried reinstalling, setting up new profiles, un-associating my personal O365 account, and repairing Office. I even gave up and started using Office 2010.

The problem doesn’t occur on any of my other Windows 10 machines, yet a search on the Internet shows I’m not alone. All the forums show frustrated people trying everything, only to be told to reinstall a clean copy of Windows 10 (uhh… no).

Luckily, on a tangent day, I decided to check up on the error messages.  To my surprise, I found a promising Microsoft article:

Are you ready for Windows 10?

Recently we started disabling the Windows 10 pop-ups for our MSP clients. We just feel that Windows 10 isn’t ready for the corporate environment. There are a few troubling things about it.

  • The interface. Most people can get used to it relatively quickly, but the desktop environment is more of a touch interface than prior versions.
  • Compatibility. A few days ago I saw a statement from our bank saying not to install Windows 10 for use with their software and products. This totally made sense, as from past experience getting banking and payroll software to work is very tricky.

Home users appear to be enjoying Windows 10, but they aren’t worried about making money based on their computer working. Check back soon for more to come on this topic!