
VMware vCenter Converter

VMware vCenter Converter : Tips and Best Practices

VMware vCenter Converter can convert Windows- and Linux-based physical machines and Microsoft Hyper-V systems into VMware virtual machines.

Here are some tips and suggested best practices.

Tasks to perform before conversion :

  • Make sure you know the local Administrator password! If the computer account gets locked out of the domain, you are likely going to need to log in locally to recover.
  • Ensure you are using the latest version of VMware vCenter Converter.
  • If possible, install VMware vCenter Converter locally on the source (physical machine) operating system.
  • Make a note of the source machine IP addresses. The conversion will create a new NIC, and having those IP details handy will help.
  • Disable any anti-virus.
  • Disable SSL encryption – this should speed up the conversion (described here).
  • If you have stopped and disabled any services, make sure to take a note of their state beforehand. A simple screenshot goes a long way here!
  • If converting from Hyper-V to VMware, install the Converter on the host and power down the converter before starting the conversion.
  • Uninstall any hardware-specific software utilities from the source server.
  • If the source system has any redundant NICs, I would suggest removing them in the Edit screen of the Converter UI.
  • For existing NICs, use the VMXNET3 driver and set it to not connected.

Special considerations for Domain Controllers, MS Exchange and SQL servers.

Although you tend to get warned off converting Domain Controllers, they do work OK if you take some sensible precautions:

  • Move FSMO roles to another Domain Controller
  • Make another Domain Controller the PDC
  • Stop Active Directory services
  • Stop DHCP service (if applicable)
  • Stop DNS service (if applicable)

For SQL and Exchange, you should stop and disable all Exchange and SQL services on the source machine and only start them back up on the target VM once you are happy the server is successfully back on the domain.

(Note: these steps are not necessary for V2V conversions, and you should have the system powered off!)

________________________________________________

Tasks to perform after conversion :

  • Once the conversion has successfully completed, get the source physical machine off the network. You can disable the NIC, pull the cable and/or power it down. It should not come up again.
  • For V2V conversion, delete the NIC from the system’s hardware properties completely.
  • Once the physical machine is off the network, bring the virtual machine up (ensure the network is not connected initially).
  • Install VMware Tools and set the IP config (that you noted during the pre-conversion steps).
  • Shut down, connect the network and bring your virtual system back up.
  • Uninstall VMware vCenter Converter from the newly converted virtual machine.

Special considerations for Domain Controllers, MS Exchange and SQL servers.

  • Create a test user on the DC and ensure it gets replicated to the other ones.
  • Delete this test user and ensure that gets replicated.
  • Create a test GPO and ensure it replicates across all domain controllers.
  • Check the system, application and, importantly, the File Replication Service logs to ensure that there are no issues with replication.

 

For SQL and Exchange: double check that there are no trust issues on the virtual machine. Try connecting to the ADMIN$ share from multiple locations. If you do find the computer account locked out, taking the machine in and out of the domain normally fixes it.

Once happy the machine is on your domain without any trust issues – restart and reconfigure the SQL/Exchange services as per how they originally were.
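The pre-conversion notes (service state, IP details) and the post-conversion trust checks described above can be scripted rather than captured in screenshots. The following PowerShell is an added example, not part of the original post; it assumes PowerShell 3.0 or later, and the folder C:\ConversionNotes and all function names are hypothetical.

```powershell
# Added example, not from the original post. Assumes PowerShell 3.0+.
# C:\ConversionNotes and the function names are hypothetical. The folder sits
# on the source disk, so the snapshot is carried into the converted VM.
$NotesDir = 'C:\ConversionNotes'

function Save-PreConversionState {
    # Run on the source machine BEFORE stopping or disabling any services.
    New-Item -ItemType Directory -Path $NotesDir -Force | Out-Null

    # State and start mode of every service (replaces the screenshot).
    Get-CimInstance -ClassName Win32_Service |
        Select-Object Name, DisplayName, State, StartMode |
        Sort-Object Name |
        Export-Csv -Path (Join-Path $NotesDir 'services-before.csv') -NoTypeInformation

    # Addressing of every IP-enabled adapter, needed to configure the new NIC.
    Get-CimInstance -ClassName Win32_NetworkAdapterConfiguration -Filter 'IPEnabled = TRUE' |
        Select-Object Description, MACAddress, DHCPEnabled,
            @{ Name = 'IPAddress';      Expression = { $_.IPAddress -join ';' } },
            @{ Name = 'IPSubnet';       Expression = { $_.IPSubnet -join ';' } },
            @{ Name = 'DefaultGateway'; Expression = { $_.DefaultIPGateway -join ';' } },
            @{ Name = 'DnsServers';     Expression = { $_.DNSServerSearchOrder -join ';' } } |
        Export-Csv -Path (Join-Path $NotesDir 'ipconfig-before.csv') -NoTypeInformation
}

function Compare-ServiceState {
    # Run on the converted VM. Lists every service whose state or start mode
    # differs from the snapshot: '<=' rows are the original values,
    # '=>' rows are the current ones.
    $before = Import-Csv -Path (Join-Path $NotesDir 'services-before.csv')
    $after  = Get-CimInstance -ClassName Win32_Service |
        Select-Object Name, State, StartMode
    Compare-Object -ReferenceObject $before -DifferenceObject $after `
        -Property Name, State, StartMode |
        Sort-Object Name, SideIndicator
}

function Test-DomainTrust {
    # Run on the converted VM once it is back on the network.
    # Returns $true when the machine account's secure channel to the domain works.
    Test-ComputerSecureChannel
}

function Test-AdminShare {
    # Run from several OTHER domain machines against the converted VM,
    # matching the "connect to ADMIN$ from multiple locations" check.
    param([Parameter(Mandatory = $true)][string]$ComputerName)
    Test-Path -Path "\\$ComputerName\ADMIN$"
}

# Typical order of use:
#   Source machine, before conversion:  Save-PreConversionState
#   Converted VM, after re-joining:     Test-DomainTrust; Compare-ServiceState
#   Other machines:                     Test-AdminShare -ComputerName <converted VM>
```

Only restart and re-enable the SQL/Exchange services listed by Compare-ServiceState once Test-DomainTrust returns True.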

 

Windows Server 2016

Windows Server 2016 docs are now on docs.microsoft.com

Microsoft have recently announced that their IT pro technical documentation for Windows Server 2016 and Windows 10 and Windows 10 Mobile is now available at docs.microsoft.com.


Why move to docs.microsoft.com?

Here is what Microsoft promises:

“a crisp new responsive design that looks fantastic on your phone, tablet, and PC. But, more importantly, you’ll see new ways to engage with Microsoft and contribute to the larger IT pro community. From the ground up, docs.microsoft.com that offers:

  • A more modern, community-oriented experience that’s open to your direct contribution and feedback.
  • Improved content discoverability and navigation, getting you to the content you need – fast.
  • In article Comments and inline feedback.
  • Downloadable PDF versions of key IT pro content collections and scenarios. To see this in action, browse to the recently released Performance Tuning Guidelines for Windows Server 2016 articles, and click Download PDF.
  • Active and ongoing site improvements, including new features, based on your direct feedback. Check out the November 2016 platform update post to see the latest features on docs.microsoft.com.”

How to contribute to IT pro content

Microsoft recognize that customers are eager to share best practices, optimizations, and samples with the larger IT pro community. Docs.microsoft.com makes contribution easy.

The content is open for community contributions. Learn more about editing an existing IT pro article.


server RAM

Choose the best server RAM configuration

Watch your machine memory configurations – always take care to implement the best server RAM configuration! You can’t just throw RAM at a physical server and expect it to work the best it possibly can. Depending on your DIMM configuration, you might unwittingly slow down your memory speed, which will ultimately slow down your application servers. This speed decrease is virtually undetectable at the OS level, but anything that leverages lots of RAM to function, including application servers such as a database server, can take a substantial performance hit.

An example of this is if you wish to configure 384GB of RAM on a new server. The server has 24 memory slots. You could populate each of the memory slots with 16GB sticks of memory to get to the 384GB total. Or, you could spend a bit more money to buy 32GB sticks of memory and only fill up half of the memory slots. Your outcome is the same amount of RAM. Your price tag on the memory is slightly higher than the relatively cheaper smaller sticks.

In this configuration, your 16GB DIMM configuration runs the memory 22% slower than if you buy the higher density sticks. The fully populated 16GB stick configuration runs the memory at 1866 MHz. If you only fill in the 32GB sticks on half the slots, the memory runs at 2400 MHz.

Database servers, both physical and virtual, use memory as an I/O cache, improving the performance of the database engine by reducing the dependency on slower storage and leveraging the speed of RAM to boost performance. If the memory is slower, your databases will perform worse. Validate your memory speed on your servers, both now and for upcoming hardware purchases. Ensure that your memory configuration yields the fastest possible performance – implement the best server RAM configuration – your applications will be better for it!

Linux Patch Management

The Importance of Linux Patch Management

In recent news there have been a number of serious vulnerabilities found in various Linux systems. Whilst OS vulnerabilities are a common occurrence, it’s the nature of these that has garnered so much interest. Linux patch management should be considered a priority in ensuring the security of your systems.

The open-source Linux operating system is used by most of the servers on the internet as well as in smartphones, with an ever-growing desktop user base as well.

Open-source software is typically considered to increase the security of an operating system, since anyone can read, re-use and suggest modifications to the source code – part of the idea being that many people involved would increase the chances of someone finding and hopefully fixing any bugs.

With that in mind let’s turn our sights on the bug known as Dirty Cow (CVE-2016-5195) found in October – named as such since it exploits a mechanism called “copy-on-write” and falls within the class of vulnerabilities known as privilege escalation. This would allow an attacker to effectively take control of the system.

What makes this particular vulnerability so concerning however isn’t the fact that it’s a privilege escalation bug, but rather that it was introduced into the kernel around nine years ago. Exploits already taking advantage of Dirty Cow were also found after the discovery of the bug by Phil Oester. This means that a reliable means of exploitation is readily available, and due to its age, it will be applicable to millions of systems.

Whilst Red Hat, Debian and Ubuntu have already released patches, millions of other devices are still vulnerable – worse still, on embedded versions of the operating system and older Android devices there are difficulties in applying the updates, or they may not receive any at all, leaving them vulnerable.

Next, let’s have a look at a more recent vulnerability which was found in Cryptsetup (CVE-2016-4484), which is used to set up encrypted partitions on Linux using LUKS (Linux Unified Key Setup). It allows an attacker to obtain a root initramfs shell on affected systems. At this point, depending on the system in question, it could be used for a number of exploitation strategies according to the researchers who discovered the bug, namely:

  • Privilege escalation: if the boot partition is not encrypted:
    — It can be used to store an executable file with the “SetUID” bit enabled, which can later be used by a local user to escalate privileges.
    — If the boot is not secured, then it would be possible to replace the kernel and the initrd image.
  • Information disclosure: It is possible to access all the disks. Although the system partition is encrypted, it can be copied to an external device, where it can later be brute forced. Obviously, it is possible to access non-encrypted information on other devices.
  • Denial of service: The attacker can delete the information on all the disks, causing downtime of the system in question.

Whilst many believe the severity and/or likely impact of this vulnerability has been exaggerated considering you need physical or remote console access (which many cloud platforms provide these days), what makes it so interesting is just how it is exploited.

All you need to do is repeatedly hit the Enter key at the LUKS password prompt until a shell appears (approximately 70 seconds later) – the vulnerability is the result of incorrect handling of password retries once the user exceeds the maximum number (by default 3).

The researchers also made several notes regarding physical access and explained why this and similar vulnerabilities remain of concern. It’s generally accepted that once an attacker has physical access to a computer, it’s pwned. However, they highlighted that with the use of technology today, there are many levels of what can be referred to as physical access, namely:

  • Access to components within a computer – where an attacker can remove/replace/insert anything including disks, RAM etc. like your own computer
  • Access to all interfaces – where an attacker can plug in any devices including USB, Ethernet, Firewire etc. such as computers used in public facilities like libraries and internet cafes.
  • Access to front interfaces – usually USB and the keyboard, such as systems used to print photos.
  • Access to a limited keyboard or other interface – like a smart doorbell, alarm, fridge, ATM etc.

Their point is that the risks are not limited to traditional computer systems, and that the growing trends around IoT devices will increase the potential reach of similar attacks – look no further than our last article on DDoS attacks since IoT devices like printers, IP cameras and routers have been used for some of the largest DDoS attacks ever recorded.

This brings us back around to the fact that now, more than ever, it’s of critical importance that you keep an eye on your systems and ensure any vulnerabilities are patched accordingly, and more importantly – in a timely manner. Linux patch management should be a core consideration for all IT systems, whether they are servers or workstations, and of course regardless of the operating systems used.

This article was provided by our service partner ESET

Windows Server 2016

The next version of Windows Server is here and it’s packed with a lineup of great new features, from software-defined storage and network improvements to Docker-driven containers.

True to type, the new version of Windows Server 2016 presents us with a multitude of new features. Added networking and storage capabilities build on the software-defined infrastructure that began in Windows Server 2012. Microsoft’s focus on the cloud is apparent with capabilities such as containers and Nano Server. Security is still a priority with the Shielded VMs feature.

Docker-Driven Containers

Microsoft has worked together with Docker to bring full support for the Docker ecosystem to Windows Server 2016. Docker containers wrap a piece of software in a complete filesystem that contains everything needed to run: code, runtime, system tools, system libraries – anything that can be installed on a server. This guarantees that the software will always run the same, regardless of its environment. Containers represent a huge step for Microsoft as it embraces the open source world. You install support for Containers using the standard method to enable Windows features through Control Panel or via the PowerShell command:

Install-WindowsFeature containers

You must also download and install the Docker engine to get all of the Docker utilities. This line of PowerShell will download a Zip file with everything you need to install Docker on Windows Server 2016:

Invoke-WebRequest "https://get.docker.com/builds/Windows/x86_64/docker-1.12.1.zip" -OutFile "$env:TEMP\docker-1.12.1.zip" -UseBasicParsing
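An archive that is about to be unpacked onto a server is worth an integrity check before anything is extracted. The following is an added example, not taken from the original article or from Microsoft's or Docker's documentation; the expected SHA-256 value is a placeholder for the checksum published with the release, and the helper name and the extraction destination are assumptions.

```powershell
# Added example. $ExpectedSha256 is a PLACEHOLDER: replace it with the checksum
# published by the vendor for this exact file. Test-FileSha256 is a hypothetical
# helper, and extracting to Program Files is an assumption.
$Url            = 'https://get.docker.com/builds/Windows/x86_64/docker-1.12.1.zip'  # URL from the article
$ZipPath        = Join-Path $env:TEMP 'docker-1.12.1.zip'
$ExpectedSha256 = '<PUBLISHED-SHA256-GOES-HERE>'

function Test-FileSha256 {
    param(
        [Parameter(Mandatory = $true)][string]$Path,
        [Parameter(Mandatory = $true)][string]$Expected
    )
    # Reject anything that is not a 64-digit hex string, so an unfilled
    # placeholder can never be mistaken for a match.
    if ($Expected -notmatch '^[0-9A-Fa-f]{64}$') { return $false }
    $actual = (Get-FileHash -Path $Path -Algorithm SHA256).Hash
    return ($actual -eq $Expected)   # -eq on strings is case-insensitive
}

# Windows PowerShell 5.1 may not negotiate TLS 1.2 by default; enable it for this session.
[Net.ServicePointManager]::SecurityProtocol =
    [Net.ServicePointManager]::SecurityProtocol -bor [Net.SecurityProtocolType]::Tls12

Invoke-WebRequest -Uri $Url -OutFile $ZipPath -UseBasicParsing

# Fail closed: a mismatching archive is deleted and never extracted.
if (-not (Test-FileSha256 -Path $ZipPath -Expected $ExpectedSha256)) {
    Remove-Item -Path $ZipPath -Force
    throw "Checksum mismatch for $ZipPath; archive deleted, nothing was installed."
}

# Only a verified archive is unpacked.
Expand-Archive -Path $ZipPath -DestinationPath $env:ProgramFiles
```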

Full documentation for getting started with containers can be found on the Microsoft MSDN website. New PowerShell cmdlets provide an alternative to Docker commands to manage your containers (see Figure 1).


Figure 1: You can manage both Windows Server Containers and Hyper-V Containers through native Docker commands or through PowerShell (shown).

It’s important to note that Microsoft supports two different container models: Windows Server Containers and Hyper-V Containers. Windows Server Containers are based on the typical Docker concepts, running each container as an application on top of the host OS. By contrast, Hyper-V Containers are completely isolated virtual machines, incorporating their own copy of the Windows kernel, but more lightweight than traditional VMs.

Windows containers are built against a specific operating system and are cross-compiled with Linux to provide the same experience and common Docker engine. For you, this means that Windows containers support the Docker experience, including the Docker command structure, Docker repositories, Docker datacenter and orchestration. In addition, Windows containers extend the Docker community to provide Windows innovations such as PowerShell to manage Windows or Linux containers.

Nano Server

Nano Server is another key component of Microsoft’s strategy to be highly competitive in the private cloud market. Nano Server is a stripped-down version of Windows Server 2016. It’s so stripped down, in fact, that it doesn’t have any direct user interface besides the new Emergency Management console. You will manage your Nano instances remotely using either Windows PowerShell or the new Remote Server Administration Tools. Its first benefit is as an infrastructure host that can run Hyper-V, File Server and Failover Clustering, and it will be a great container host as well.

Figure 2: Nano Server not only boots faster, it consumes less memory and less disk than any other version of Windows Server.


 

Storage QoS Updates

 

Storage QoS enables administrators to provide virtual machines, and their applications by extension, predictable performance to an organization’s networked storage resources. Storage QoS helps level the playing field while virtual machines jockey for storage resources. According to a related Microsoft support document, the feature helps reduce “noisy neighbor” issues caused by resource-intensive virtual machines. “By default, Storage QoS ensures that a single virtual machine cannot consume all storage resources and starve other virtual machines of storage bandwidth,” stated the company.

It also offers administrators the confidence to load up on virtual machines by providing better visibility into their virtual machine storage setups. “Storage QoS policies define performance minimums and maximums for virtual machines and ensures that they are met. This provides consistent performance to virtual machines, even in dense and overprovisioned environments,” Microsoft wrote.

Windows Server 2016 allows you to centrally manage Storage QoS policies for groups of virtual machines and enforce those policies at the cluster level. This could come into play in the case where multiple VMs make up a service and should be managed together. PowerShell cmdlets have been added in support of these new features, including Get-StorageQosFlow, which provides a number of options to monitor the performance related to Storage QoS; Get-StorageQosPolicy, which will retrieve the current policy settings; and New-StorageQosPolicy, which creates a new policy.

 

Shielded VMs

Shielded VMs, or Shielded Virtual Machines, are a security feature introduced in Windows Server 2016 for protecting Hyper-V Generation 2 virtual machines (VMs) from unauthorized access or tampering. Shielded VMs use a centralized certificate store and VHD encryption to authorize the activation of a VM when it matches an entry on a list of permitted and verified images. VMs use a virtual TPM to enable the use of disk encryption with BitLocker. Live migrations and VM state are also encrypted to prevent man-in-the-middle attacks.

The Host Guardian Service (HGS) (typically a cluster of 3 nodes) supports two different attestation modes for a guarded fabric:

  • TPM-trusted attestation (hardware based)
  • Admin-trusted attestation (AD based)

TPM-trusted attestation is recommended because it offers stronger assurances, as explained in the following table, but it requires that your Hyper-V hosts have TPM 2.0. If you currently do not have TPM 2.0, you can use Admin-trusted attestation. If you decide to move to TPM-trusted attestation when you acquire new hardware, you can switch the attestation mode on the Host Guardian Service with little or no interruption to your fabric.

Figure 3: Shielded VMs are encrypted at rest using BitLocker. They can be run by an authorized administrator only on known, secure, and healthy hosts.


Fast Hyper-V Storage with ReFS

The Resilient File System (ReFS) is another feature introduced with Windows Server 2012. ReFS has huge performance implications for Hyper-V. New virtual machines with a fixed-size VHDX are created instantly. The same advantages apply to creating checkpoint files and to merging VHDX files created when you make a backup. These capabilities resemble what Offload Data Transfers (ODX) can do on larger storage appliances.

RemoteFX

Microsoft also made some improvements to RemoteFX in Windows Server 2016, which now includes support for the OpenGL 4.4 and OpenCL 1.1 APIs. It also allows you to use larger dedicated VRAM, and VRAM is now finally configurable.

Hyper-V rolling upgrades

Windows Server 2016 enables you to upgrade to a new operating system without taking down the cluster or migrating to new hardware. In previous versions of Windows Server, it was not possible to upgrade a cluster without downtime, which caused significant issues for production systems. The new process is similar in that individual nodes in the cluster must have all active roles moved to another node in order to upgrade the host operating system. The difference is that all members of the cluster will continue to operate at the Windows Server 2012 R2 functional level (and support migrations between old and upgraded hosts) until all hosts are running the new operating system and you explicitly upgrade the cluster functional level (by issuing a PowerShell command).

Hyper-V hot add NICs and memory

Previous versions of Hyper-V did not allow you to add a network interface or more memory to a running virtual machine. Microsoft now allows you to make some critical machine configuration changes without taking the virtual machine offline. The two most important changes involve networking and memory.

In the Windows Server 2016 version of Hyper-V Manager, you’ll find that the Network Adapter entry in the Add Hardware dialog is no longer grayed out. The benefit is that an administrator may now add network adapters and memory to VMs originally configured with fixed amounts of memory, while the VM is running.

Storage Replica

Storage Replica is a new feature that enables storage-agnostic, block-level, synchronous replication between clusters or servers for disaster preparedness and recovery, as well as stretching of a failover cluster across sites for high availability. Synchronous replication enables mirroring of data in physical sites with crash-consistent volumes, ensuring zero data loss at the file system level. Asynchronous replication allows site extension beyond metropolitan ranges.

Storage Spaces Direct (S2D), formerly known as “Shared Nothing”: WS2016 introduces the second iteration of the software-defined storage feature known as Storage Spaces to bring cloud-inspired capabilities to the data center with advances in computing, networking, storage, and security. This S2D local storage architecture takes each storage node and pools it together using Storage Spaces for data protection (two- or three-way mirroring as well as parity). The local storage can be SAS or SATA (SATA SSDs provide a significant cost savings) or NVMe for increased performance.

Enabling this feature can be accomplished with a single PowerShell command:

Enable-ClusterStorageSpacesDirect

This command will initiate a process that claims all available disk space on each node in the cluster, then enables caching, tiering, resiliency, and erasure coding across columns for one shared storage pool.

 

Networking enhancements

Converged Network Interface Card (NIC). The converged NIC allows you to use a single network adapter for management, Remote Direct Memory Access (RDMA)-enabled storage, and tenant traffic. This reduces the capital expenditures that are associated with each server in your datacenter, because you need fewer network adapters to manage different types of traffic per server.

Another facility is Packet Direct. Packet Direct provides a high network traffic throughput and low-latency packet processing infrastructure.

Windows Server 2016 includes a new server role called Network Controller, which provides a central point for monitoring and managing network infrastructure and services. Other enhancements supporting the software-defined network capabilities include an L4 load balancer, enhanced gateways for connecting to Azure and other remote sites, and a converged network fabric supporting both RDMA and tenant traffic.

As we move to virtualized instances in the cloud, it becomes important to reduce the footprint of each instance, to increase the security around them, and to bring more automation to the mix. In Windows Server 2016, Microsoft is pushing ahead on all of these fronts at once. Windows Server 2016 makes it easier to pick up the cloud way of functioning so you can change the way your server apps work as quickly as you want, even if you’re not using the cloud.

 

Windows 10 Anniversary Update

Late last month, Microsoft announced a major update to Windows 10 would be made available on August 9th.

In a post on the Windows Experience Blog, Microsoft revealed a list of new features and security upgrades, improvements to Cortana and a set of features aimed at making the Windows 10 experience better on smartphones and tablets.

This news arrives almost exactly a year to the day after the consumer launch of Windows 10. The new operating system has seen massive adoption by both business and consumer users in the past year, and Microsoft hope these upgrades spur further adoption by any stragglers.

Security

  • Windows Hello will now have integration with biometrics.  This will allow users to embrace security without compromising convenience.
  • Improvements to Windows Defender (MS Antimalware software)
    • Windows Defender Advanced Threat Protection — cloud based antimalware software for enterprise
  • Windows Information Protection (more information here)

Cortana

This update will include updates to Cortana, the Microsoft virtual assistant, to hopefully make her more useful. The assistant is now available to take commands on users’ lock screens, so they can do things like ask questions and play music without having to unlock their devices. Cortana can also remember things for users, such as their shopping lists or important to-do items, so that people do not have to refer to other platforms to retrieve them.

Windows Ink

Microsoft is also introducing new tools that make it easier to jot down notes using a touchscreen-enabled tablet or laptop. The Windows Ink features give users a virtual notepad to doodle, sketch or scribble down notes without having to wait for an app to launch.  Furthermore, key apps have new ink-specific features, like using handwriting in Office, ink annotations in Microsoft Edge or drawing custom routes in Maps.

That only touches on a few of the key items in the update; there will be further security enhancements and improved Xbox integration. Microsoft Edge also received a handful of updates, including support for browser extensions, which should make it a more credible alternative to Chrome or Firefox.

Edge Browser

  • Battery usage efficiency gains — up to 3 hours compared to Google Chrome
  • Extensions available
  • Accessibility with HTML5, CSS3, ARIA

Application Whitelisting Using Software Restriction Policies

Software Restriction Policies (SRP) allow administrators to manage which applications are permitted to run on Microsoft Windows. SRP is a Windows feature that can be configured as a local computer policy or as a domain policy through Group Policy with Windows Server 2003 domains and above. Using SRP as a white-listing technique improves the security of the domain by preventing malicious programs from running, since administrators manage which software or applications are allowed to run on client PCs.

Blacklisting is a reactive technique that does not scale well to the increasing number and variety of malware. Many attacks cannot be blocked by blacklisting techniques because they use undiscovered vulnerabilities, known as zero-day vulnerabilities.

Application white-listing, on the other hand, is a practical technique where only a limited number of programs are allowed to run and the rest are blocked by default. It makes it hard for attackers to get into the network, since a successful attack needs to exploit one of the allowed programs on the user’s computer or get around the white-listing mechanism. This approach should not be seen as a replacement for standard security software such as anti-virus or firewalls; it is best used in conjunction with these.

Since Microsoft Windows operating systems have SRP functionality built in, administrators can readily configure an application white-listing solution that only allows specific executable files to be run. Software Restriction Policies can also restrict which application libraries are permitted to be used by executables.

There are certain SRP settings recommended by the NSA Information Assurance Directorate’s (IAD) Systems and Network Analysis Center (SNAC). It is advised to test any configuration changes on a test network or on a small set of test computers to make sure that the settings are correct before implementing the change on the whole domain.

There are known issues on certain Windows versions to consider. One is a minor usability issue: double-clicking a document may not open the associated document viewer application. Another is that the software update method that allows users to manually apply patches may not function well once SRP is enforced. We may see these issues addressed with a hotfix provided by Microsoft. Automatic updates are not affected by SRP white-listing and will still function correctly. Because of issues like these, SRP settings should be tested thoroughly to avoid causing a widespread problem in your production environment.

Path-based SRP rules are recommended, since after a good deal of testing they have shown no noticeable performance impact on the host. Other rules may provide greater security benefits than path-based rules, but they have an increased impact on host performance. File hash rules are more difficult to manage and need updating each time any files are installed or updated; certificate rules are somewhat limited, since not all applications’ files are digitally signed by their software publishers.

SRP can be implemented in an Active Directory domain through the steps below:

1. Review the domain to find out which applications are operating on domain computers.

2. Configure SRP to work in white-listing approach.

3. Choose which applications must be permitted to run and make extra SRP rules as required.

4. Test the SRP rules and form additional rules as needed.

5. Deploy SRP to successively larger Organizational Units until SRP is applied to the entire network.

6. Observe SRP continuously and adjust the rules when needed.

SRP configuration as described above can drastically improve the security posture of a domain while still letting users run the applications they need to remain productive.
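For the testing and ongoing observation steps, it helps to read back what a client has actually received. The following PowerShell is an added example, not part of the original article or of any NSA guidance: a read-only audit that reports whether SRP is in white-listing mode and flags path rules that allow execution from locations users can normally write to. It assumes PowerShell 3.0 or later and that SRP settings are stored under the registry key shown; the function names and the list of user-writable markers are hypothetical.

```powershell
# Added example: read-only audit of the SRP settings applied to this machine.
# Assumption: SRP policy is stored under the key below. Function names and the
# $WritableHints heuristic are hypothetical.
$SrpRoot = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\Safer\CodeIdentifiers'
$Levels  = @{ 0 = 'Disallowed'; 131072 = 'Basic User'; 262144 = 'Unrestricted' }
# Markers for locations a standard user can usually write to (heuristic only).
$WritableHints = '%TEMP%', '%TMP%', '%APPDATA%', '%LOCALAPPDATA%', '%USERPROFILE%', '\Users\'

function Get-SrpSummary {
    # Returns $null when no SRP policy is present on this machine.
    if (-not (Test-Path -Path $SrpRoot)) { return $null }
    $p = Get-ItemProperty -Path $SrpRoot
    if ($null -eq $p.DefaultLevel) { return $null }
    [pscustomobject]@{
        DefaultLevel    = $Levels[[int]$p.DefaultLevel]
        WhitelistMode   = ([int]$p.DefaultLevel -eq 0)        # default-deny
        ChecksLibraries = ([int]$p.TransparentEnabled -eq 2)  # DLLs are also checked
        AppliesToAdmins = ([int]$p.PolicyScope -eq 0)         # 1 = local admins exempt
    }
}

function Get-SrpPathRule {
    # Lists every path rule together with the security level it assigns.
    foreach ($level in $Levels.Keys) {
        $pathsKey = Join-Path $SrpRoot "$level\Paths"
        if (-not (Test-Path -Path $pathsKey)) { continue }
        foreach ($rule in Get-ChildItem -Path $pathsKey) {
            [pscustomobject]@{
                Level       = $Levels[$level]
                # Keep %VARIABLES% unexpanded so the rule reads as it was written.
                Path        = $rule.GetValue('ItemData', $null,
                                  [Microsoft.Win32.RegistryValueOptions]::DoNotExpandEnvironmentNames)
                Description = $rule.GetValue('Description')
            }
        }
    }
}

function Find-WritableAllowRule {
    # An allow rule over a user-writable folder lets any dropped file run,
    # which defeats the white-list.
    Get-SrpPathRule | Where-Object {
        $rule = $_
        $rule.Level -eq 'Unrestricted' -and
        ($WritableHints | Where-Object { $rule.Path -like "*$_*" })
    }
}

$summary = Get-SrpSummary
if ($null -eq $summary) {
    Write-Warning 'No SRP policy found on this machine.'
}
else {
    $summary | Format-List
    if (-not $summary.WhitelistMode) {
        Write-Warning 'Default level is not Disallowed: SRP is not in white-listing mode.'
    }
    Get-SrpPathRule | Sort-Object Level, Path | Format-Table -AutoSize
    Find-WritableAllowRule | ForEach-Object {
        Write-Warning "Allow rule covers a user-writable location: $($_.Path)"
    }
}
```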

OS X Server Caching

We’ve all been there: Apple releases a new iOS update and everyone is going ham. Pretty soon you have a few dozen employees leveraging the internet to get their latest fix. These updates aren’t small, and the impact they will have for all the other users isn’t small either. How do we allow users to update their devices without dragging the corporate network down?
By using a caching service. Store all of your updates and apps, iOS or Mac, on a local server and serve them up internally.
All Apple devices are built to search for a local server with the ‘Caching Service’ enabled before stepping outside the network. A device will only need to download from Apple once before the caching service makes it available, locally, to all other requesters. No need to sweat the next iOS update.

Microsoft on Upcoming SQL Server 2016; Goes After Oracle

Data professionals might have been expecting a launch date for SQL Server 2016 at the Data Driven event held today in New York City, but what they got was a recap of the flagship database system’s capabilities and a full-out assault on rival Oracle Corp.

Exec Judson Althoff detailed a SQL Server 2016/Oracle comparison involving a scenario where various capabilities built into SQL Server 2016 were matched up against the Oracle database. “When we say everything’s built in, everything’s built in,” he said. When the built-in capabilities were pitted against similar functionality offered by Oracle products, “Oracle is nearly 12 times more expensive,” he said.

That specific scenario was envisioned with a project starting from scratch. Althoff said not everybody does that, as they have invested in “other technologies.”

Free Licenses for Oracle Switchers
“So if you are willing to migrate off of Oracle, we will actually give you free SQL Server licenses to do so,” Althoff said in his presentation. “For every instance of Oracle you have, free SQL Server licenses. All you have to do is have a Software Assurance agreement with Microsoft. If you’re willing to take this journey with us before the end of June, we’ll actually help and invest in the migration costs, put engineers on the ground to help you migrate off of Oracle.”

He noted that in the wake of some newspaper ads about the offer, he received e-mails asking just who was eligible. “Everyone is eligible for this,” Althoff said. “We’re super excited to help you migrate off of Oracle technology, lower your overall data processing costs and actually really be enabled and empowered to build the data estate that we’ve been talking about.”

More details on the offer were unveiled in a “Break free from Oracle” page on the Microsoft site. “This offer includes support services to kick-start your migration, and access to our SQL Server Essentials for the Oracle Database Administrator training,” the site says. “Dive into key features of SQL Server through hands-on labs and instructor-led demos, and learn how to deploy your applications — on-premises or in the cloud.”

Microsoft also went after Oracle on the security front, citing information published by the National Institute of Standards and Technology that lists databases and their vulnerabilities. On average, over the past few years, exec Joseph Sirosh said in his presentation, SQL Server was found to have 1/10th the vulnerabilities of Oracle.

Always Encrypted
Sirosh also highlighted new security capabilities of SQL Server 2016. “In SQL Server 2016, for the first time, you will hear about a capability that we call Always Encrypted,” he said. “This is about securing data all the way from the client, into the database and keeping it secure even when query processing is being done. At the database site, the data is never decrypted, even in memory, and you can still do queries over it.”

He explained that data is encrypted at the client, and sent to the database in its encrypted form, in which it remains even during query processing. No one can decrypt credit card data, for example, while it’s in the database, not even a DBA. “That’s what you want,” Sirosh said of the functionality enabled by homomorphic encryption.

During today’s event, Microsoft CEO Satya Nadella and other presenters focused on a series of customer success videos and live presentations, reflecting Nadella’s belief that Microsoft “shouldn’t have launch events, but customer success events.”

Those success stories leveraged new ground-breaking capabilities of SQL Server 2016, including in-memory performance across all workloads, mission-critical high availability, business intelligence (BI) and advanced analytics tools.

“We are building this broad, deep, digital data platform,” Nadella said. “This platform is going to help every business become a software business, a data business, an intelligence business. That’s our vision.”

Exec Scott Guthrie took the stage to discuss the new support for in-memory advanced analytics and noted that for these kinds of workloads, data pros can use the R programming language, which he described as the leading open source data science language in the industry. Coincidentally, Microsoft yesterday announced R Tools for Visual Studio for machine learning scenarios.

SQL Server on Linux
Providing one of the few real news announcements during the presentation, Guthrie also noted that a private preview of SQL Server on Linux is available today, following up on surprising news earlier in the week that SQL Server was being ported to the open source Linux OS, which is expected to be completed in mid-2017. Guthrie said that unexpected move was part of the company’s strategy of bringing its products and services to a broader set of users and “to meet customers where they’re at.”

Another focus of the event was the new “Stretch Database” capability, exemplifying SQL Server 2016’s close connection to the Microsoft Azure cloud.

“SQL Server is also perhaps the world’s first cloud-bound database,” Sirosh said. “That means we build the features of SQL Server in the cloud first, ship them with Azure SQL DB, and customers have been experiencing it for six to nine months and a very large number of queries have been run against them.”

Sirosh expounded more on this notion in a companion blog post published during the event. “We built SQL Server 2016 for this new world, and to help businesses get ahead of today’s disruptions,” he said. “It supports hybrid transactional/analytical processing, advanced analytics and machine learning, mobile BI, data integration, always encrypted query processing capabilities and in-memory transactions with persistence. It is also perhaps the world’s only relational database to be ‘born cloud-first,’ with the majority of features first deployed and tested in Azure, across 22 global datacenters and billions of requests per day. It is customer tested and battle ready.”

Stretch Database
Features shipped with SQL Server, Sirosh said, “allow you to have wonderful hybrid capabilities, allowing your workload to span both on-premises and the cloud. So Stretch Database is one of them. Data in a SQL Server, cold data, can be seamlessly migrated into databases in the cloud. So you have in effect a database of very large capacity, but it’s always queryable. It’s not just a backup. That data that’s migrated over is living in a database in the cloud, and when you issue queries to the on-premises database, that query is just transported to the cloud and the data comes back — perhaps a little slower, but all your data is still queryable.”

The new capabilities for querying data of all kinds in various stages and forms were a focal point for Sirosh.

“We have brought the ability to analyze data at incredible speed into the transactional database so you can do not only mission-critical transactional processing, but mission-critical analytic processing as well,” Sirosh said. “It is the database for building mission-critical intelligent applications without extracting and moving the data, and all the slowness that comes with doing so. So you can now build real-time applications that have sophisticated analytical intelligence behind them. That is the one thing that I would love all of you to take away from this presentation.”

On-Demand Videos for More
At the Data Driven Web site, Microsoft has provided a comprehensive series of videos that explore various separate aspects of SQL Server, with topics ranging from “AlwaysOn Availability Groups enhancements in SQL Server 2016” to others on R services, in-memory OLTP, PolyBase, the Stretch Database, Always Encrypted and many more.

Still, some attendees — virtual or otherwise — were disappointed by the lack of real significant news.

“Did this whole thing just finish without so much as a release date?” asked one viewer in a Tweet. “Sigh.”

 

 

Source: https://adtmag.com/Articles/2016/03/10/sql-server-2016.aspx

Windows 10 Upgrade Path

Now that Windows 10 Version 1511 (first major patch) is out, we can look at potential upgrade paths for the OS. For those of you who didn’t know, this version allows the use of keys from Windows 7/8 during the installation of Windows 10.
