8 Trends Shaping Technical Support Services in 2016

Technical support is heavily shaped by technology. Rapid technological advances have created several paradigm shifts, and in recent years technical support services have changed significantly to keep pace with ever-diversifying technology trends, not just for business clients but also for consumers. From “bring your own device” (BYOD) to the widespread use of smartphones and tablets, these massive changes in behavior have affected the way technical support is delivered.

In 2016, these trends will continue to make life fascinating (and potentially troublesome) for tech support groups. You can expect the trends listed below to continue at least for the rest of the year.

1. Multi-Vendor Calls.

There was a time when a tech company usually handled only its own products, though a few companies offered multi-vendor customer support as part of their service offerings. Today, whether supporting hardware or software, technical support personnel are expected to assist with cross-platform issues: supporting apps on Android, troubleshooting iPhones, and knowing multiple operating systems such as OS X and Linux.

2. BYOD Calls.

With business apps now available on smartphones, BYOD has become standard in the office. The prevalence of these devices has made it necessary for tech personnel to be trained on both iOS and Android mobile devices.

3. Chat Support.

Phone support has always been the go-to channel for companies. However, the 1-800 number is no longer the front line of customer help services. Technical support via chat, whether on the company web page, on Skype, or in other chat and VoIP apps, has been increasing steadily.

4. Support for the Cloud.

Software as a Service (SaaS) and, subsequently, Retail as a Service (RaaS) have proven that the cloud is the next great platform. With software and sales systems moving to the cloud, support services have to follow. This is probably a good thing, because cloud-based systems are more easily accessed and integrated.

5. Social Media as a Service channel.

People are taking their questions to social media, whether the feedback is positive or negative. Although it has been recommended that companies not handle support requests on Facebook, there is simply no way to stem the tide. Solving issues correctly and quickly has become the only way to keep support issues from blowing up on social media, which can be devastating for brand image. If a company displays great service on social media, expect more people to give it likes on Facebook, Yelp, and other social media sites.

6. First Aid Support with Siri.

Not just Siri on Apple iOS, but also Google Now, Microsoft Cortana, and Amazon’s Echo: these talking digital assistants will only become more relevant. People have started talking to their smartphones, and the smartphones have been remarkably good at providing answers. Expect them to be at the front line of support very soon.

7. DIY Videos.

There are tons of do-it-yourself videos on YouTube, and for some people this is where they get their information on how to fix things quickly. Technical support services have not yet fully utilized this potential channel, but they are getting there. Posting tutorials and other helpful videos on YouTube will continue to be a trend. These don’t usually cost much, and they bring in viewership, which also helps create a larger SEO footprint for companies.

8. Customer Service as a Priority.

Customer support has traditionally been treated as a maintenance service and an afterthought to sales. Big IT corporations like IBM know that technical support is a profit center and that better service results in more people buying a product. Great customer service is now becoming a selling point for products.

Effects and Taking Advantage of these Trends

The mentioned trends are just some of the major changes that directly affect the way technical support services are delivered.

The essence of these changes is immediate support over multiple channels. Each customer is unique, and this will be evident in the type of support they need. Companies that can address this issue or use these trends to their advantage will come out on top, providing the best possible support and assistance to their customers.

Technical support services have been stepping up to match the needs of their clients. With these continuing trends, the tools for better service have been upgraded—leading to better customer service.

reposted from : http://www.misgl.com/blog/8-trends-shaping-tech-support-in-2016/

802.11AC Wave2

When 802.11AC was introduced, we found it to be the most amazing thing since sliced bread.  It was dramatically faster than 802.11N, backwards compatible, and mostly interference free.  Today, we will discuss what 802.11AC-Wave2 brings to the table and how we feel about it.

802.11AC-Wave2 – Features and Enhancements

  • Supports speeds up to 2.34 Gbps (more spatial streams) 

Pros:  Compared to 802.11AC-Wave1, which tops out at 1.3 Gbps, the new standard offers much higher throughput capacity. This is due to an increase in spatial streams: 802.11AC-Wave2 moves from 3 spatial streams to 4, which translates to a 33% increase in throughput.
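To see where the 33% figure comes from, here is a back-of-the-envelope PHY-rate calculation. The subcarrier count, modulation, and guard-interval values are the textbook 80 MHz / 256-QAM / short-GI assumptions behind the common datasheet numbers, not measurements:

```python
# Rough 802.11ac PHY rate: streams * data_subcarriers * bits_per_symbol
# * coding_rate / symbol_duration. Defaults assume an 80 MHz channel,
# 256-QAM 5/6 coding, short guard interval (3.6 us symbol time).

def phy_rate_gbps(spatial_streams, data_subcarriers=234,
                  bits_per_symbol=8, coding_rate=5/6,
                  symbol_time_us=3.6):
    bits_per_symbol_per_stream = data_subcarriers * bits_per_symbol * coding_rate
    # bits / microsecond = Mbps; divide by 1000 for Gbps
    return spatial_streams * bits_per_symbol_per_stream / symbol_time_us / 1000

wave1 = phy_rate_gbps(3)   # ~1.3 Gbps, the Wave 1 figure above
wave2 = phy_rate_gbps(4)   # ~1.73 Gbps with the 4th stream
print(round(wave1, 2), round(wave2, 2), round(wave2 / wave1 - 1, 2))
```

Note that vendors’ 2.34 Gbps headline figure folds in additional factors (wider channels or dual-band aggregation); the 33% stream gain is what the formula above isolates.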

Cons:  As with 802.11AC-Wave1, most client devices implement only one or two spatial streams to save on power and the space required for additional antennas.  Additionally, upgrades to the current network switching infrastructure are required to take full advantage of the ~2 Gbps throughput.  A high signal-to-noise ratio, and usually line of sight, is also required.

  • Supports multiuser multiple input, multiple output (MU-MIMO)

Pros:  First, we must understand that each additional spatial stream gains us additional throughput.  Unfortunately, 802.11AC-Wave1 spatial streams are transmitted over multiple antennas to only ONE client at a time. Let’s imagine a freeway with multiple lanes, which can accommodate all sorts of cars, even the “wide load” ones that carry mobile homes and big construction equipment.  Now, imagine only one car can move at a time on this freeway.  Most cars (tablets, phones, etc.) only require the use of a single lane, yet they have all the lanes available to them.  As you can see, this isn’t very efficient because the other lanes essentially go to waste.  MU-MIMO fixes this: it allows each stream (lane) to be directed to a different one-stream client simultaneously, so potentially three clients get serviced in the time it previously took to service one.  Qualcomm claims a 2x-2.5x performance improvement.

Cons:  Unfortunately, this feature is only available downstream.  Using the car analogy, imagine the efficient freeway is only available northbound, not southbound.  This is also a fairly complex and new technology, and its stability in real-world environments has not yet been verified.

  • Offers the option of using 160-MHz-wide channels for greater performance

Pros:  Channel bonding is the single biggest performance multiplier; it is the foundation for vendors’ claims of 1.3 Gbps for Wave 1 and of 2.3 Gbps up to 6.7 Gbps for Wave 2.  To accomplish this, 802.11AC-Wave1 allows FOUR 20 MHz channels to be bonded into a single 80 MHz channel. 802.11AC-Wave2 builds on this to allow up to 160 MHz (either contiguous channels or a non-contiguous 80 + 80 configuration).

Cons:  With just a SINGLE contiguous 160 MHz channel available, this capability is more useful in a point-to-point configuration than in a corporate wireless network.  Corporate networks require dense deployments, which would cause performance-degrading co-channel interference.  In practice, this means that when a nearby cell is using the channel, the channel becomes busy for other cells on the same channel. Additionally, “nearby” does not mean only neighboring cells: due to the nature of the Wi-Fi channel access method, CSMA/CA, cells at a one- to three-cell distance may also keep the channel reserved.

So, with everything said and done, what is NetCal’s stance on 802.11AC-Wave2?  To keep things simple, we recommend upgrading to 802.11AC-Wave1 90% of the time to save money on client device, equipment, and infrastructure upgrade costs.  As you can guess from the information above, the improvements only show up in very limited circumstances (backhauling, standalone deployments, close range, line of sight, mesh nodes, and point-to-point/point-to-multipoint bridges).

The Rise in Crypto Ransomware

In recent years, we have seen significant growth in malware.  With enablers such as Bitcoin, RSA 2048-bit encryption, and the Tor network, NetCal predicts a continued, significant rise in crypto ransomware.  The purpose of these malicious applications is morphing as we speak.  Originally, the goal was to gain access to computers and steal data (i.e., spying/snooping); then it was ad clicks from popups.  Now, malware has taken on the purpose of extorting money directly from users themselves.  Although this shouldn’t surprise anyone, the tools mentioned above make success a lot easier to achieve.

Most crypto ransomware uses the following tactics:

  1. Use social engineering to induce a user to run an application or script
  2. Avoid detection by:
    1. Encrypting/encoding its payload (e.g., Base64)
    2. Using a Domain Generation Algorithm (DGA)
    3. Using the Tor network
    4. Using Bitcoin and a money-laundering network
  3. Use the registry to reinfect after reboot
    1. Subkey names containing 0x06 and 0x08 bytes (hidden from regedit)
  4. Disable System Restore and VSS-type services
  5. Encrypt all user-created files by extension, share, or folder
  6. Use an existing or 0-day exploit/vulnerability
    1. Hijack CLSIDs
      For example, {AB8902B4-09CA-4BB6-B78D-A8F59079A8D5} causes any file in the LocalServer32 subkey to be run any time a folder is opened. By hijacking this CLSID, Poweliks is able to ensure that its registry entry will be launched any time a folder is opened or new thumbnails are created, even if the Watchdog process has been terminated.
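To illustrate tactic 2.2 above, here is a toy Domain Generation Algorithm: the malware and its operator derive the same daily domain list from a shared seed, so blocklisting yesterday’s domains accomplishes nothing. This is purely illustrative; real DGAs vary widely in construction:

```python
# Toy DGA sketch: deterministic domains derived from a seed and a date.
import hashlib

def generate_domains(seed, date_str, count=5, tld=".com"):
    domains = []
    for i in range(count):
        digest = hashlib.md5(f"{seed}-{date_str}-{i}".encode()).hexdigest()
        # Keep only letters from the hex digest to form a lowercase label
        label = "".join(c for c in digest if c.isalpha())[:12]
        domains.append(label + tld)
    return domains

# Both ends compute today's rendezvous domains independently
print(generate_domains("botnet-xyz", "2016-03-15"))
```

Defenders counter this by detecting the pattern (high volumes of NXDOMAIN lookups for random-looking names) rather than the individual domains.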

10 Prevention Tips:

  1. Back up your data
  2. Patch and keep software up to date
  3. Run a reputable AV solution (Webroot, ESET, etc.)
  4. Train your users
  5. Filter executable attachments at the email gateway
  6. Disable files running from AppData/LocalAppData folders (Group Policies)
  7. Do not give users local admin privileges
  8. Limit end-user access to mapped drives
  9. Use a popup blocker
  10. Show hidden file extensions
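Tips 5 and 10 work together: ransomware droppers routinely hide behind double extensions like “invoice.pdf.exe”. A minimal sketch of gateway-side extension filtering is below; the blocklist is our assumption, and a production gateway would inspect file content, not just names:

```python
# Flag attachments whose effective extension is executable, including
# double-extension tricks and trailing-dot evasion.
BLOCKED = {".exe", ".scr", ".js", ".vbs", ".bat", ".cmd", ".ps1", ".jar"}

def is_blocked(filename):
    name = filename.lower().rstrip(". ")   # strip trailing dots/spaces
    return any(name.endswith(ext) for ext in BLOCKED)

for f in ["report.docx", "invoice.pdf.exe", "photo.jpg", "update.js"]:
    print(f, "->", "BLOCK" if is_blocked(f) else "allow")
```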

Windows 10 – To Upgrade or Not to Upgrade

Since last year, we’ve been telling our clients to hold off on upgrading.  We even used Group Policy and our management agents to disable the upgrade patch.  It’s been a long and treacherous journey, but we finally believe Windows 10 is ready for prime time.  We’ve even seen it increase performance on some older machines.  We now recommend that our clients upgrade to Windows 10 to take advantage of the free licensing and extended support for the OS.  With all the major bugs fixed, we’re confident you will find it stable and useful.  Applications are also compatible more often than not.  In fact, all of NetCal’s employees are now on Windows 10.  We did all the testing so our clients don’t have to worry.

Contact us so we can evaluate your environment.

Q: If I upgrade, can I use Windows 7/8/8.1 again?

A: You can always reinstall using existing media, or downgrade using the built-in Windows 10 recovery process (which only works for one month after the upgrade).

Q: What if I don’t upgrade in time?  How much would a Windows 10 license cost then?

A: Although Microsoft has been rather vague thus far, the general consensus is that the license will cost $120 for Windows 10 Home and $200 for Windows 10 Pro.

Q: How would I upgrade after the expiration date?

A: For those who fail to upgrade in time or simply choose not to, Windows 10 can be purchased via the Microsoft Store or through retail partners.

Q: If I need to reinstall Windows 10, what key can I use?

A: All Windows 7 and Windows 8/8.1 keys will work with the latest Windows 10 installation media.

Q: If I upgrade, will I be charged a subscription service fee after that?

A: According to Microsoft, if you upgrade before July 29th, Windows 10 will continue to be free and supported for the rest of the life of the device.  This is also similar to how your OEM Windows licenses work.

Save backup storage using Veeam Backup & Replication BitLooker

Introduction

When you need to back up large amounts of data, you want to use up as little disk space as possible in order to minimize backup storage costs. However, with host-based image-level backups, traditional technologies force you to back up the entire virtual machine (VM) image, which presents multiple challenges that were never problems for classic agent-based backups.

For example, during backup analysis with Veeam ONE, you might notice that some VM backups are larger than the actual disk space used in the guest OS, resulting in higher-than-planned backup repository consumption. Most commonly, this phenomenon is observed on file servers and other systems where a lot of data is deleted without being replaced by new data.

Another big consumer of repository disk space is useless files. While you might not need to back up the data stored in certain files or directories in the first place, image-level backups force you to do so.

“Deleted” does not necessarily mean actually deleted

It is widely known that in the vast majority of modern file systems, deleted files do not disappear from the hard drive completely. The file is only flagged as deleted in the file system’s allocation structures (e.g., the master file table (MFT) in the case of NTFS), but its data continues to exist on the hard drive until it is overwritten by a new file. This is exactly what makes tools like Undelete possible. To reset the content of those blocks, you have to use a tool like SDelete from Windows Sysinternals, which overwrites the content of blocks belonging to deleted files with zeroes. Most backup solutions will then dedupe and/or compress these zeroed blocks so they take no extra disk space in the backup. However, running SDelete periodically on all your VMs is time consuming and hardly doable when you have hundreds of VMs, so most users simply don’t, allowing blocks belonging to deleted files to remain in the backup.
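The reason zeroing helps is easy to demonstrate: a zero-filled block compresses (and dedupes) to almost nothing, while leftover “deleted” data is effectively incompressible. A quick sketch using stock zlib compression as a stand-in for whatever the backup engine uses:

```python
# Compare compressed sizes of a zeroed block vs. residual "deleted" data.
import os
import zlib

BLOCK = 1024 * 1024                 # 1 MiB disk block
zeroed = bytes(BLOCK)               # block after SDelete zeroing
residual = os.urandom(BLOCK)        # stand-in for leftover file data

print(len(zlib.compress(zeroed)))   # a few KB at most
print(len(zlib.compress(residual))) # roughly the full 1 MiB
```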

Another drawback of using SDelete is that it will inflate thin-provisioned virtual disks and will require you to use technologies such as VMware Storage vMotion to deflate them after SDelete processing. See VMware KB 2004155 for more information.

Finally, these tools must be used with caution. Because SDelete creates a very big zeroed file, you have to be careful not to affect other production applications on the processed server because that file is temporarily consuming all available free disk space on the volume.

Not backing up useless files in the first place

It goes without saying that there are certain files and directories you don’t want to back up at all (e.g., application logs, application caches, temporary export files, or user directories with personal files). There might also be data protection regulations in place that actually require you to exclude specific objects from backup. However, until now, the only way for most VM backup solutions to filter out useless data was to manually move it on every VM to dedicated virtual drives (VMDK/VHDX) and exclude those drives from processing. Because this approach simply isn’t feasible to maintain in large environments with dozens of new VMs appearing daily, most users accepted backing up useless data with image-based backups as a fact of life.

Meet Veeam BitLooker

Veeam BitLooker is patent-pending data reduction technology from Veeam that enables the efficient, fully automated exclusion of deleted file blocks and useless files, saving a considerable amount of backup storage and network bandwidth and further reducing costs.

The first part of BitLooker was introduced in Veeam Backup & Replication a few years ago and enabled the exclusion of swap file blocks from processing. Considering that each VM creates a swap file that is usually at least 2 GB in size and changes daily, this is a considerable amount of data that noticeably affects full and incremental backup sizes. BitLooker automatically detects the swap file location and determines the blocks backing it in the corresponding VMDK. These blocks are then automatically excluded from processing, replaced with zeroed blocks in the target image, and neither stored in the backup file nor transferred to a replica image. The resulting savings are easy to see!
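Conceptually, the mechanism is a block map plus a substitution pass: blocks backing an excluded file are replaced with zeros before the image is deduped and compressed. The sketch below is our simplified model of the idea, not Veeam’s implementation; the block map is invented:

```python
# Replace blocks backing excluded files (e.g. the swap file) with zeros.
def exclude_blocks(disk_blocks, excluded_indices, block_size=4096):
    zero = bytes(block_size)
    return [zero if i in excluded_indices else b
            for i, b in enumerate(disk_blocks)]

# Toy 4-block "disk" where blocks 1 and 2 back the swap file
disk = [bytes([n]) * 4096 for n in (1, 2, 3, 4)]
cleaned = exclude_blocks(disk, excluded_indices={1, 2})
print([b[0] for b in cleaned])  # [1, 0, 0, 4]
```

Because the zeroed blocks compress to almost nothing, the excluded data costs essentially zero space in the resulting backup file.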

BitLooker in v9

In Veeam Backup & Replication v9, BitLooker’s capabilities have been extended considerably to further improve data reduction ratios. BitLooker now has three distinct capabilities:

  • Excluding swap and hibernation files blocks
  • Excluding deleted files blocks
  • Excluding user-specified files and folders

In v9, BitLooker supports NTFS-formatted volumes only. Most of BitLooker’s functionality is available in the Veeam Backup & Replication Standard edition; however, excluding user-specified files and folders requires at least the Enterprise edition.

Configuring BitLooker

There are a few options for controlling BitLooker in v9. You can find the first two in the advanced settings of each backup and replication job.

Note that the option to exclude swap file blocks was available in previous product versions, but it was enhanced in v9 to also exclude hibernation files.

Now, there is a new option that enables the exclusion of deleted file blocks.

Users upgrading from previous versions will note that, by default, deleted file blocks exclusion remains disabled for existing jobs after upgrading, so it doesn’t alter their existing behavior. You can enable it manually for individual jobs, or automatically for all existing jobs with this PowerShell script.

In most cases, you should only expect to see minor backup file size reduction after enabling deleted file blocks exclusion. This is because in the majority of server workloads, data is never simply deleted, but rather always overwritten with new data. More often than not, it is replaced with more data than what was deleted, which is the very reason the world’s data almost doubles every 2 years. However, in certain scenarios (such as those involving data migrations), the gains can be quite dramatic.

Finally, in v9, BitLooker also lets you configure the exclusion of specific files and folders for each backup job. Unlike the previous options, this functionality is part of the application-aware guest processing logic, and exclusions can only be performed on a running VM. Correspondingly, you can find the file exclusion settings in the advanced settings of the guest processing step of the job wizard, where you have the option to either exclude specific file system objects or, conversely, back up nothing but specific objects.

When using this functionality, keep in mind that it increases both VM processing time and the data mover’s memory consumption, depending on the number of excluded files. For example, if processing exclusions for 10,000 files takes less than 10 seconds and requires just 50 MB of extra RAM, excluding 100,000 files takes 2 minutes and requires almost 400 MB of extra RAM.
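Taking those two data points at face value, a simple linear interpolation gives rough planning numbers for in-between file counts. This is our extrapolation for illustration, not Veeam guidance:

```python
# Rough cost estimate for file exclusions, interpolated between the
# article's two data points: (10,000 files, 10 s, 50 MB) and
# (100,000 files, 120 s, 400 MB).
def estimate(n_files):
    frac = (n_files - 10_000) / (100_000 - 10_000)
    seconds = 10 + frac * (120 - 10)
    ram_mb = 50 + frac * (400 - 50)
    return round(seconds), round(ram_mb)

print(estimate(50_000))  # mid-range planning figure
```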

Summary

Veeam BitLooker offers users the possibility to further reduce backup storage and network bandwidth consumption without incurring additional costs. Enabling this functionality takes just a few clicks, and the data reduction benefits can be enjoyed in the immediate backup or replication job run.

What results are you seeing after enabling BitLooker in v9? Please share your numbers in the comments!


Re-posted from : https://www.veeam.com/blog/save-backup-storage-using-veeam-backup-replication-bitlooker.html

Microsoft on Upcoming SQL Server 2016; Goes After Oracle

Data professionals might have been expecting a launch date for SQL Server 2016 at the Data Driven event held today in New York City, but what they got was a recap of the flagship database system’s capabilities and a full-out assault on rival Oracle Corp.

Exec Judson Althoff detailed a SQL Server 2016/Oracle comparison involving a scenario where various capabilities built into SQL Server 2016 were matched up against the Oracle database. “When we say everything’s built in, everything’s built in,” he said. When the built-in capabilities were pitted against similar functionality offered by Oracle products, “Oracle is nearly 12 times more expensive,” he said.

That specific scenario was envisioned with a project starting from scratch. Althoff said not everybody does that, as they have invested in “other technologies.”

Free Licenses for Oracle Switchers
“So if you are willing to migrate off of Oracle, we will actually give you free SQL Server licenses to do so,” Althoff said in his presentation. “For every instance of Oracle you have, free SQL Server licenses. All you have to do is have a Software Assurance agreement with Microsoft. If you’re willing to take this journey with us before the end of June, we’ll actually help and invest in the migration costs, put engineers on the ground to help you migrate off of Oracle.”

 He noted that in the wake of some newspaper ads about the offer, he received e-mails asking just who was eligible. “Everyone is eligible for this,” Althoff said. “We’re super excited to help you migrate off of Oracle technology, lower your overall data processing costs and actually really be enabled and empowered to build the data estate that we’ve been talking about.”

More details on the offer were unveiled on a “Break free from Oracle” page on the Microsoft site. “This offer includes support services to kick-start your migration, and access to our SQL Server Essentials for the Oracle Database Administrator training,” the site says. “Dive into key features of SQL Server through hands-on labs and instructor-led demos, and learn how to deploy your applications — on-premises or in the cloud.”

Microsoft also went after Oracle on the security front, citing information published by the National Institute of Standards and Technology that lists databases and their vulnerabilities. On average, over the past few years, exec Joseph Sirosh said in his presentation, SQL Server was found to have 1/10th the vulnerabilities of Oracle.

Always Encrypted
Sirosh also highlighted new security capabilities of SQL Server 2016. “In SQL Server 2016, for the first time, you will hear about a capability that we call Always Encrypted,” he said. “This is about securing data all the way from the client, into the database and keeping it secure even when query processing is being done. At the database site, the data is never decrypted, even in memory, and you can still do queries over it.”

He explained that data is encrypted at the client, and sent to the database in its encrypted form, in which it remains even during query processing. No one can decrypt credit card data, for example, while it’s in the database, not even a DBA. “That’s what you want,” Sirosh said of the functionality enabled by homomorphic encryption.
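The core idea, that the server can answer equality queries over values it can never decrypt, can be sketched in a few lines. Always Encrypted’s deterministic mode uses AES with client-held keys; the keyed hash below is only a conceptual stand-in to show why equal plaintexts remain matchable:

```python
# Conceptual sketch: client-side deterministic encryption lets the
# server index and compare ciphertexts without ever seeing plaintext.
import hashlib
import hmac

CLIENT_KEY = b"client-side-key-never-leaves-client"  # illustrative key

def det_encrypt(value: str) -> str:
    return hmac.new(CLIENT_KEY, value.encode(), hashlib.sha256).hexdigest()

# The "server" stores only ciphertexts
table = {det_encrypt("4111-1111-1111-1111"): "order #1"}

# The client sends an encrypted search term; the server matches it
# against stored ciphertexts without decrypting anything.
query = det_encrypt("4111-1111-1111-1111")
print(table.get(query))  # order #1
```

The trade-off, which applies to Always Encrypted’s deterministic mode too, is that equal plaintexts produce equal ciphertexts, so only columns queried by equality should use it.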

During today’s event, Microsoft CEO Satya Nadella and other presenters focused on a series of customer success videos and live presentations, reflecting Nadella’s belief that Microsoft “shouldn’t have launch events, but customer success events.”

Those success stories leveraged new ground-breaking capabilities of SQL Server 2016, including in-memory performance across all workloads, mission-critical high availability, business intelligence (BI) and advanced analytics tools.

“We are building this broad, deep, digital data platform,” Nadella said. “This platform is going to help every business become a software business, a data business, an intelligence business. That’s our vision.”

Exec Scott Guthrie took the stage to discuss the new support for in-memory advanced analytics and noted that for these kinds of workloads, data pros can use the R programming language, which he described as the leading open source data science language in the industry. Coincidentally, Microsoft yesterday announced R Tools for Visual Studio for machine learning scenarios.

SQL Server on Linux
Providing one of the few real news announcements of the presentation, Guthrie also noted that a private preview of SQL Server on Linux is available today, following up on the surprising news earlier in the week that SQL Server is being ported to the open source Linux OS, with the port expected to be completed in mid-2017. Guthrie said the unexpected move is part of the company’s strategy of bringing its products and services to a broader set of users and “to meet customers where they’re at.”

Another focus of the event was the new “Stretch Database” capability, exemplifying SQL Server 2016’s close connection to the Microsoft Azure cloud.

“SQL Server is also perhaps the world’s first cloud-bound database,” Sirosh said. “That means we build the features of SQL Server in the cloud first, ship them with Azure SQL DB, and customers have been experiencing it for six to nine months and a very large number of queries have been run against them.”

Sirosh expounded more on this notion in a companion blog post published during the event. “We built SQL Server 2016 for this new world, and to help businesses get ahead of today’s disruptions,” he said. “It supports hybrid transactional/analytical processing, advanced analytics and machine learning, mobile BI, data integration, always encrypted query processing capabilities and in-memory transactions with persistence. It is also perhaps the world’s only relational database to be ‘born cloud-first,’ with the majority of features first deployed and tested in Azure, across 22 global datacenters and billions of requests per day. It is customer tested and battle ready.”

Stretch Database
Features shipped with SQL Server, Sirosh said, “allow you to have wonderful hybrid capabilities, allowing your workload to span both on-premises and the cloud. Stretch Database is one of them. Cold data in a SQL Server can be seamlessly migrated into databases in the cloud. So you have, in effect, a database of very large capacity, but it’s always queryable. It’s not just a backup. The data that’s migrated over lives in a database in the cloud, and when you issue queries to the on-premises database, that query is just transported to the cloud and the data comes back, perhaps a little slower, but all your data is still queryable.”

The new capabilities for querying data of all kinds in various stages and forms were a focal point for Sirosh.

“We have brought the ability to analyze data at incredible speed into the transactional database so you can do not only mission-critical transactional processing, but mission-critical analytic processing as well,” Sirosh said. “It is the database for building mission-critical intelligent applications without extracting and moving the data, and all the slowness that comes with doing so. So you can now build real-time applications that have sophisticated analytical intelligence behind them. That is the one thing that I would love all of you to take away from this presentation.”

 On-Demand Videos for More
At the Data Driven Web site, Microsoft has provided a comprehensive series of videos that explore various separate aspects of SQL Server, with topics ranging from “AlwaysOn Availability Groups enhancements in SQL Server 2016” to others on R services, in-memory OLTP, PolyBase, the Stretch Database, Always Encrypted and many more.

Still, some attendees — virtual or otherwise — were disappointed by the lack of truly significant news.

“Did this whole thing just finish without so much as a release date?” asked one viewer in a Tweet. “Sigh.”

Source : https://adtmag.com/Articles/2016/03/10/sql-server-2016.aspx