VMware vCenter Converter

VMware vCenter Converter: Tips and Best Practices

VMware vCenter Converter can convert Windows- and Linux-based physical machines, as well as Microsoft Hyper-V systems, into VMware virtual machines.

Here are some tips and suggested best practices.

Tasks to perform before conversion:

  • Make sure you know the local Administrator password! If the computer account gets locked out of the domain, you will likely need to log in locally to recover.
  • Ensure you are using the latest version of VMware vCenter Converter.
  • If possible, install VMware vCenter Converter locally on the source (physical machine) operating system.
  • Make a note of the source machine's IP addresses. The conversion will create a new NIC, and having those IP details handy will help (see the sketch after this list).
  • Disable any anti-virus software.
  • Disable SSL encryption – this should speed up the conversion (described here).
  • If you have stopped and disabled any services, make sure to take a note of their state beforehand. A simple screenshot goes a long way here!
  • If converting from Hyper-V to VMware, install the Converter on the Hyper-V host and power down the source virtual machine before starting the conversion.
  • Uninstall any hardware-specific software utilities from the source server.
  • If the source system has any redundant NICs, I would suggest removing them in the Edit screen of the Converter UI.
  • For existing NICs, use the VMXNET3 driver and set them to "not connected".
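If you prefer to script the note-taking, here is a minimal sketch (the file names and approach are my own illustration, not part of Converter; it assumes Python is available on the source machine – the same commands also work directly from an elevated prompt):

    # capture_state.py - run on the source machine before conversion
    import subprocess

    # 'ipconfig /all' captures addresses, masks, gateways and DNS servers
    ipconfig = subprocess.run(["ipconfig", "/all"],
                              capture_output=True, text=True).stdout
    with open("preconversion_ipconfig.txt", "w") as f:
        f.write(ipconfig)

    # 'sc query' lists every installed service and its state (RUNNING/STOPPED)
    services = subprocess.run(["sc", "query", "type=", "service", "state=", "all"],
                              capture_output=True, text=True).stdout
    with open("preconversion_services.txt", "w") as f:
        f.write(services)

    print("Saved IP and service state - copy these files off the source machine.")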

Special considerations for Domain Controllers, MS Exchange and SQL servers.

Although you tend to get warned off converting Domain Controllers, they do work OK if you take some sensible precautions:

  • Move FSMO roles to another Domain Controller
  • Make another Domain Controller the PDC
  • Stop Active Directory services
  • Stop the DHCP service (if applicable)
  • Stop the DNS service (if applicable)

For SQL and Exchange, you should stop and disable all Exchange and SQL services on the source machine and only start them back up on the target VM once you are happy the server is successfully back on the domain.
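If you want to script those service changes, a rough sketch follows. The service names (NTDS for Active Directory Domain Services, DHCPServer, DNS) are the standard Windows ones, but verify them on your own system, record each service's start type first, and be aware that dependent services may need stopping too:

    # stop_services.py - stop and disable services ahead of a P2V conversion
    import subprocess

    SERVICES = ["NTDS", "DHCPServer", "DNS"]  # AD DS, DHCP Server, DNS Server

    for svc in SERVICES:
        subprocess.run(["sc", "qc", svc], check=False)    # show config first
        subprocess.run(["sc", "stop", svc], check=False)  # stop the service
        subprocess.run(["sc", "config", svc, "start=", "disabled"], check=False)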

(Note: these steps are not necessary for V2V conversions, as you should have the source system powered off!)

________________________________________________

Tasks to perform after conversion:

  • Once the conversion has successfully completed, get the source physical machine off the network. You can disable the NIC, pull the cable and/or power it down. It should not come up again.
  • For a V2V conversion, delete the NIC from the system's hardware properties completely.
  • Once the physical machine is off the network, bring the virtual machine up (ensure the network is not connected initially).
  • Install VMware Tools and set the IP configuration that you noted during the pre-conversion steps (see the sketch after this list).
  • Shut down, connect the network, and bring your virtual system back up.
  • Uninstall VMware vCenter Converter from the newly converted virtual machine.
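To script the IP reconfiguration, a sketch like the following could apply the details you recorded earlier. The adapter name and addresses below are placeholders; substitute the values you noted (netsh is the standard Windows tool here, and the adapter name is assumed to contain no spaces):

    # set_ipconfig.py - run on the new VM after installing VMware Tools
    import subprocess

    ADAPTER = "Ethernet0"      # name of the new VMXNET3 adapter (assumption)
    IP      = "192.168.1.50"   # values noted during the pre-conversion steps
    MASK    = "255.255.255.0"
    GATEWAY = "192.168.1.1"
    DNS     = "192.168.1.10"

    subprocess.run(["netsh", "interface", "ipv4", "set", "address",
                    f"name={ADAPTER}", "static", IP, MASK, GATEWAY], check=True)
    subprocess.run(["netsh", "interface", "ipv4", "set", "dnsservers",
                    f"name={ADAPTER}", "static", DNS, "primary"], check=True)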

Special considerations for Domain Controllers, MS Exchange and SQL servers.

  • Create a test user on the DC and ensure it gets replicated to the other domain controllers.
  • Delete this test user and ensure the deletion replicates as well.
  • Create a test GPO and ensure it replicates across all domain controllers.
  • Check the System, Application and, importantly, the File Replication Service event logs to ensure that there are no issues with replication (see the sketch after this list).
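Alongside those manual checks, the standard AD health tools give a quick replication summary. A minimal wrapper (repadmin and dcdiag ship with the AD DS tools; this simply runs them and prints their output):

    # check_replication.py - quick AD replication health check
    import subprocess

    # Summarise replication status across all domain controllers
    subprocess.run(["repadmin", "/replsummary"], check=False)

    # Run the replication-focused DC diagnostics
    subprocess.run(["dcdiag", "/test:replications"], check=False)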


For SQL and Exchange: double-check that there are no trust issues on the virtual machine. Try connecting to the ADMIN$ share from multiple locations (a quick sketch follows below). If you do find the computer account locked out, taking the machine out of and back into the domain normally fixes it.
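A simple way to test the ADMIN$ share from another domain-joined machine is with 'net use'; the hostname below is a hypothetical placeholder for your newly converted VM:

    # admin_share_check.py - run from another domain-joined machine
    import subprocess

    HOST = "converted-server"  # hypothetical name of the new VM

    result = subprocess.run(["net", "use", rf"\\{HOST}\ADMIN$"],
                            capture_output=True, text=True)
    print(result.stdout or result.stderr)  # errors here hint at trust issues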

Once you are happy the machine is on your domain without any trust issues – restart and reconfigure the SQL/Exchange services to their original state.


Windows Server 2016

Windows Server 2016 docs are now on docs.microsoft.com

Microsoft have recently announced that their IT pro technical documentation for Windows Server 2016, Windows 10 and Windows 10 Mobile is now available at docs.microsoft.com.


Why move to docs.microsoft.com?

Well, here Microsoft promise:

“a crisp new responsive design that looks fantastic on your phone, tablet, and PC. But, more importantly, you’ll see new ways to engage with Microsoft and contribute to the larger IT pro community. From the ground up, docs.microsoft.com offers:

  • A more modern, community-oriented experience that’s open to your direct contribution and feedback.
  • Improved content discoverability and navigation, getting you to the content you need – fast.
  • In-article comments and inline feedback.
  • Downloadable PDF versions of key IT pro content collections and scenarios. To see this in action, browse to the recently released Performance Tuning Guidelines for Windows Server 2016 articles, and click Download PDF.
  • Active and ongoing site improvements, including new features, based on your direct feedback. Check out the November 2016 platform update post to see the latest features on docs.microsoft.com.”

How to contribute to IT pro content

Microsoft recognize that customers are eager to share best practices, optimizations, and samples with the larger IT pro community. Docs.microsoft.com makes contribution easy.

Community contributions are now open. Learn more about editing an existing IT pro article.

Windows 10

Proxmox

Open Source Hypervisors and Hyperconverged Environments

We recently started looking at some of the open source solutions, such as KVM/QEMU offered by Red Hat and Proxmox, to replace Microsoft Hyper-V and VMware vSphere. So far they appear fairly full-featured, especially for smaller environments, though they do fall short on enterprise features.

The performance and simplicity of these solutions were definitely appealing. Some of our staff were really into the Linux aspect of them, since the hypervisors have a full Linux shell. Controlling the environment easily from a CLI was definitely a plus, along with the familiar feel of log files and Linux kernel options.

Everything was promising until we got to the point of backing up multi-terabyte VM environments, where the flexibility offered by common tools wasn't working well enough for what we wanted to do. Products such as Veeam really do make it easy for even entry-level administrators to manage complex environments.

For now we’ll be sticking with the big boys and keeping a close eye on the developments of Change Block Tracking in libvirt and the user-space tools in the coming year.

Cloud services

5 Trends in Enterprise Cloud Services

Even though the future of IT belongs to the cloud, much of the enterprise world is still clinging to legacy systems. In fact, 90 percent of workloads today are completed outside of the public cloud. With this continued resistance to cloud services adoption at the enterprise level, today’s “cloud evangelists” are playing a more important role in the industry than ever before.

The role of a cloud evangelist sits somewhere between the duties of a product marketer and the company’s direct link to customers. These individuals are responsible for spreading the doctrine of cloud computing and convincing reluctant IT admins to make the jump to the cloud. It’s a dynamic role and nowhere is this more apparent than with the talented cloud evangelists at NTT Communications.

In a recent interview from NTT Communications, two of the company’s leading cloud evangelists, Masayuki Hayashi and Wataru Katsurashima, sat down to chat about the current challenges slowing enterprise cloud migration and what companies can do to help mitigate those challenges. In this post, we take a look at the five areas that are top-of-mind for cloud evangelists today.

The Changing Role of Cloud Evangelists

While the role itself may be new, it is not insulated from change. When it was first established, cloud evangelists were responsible for ferrying customers through every stage of cloud adoption. From preliminary fact finding to architecting the final network, cloud evangelists played a prominent role.

Today, that hands-on approach is quickly changing. As Hayashi states in the article, “Evangelists have traditionally played the ‘forward’ position, but recently we are more like ‘midfielders’ who focus on passing the ball to others.” Rather than managing every task involved, evangelists are placing more focus on improving processes and strengthening organizational knowledge.

Slow Migration of On-Premise Systems to the Cloud

The pace of enterprises migrating on-premises datacenters to the cloud has been less than stellar in recent years. Cloud services migrations have remained relatively flat year-over-year, largely due to the complexities surrounding migration. Enterprises are struggling to manage their existing infrastructure while simultaneously bringing new cloud-based services online.

For evangelists, it’s important to consider this added complexity when proposing a migration plan. NTT Communications Chief Evangelist Wataru Katsurashima recommends finding specific solutions that can accommodate the unique needs of enterprise cloud migration such as hybrid cloud infrastructures that function like a single, public cloud environment.

High Migration Costs are Slowing Adoption

The added complexity of enterprise cloud migration also directly affects the price of migration, pushing it beyond the reach of some companies. The expensive price tag is forcing many enterprises to rethink their decision to migrate to the cloud, opting instead to delay the migration another year or to execute a much slower, gradual migration.

One discussed solution to this high cost is using existing services within the VMware vCloud® Air™ Network. According to Katsurashima, leveraging technology from NTT Communications and VMware in a synergistic way can dramatically cut costs through efficiencies and better utilization.

New Hybrid Cloud Structures on the Horizon

Like the role of cloud evangelists, the hybrid cloud, too, is changing. Hayashi explains:

“In the past, a hybrid cloud was similar to a Japanese hot spring inn, with a single hallway that leads from the hot spring to each guest room. NTT Communications aims to achieve a “modern hotel” model. In other words, there is one front desk and a variety of rooms with different purposes, such as extended-stay rooms for VIPs and rooms for business meetings.”

The hotel analogy provides us with a way to visualize the new features and capabilities that hybrid cloud environments must deliver. Since no single cloud service provider can be everything, establishing greater levels of collaboration and cross-pollination between service providers is critical to success.

cloud management

Redefining the Role of Information System Departments

The days of information system departments only responding to the demands of individual business departments are over. The IT teams that do not help business departments innovate from the inside-out will become increasingly obsolete.



This article was provided by our service partner: VMware

ransomware

Understanding Cyberattacks from WannaCrypt

Cyberattacks are growing more sophisticated, and more common, with every passing day. They present a continuous risk to the security of necessary data, and create an added layer of complication for technology solution providers and in-house IT departments trying to keep their clients’ systems safe and functional. Despite industry efforts and innovations, the world was exposed to this year’s latest cyberattack, ‘WannaCrypt’, on Friday morning.

The attack, stemming from the “WannaCrypt” software, started in the United Kingdom and Spain, and quickly spread globally, encrypting endpoint data and requiring a ransom to be paid in Bitcoin to regain access to the data. The WannaCrypt exploits used in the attack were derived from exploits stolen from the National Security Agency earlier this year.

On March 14, Microsoft® released a security update to patch this vulnerability and protect customers from this quickly spreading cyberattack. While this protected newer Windows™ systems and computers that had Windows Update enabled to apply this latest update, many computers remained unpatched globally. Hospitals, businesses, governments, and home computers were all affected. Microsoft took additional steps to assist users with older systems that are no longer supported.

Our goal at Netcal is to provide partners with the tools they need to support their clients and prevent these kinds of attacks from happening.

See how our vendors are addressing the latest attack:

-> Bitdefender
-> ESET
-> Webroot
-> Malwarebytes
-> VIPRE
-> Acronis
-> StorageCraft

server RAM

Choose the best server RAM configuration

Watch your machine memory configurations – always take care to implement the best server RAM configuration! You can’t just throw RAM at a physical server and expect it to work as well as it possibly can. Depending on your DIMM configuration, you might unwittingly slow down your memory speed, which will ultimately slow down your application servers. This speed decrease is virtually undetectable at the OS level, but anything that leverages lots of RAM to function, such as a database server, can take a substantial performance hit.

As an example, suppose you wish to configure 384GB of RAM on a new server that has 24 memory slots. You could populate each of the memory slots with 16GB sticks of memory to reach the 384GB total. Or, you could spend a bit more money on 32GB sticks of memory and only fill half of the memory slots. The outcome is the same amount of RAM; the price tag is slightly higher than for the relatively cheaper smaller sticks.

On this server, the fully populated 16GB DIMM configuration runs the memory at 1866 MHz, while filling only half the slots with 32GB sticks runs the memory at 2400 MHz – making the 16GB configuration roughly 22% slower than the higher-density option.
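The arithmetic behind that figure is easy to sanity-check (values taken from the example above):

    # DIMM speed comparison from the example above
    full_populate_mhz = 1866   # 24 x 16GB DIMMs, every slot filled
    half_populate_mhz = 2400   # 12 x 32GB DIMMs, half the slots filled

    slowdown = 1 - full_populate_mhz / half_populate_mhz
    print(f"Fully populated config is {slowdown:.0%} slower")  # ~22%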

Database servers, both physical and virtual, use memory as an I/O cache, improving the performance of the database engine by reducing the dependency on slower storage and leveraging the speed of RAM. If the memory is slower, your databases will perform worse. Validate your memory speed on your servers, both now and for upcoming hardware purchases. Ensure that your memory configuration yields the fastest possible performance – implement the best server RAM configuration – your applications will be better for it!

Cisco Umbrella

Healthcare industry embraces Cisco Umbrella

Healthcare industry expenditures on cloud computing are projected to grow at a compound annual rate of more than 20% through 2020. The industry has quickly transitioned from being hesitant about the cloud to embracing the technology for its overwhelming benefits.

George Washington University, a world-renowned research university, turned to Cisco Umbrella to protect its most important asset: its global reputation as a research leader.

“We chose Cisco Umbrella because it offered a really high level of protection for our various different user bases, with a really low level of interaction required to implement the solution, so we could start blocking attacks and begin saving IR analyst time immediately,” said Mike Glyer, Director, Enterprise Security & Architecture.


Customers love Umbrella because it is a cloud-delivered platform that protects users both on and off the network. It stops threats over all ports and protocols for the most comprehensive coverage. Plus, Umbrella’s powerful, effective security does not require the typical operational complexity. Because everything is performed in the cloud, there is no hardware to install and no software to manually update. The service is a scalable solution for large healthcare organizations with multiple locations, like The University of Kansas Hospital, ranked among the nation’s best hospitals every year since 2007 by U.S. News & World Report.

“Like every hospital, we prioritize the protection of sensitive patient data against malware and other threats. We have to safeguard all network-connected medical devices, as a compromise could literally result in a life-or-death situation,” says hospital Infrastructure Security Manager Henry Duong. “Unlike non-academic hospitals, however, our entwinement with medical school and research facility networks means we must also protect a lot of sensitive research data and intellectual property.”

Like many healthcare providers, The University of Kansas Hospital used to spend a lot of time combing through gigabytes of logs, trying to trace infections, identify points of origin, and determine which machines were calling out. The team turned to Cisco Umbrella for help.

“First we just pointed our external DNS requests to Cisco Umbrella’s global network, which netted enough information to prompt an instant ‘Wow, we have to have this!’ response,” Duong says. “When our Umbrella trial began, we saw an immediate return, which I was able to document using Umbrella reporting and share with executive stakeholders. Those numbers, which ultimately led to executive buy-in, spoke volumes about the instant effect Umbrella had on our network.”
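As a rough illustration of that first step, a short sketch using the third-party dnspython library can confirm that lookups resolve through Umbrella's public anycast resolvers (208.67.222.222 and 208.67.220.220, inherited from OpenDNS); the hostname is just an example, and this is my own sketch rather than Cisco tooling:

    # umbrella_dns_check.py - verify resolution via Umbrella's resolvers
    import dns.resolver  # pip install dnspython

    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["208.67.222.222", "208.67.220.220"]

    for record in resolver.resolve("example.com", "A"):
        print(record)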

This overwhelming success led the team to later purchase Umbrella Investigate.

“We suddenly went from struggling to track attacks to being able to correlate users with events and trace every click of their online travels. Then, Cisco Umbrella Investigate gave us the power to understand each threat’s entire story from start to finish,” Duong says. “We’re able to dig deep into the analysis to see what users are doing, where they’re going, and pinpoint any contributing behaviors so we can mitigate most efficiently.”

The University of Kansas Hospital estimates that with Cisco Umbrella they have:

  • Decreased threats by an estimated 99 percent
  • Shortened investigation time by 75 percent
  • Increased visibility and automation while reducing exposure to ransomware

This article was provided by our service partner: Cisco

Disaster Recovery

Improve your disaster recovery reliability with Veeam

The only two certainties in life are death and taxes. In IT, you can add disasters to this short list of life’s universal anxieties. Ensuring disaster recovery reliability is critical to your organisation’s enduring viability in your chosen marketplace.

Regardless of the size of your budget, people power and level of IT acumen, you will experience application downtime at some point. Amazon’s recent east coast outage is testimony to the fact that even the best and brightest occasionally stumble.

The irony is that while many organizations make significant investments in their disaster recovery (DR) capabilities, most have a mixed track record, at best, with meeting their recovery service level agreements (SLAs). As this chart from ESG illustrates, only 65% of business continuity (BC) and DR tests are deemed successful.

[Chart: ESG survey results on disaster recovery readiness]

In his report, “The Evolving Business Continuity and Disaster Recovery Landscape,” Jason Buffington broke down respondents to his DR survey into two camps: “green check markers” and “red x’ers.”

Citing his research, Jason recently shared with me: “Green Checkers assuredly don’t test as thoroughly, thus resulting in a higher passing rate during tests, but failures when they need it most — whereas Red X’ers are likely to get a lower passing rate (because they are intentionally looking for what can be improved), thereby assuring a more likely successful recovery when it really matters. One of the reasons for lighter testing is seeking the easy route — the other is the cumbersomeness of testing. If it wasn’t cumbersome, most of us would likely test more.”

DR testing can indeed be cumbersome. In addition to being time consuming, it can also be costly and fraught with risk. The risk of inadvertently taking down a production system during a DR drill is incentive enough to keep testing to a minimum.

But what if there was a cost-effective way to do DR testing that mitigates risk and dramatically reduces the preparation work and the time required to test the recoverability of critical application services?

By taking the risk, cost and hassle out of testing application recoverability, Veeam’s On-Demand Sandbox for Storage Snapshots feature is a great way for organizations to leverage their existing investments in NetApp, Nimble Storage, Dell EMC and Hewlett Packard Enterprise (HPE) Storage to attain the following three business benefits:

  1. Risk mitigation: Many IT decision makers have expressed concerns around their ability to meet end-user SLAs. By enabling organizations to rapidly spin up virtual test labs that are completely isolated from production, businesses can safely test their application recoverability and proactively address any of their DR vulnerabilities.
  2. Improved ROI: In addition to on-demand DR testing, Veeam can also be utilized to instantly stand up test/dev environments on a near real-time copy of production data to help accelerate application development cycles. This helps to improve time-to-market while delivering a higher return on your storage investments.
  3. Maintain compliance: Veeam’s integration with modern storage enables organizations to achieve recovery time and point objectives (RTPO) of under 15 minutes for all applications and data. Imagine showing your IT auditor in real-time how quickly you can recover critical business services. For many firms, this capability alone would pay for itself many times over.

Back when I was in school, 65% was considered a passing grade. In the business world, a 65% DR success grade is literally flirting with disaster. DR proficiency may require lots of practice but it also requires Availability software, like Veeam’s, that works hand-in-glove with your storage infrastructure to make application recoveries simpler, more predictable and less risky.


This article was provided by our service partner Veeam.

veeam

Veeam: Ransomware resiliency – The endpoint is a great place to start

Fighting ransomware has become a part of doing business today. Technology professionals around the world are advocating many ways to stay resilient. The most effective method is end-user training on how to handle attachments and Internet connectivity. Another area to look at is the most common endpoint devices: laptops and PCs.

Veeam has taken ransomware resiliency seriously for a while. We’ve put out a number of posts such as early tips for some of the first attacks and some practical tips when using Veeam Backup & Replication. Now with Veeam Agent for Linux and Veeam Endpoint Backup FREE available as well as Veeam Agent for Microsoft Windows (coming VERY soon) as options for laptops and PCs, it’s time to take ransomware resiliency seriously on these devices.

Before I go too far, it’s important to note that ransomware can exist on both Windows and Linux systems. Additionally, ransomware is not just a PC problem (see our recent survey blogpost); at Veeam we see it nearly every day in technical support for virtual machines. More content is coming on the virtual machine side of the resiliency approach; in this post I’ll focus on PCs and laptops.

Veeam Agent for Linux is the newest product with which Veeam offers image-based Availability for non-virtualized systems. It is a great way to back up many different Linux systems through a very intuitive user interface:

[Screenshot: Veeam Agent for Linux user interface]

For ransomware resiliency with Veeam Agent for Linux, putting backups on a different file system is very easy thanks to the seamless integration with Veeam Availability Suite. Backups of Veeam Agent for Linux systems can be placed in Veeam Backup & Replication repositories, and they can also be used with the Backup Copy Job function. This way, the Linux backups can be placed on a different file system to avoid propagation of ransomware from the source Linux system to the backups. The Backup Copy Job of Veeam Agent for Linux is shown below writing Linux backups to a Windows Server 2016 ReFS backup repository:

[Screenshot: Backup Copy Job configuration writing Linux backups to a Windows Server 2016 ReFS repository]

Now, let’s talk about Microsoft operating systems and resiliency against ransomware when it comes to backups. Veeam Endpoint Backup FREE will soon be renamed Veeam Agent for Microsoft Windows, so let’s briefly explain this changing situation. Veeam Endpoint Backup FREE was announced at VeeamON in 2014, and since becoming available it has been downloaded over 1,000,000 times. From the start, it has always provided backup Availability for desktop and server-class Windows operating systems. However, it didn’t include application-aware image processing or a technical support service. Veeam Agent for Microsoft Windows will introduce these key capabilities as well as many more.

With Veeam Agent for Microsoft Windows, you can also put backups on several different storage options: everything from NAS systems to removable storage, a Linux path, tape media, a deduplication appliance (when integrated with Veeam Availability Suite), and more. Removable storage is of particular interest, as it may be the only realistic option for many PC or laptop systems. A while ago, Veeam implemented a feature to eject removable media at the completion of a backup job. This option is available in the scheduling options when the backup target is removable media, and is shown below:

[Screenshot: backup job scheduling options showing the eject-removable-media setting]

This simple option can indeed make a big difference. We even had a user share a situation where ransomware encrypted their backups. This underscores the need for completely offline backups, or otherwise some form of an “air gap” between backup data and production systems. Assume that when ransomware gets into your organization, the only real solution is to restore from backup after the infection is contained. There is a whole practice of inbound detection and prevention, but if ransomware gets in, backup is your only option. Ejecting the media after each job is another mechanism that keeps the backup storage offline, giving even isolated PCs and laptops more Availability.

Availability in the ransomware era is a never-ending practice of diligence and configuration review. Additionally, the arsenal of threats will always become more sophisticated to meet our new defenses.


This post was provided by our service partner: Veeam

cyber security

Cyber Security: Cyber-Threat Trends to Watch for in 2017

Faced with the volume and rapid evolution of cyber threats these days, technology solution providers (TSPs) may find offering cyber security to be a daunting task. But with the right knowledge to inform your security decisions, and the right solutions and mitigation strategies in place, organizations like yours can keep customers ahead of the rushing malware tide.

The Webroot team recently released the latest edition of their annual Threat Report, which gives crucial insight into the latest threat developments based on trends observed over the last year, the challenges they bring, and how to defeat them. Let’s review 2016’s Threat Report highlights.

The New Norm: Polymorphism

In the last few years, the biggest trend in malware and potentially unwanted applications (PUAs) observed by Webroot has been polymorphic executables. Polymorphic spyware, adware, and other attacks are generated by attackers so that each instance is unique in an effort to defeat traditional defense strategies.

Traditional cyber security relies on signatures that detect one instance of malware delivered to a large number of people. It’s virtually useless for detecting a million unique malware instances as they are delivered to the same number of people. Signature-based approaches will never be fast enough to prevent polymorphic breaches.
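A toy sketch makes the point: a per-file hash signature changes completely when even one byte of a repacked sample changes, so a signature for one polymorphic instance says nothing about the next (the byte strings below are purely illustrative, not real malware):

    # Why per-file signatures fail against polymorphism (illustration only)
    import hashlib

    sample_a = b"payload-variant-1"
    sample_b = b"payload-variant-2"  # 'repacked' variant, same behaviour

    print(hashlib.sha256(sample_a).hexdigest())
    print(hashlib.sha256(sample_b).hexdigest())  # entirely different signature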

During 2016, approximately 94% of the malware and PUA executables observed by Webroot were only seen once, demonstrating how prevalent polymorphism is. Oddly enough, however, the volume of new malware and PUA executables has dropped significantly over the past three years, declining 23% and 81%, respectively.

While this decline in the volume of new malware encountered by Webroot customers is a decidedly positive trend, TSPs and their customers should continue to treat malware as a major threat. Approximately one in every 40 new executable file instances observed in 2016 was malware. These types of files are customized and often designed to target individuals, and cannot be stopped by traditional antimalware technologies.

Ransomware Continues to Rise

You’ve probably heard about at least one of the numerous ransomware attacks that have crippled hospitals and other institutions. According to the FBI, cyber criminals were expected to collect over $1 billion in ransoms during 2016.[1] It’s quite likely that actual losses suffered were even higher, given the disruption of productivity and business continuity, as well as a general reluctance to report successful ransomware attacks.

In 2017, Webroot anticipates that ransomware will become an even larger problem. According to the Webroot cyber security Threat Research team, the following are 3 ransomware trends to be aware of:

Locky, the most successful ransomware of 2016

In its first week in February 2016, Locky infected over 400,000 victims, and it has been estimated to have earned over $1 million a day since then.[2] Throughout 2016, Locky evolved not only to use a wide variety of delivery methods, but also to camouflage itself to avoid detection and make analysis more difficult for security researchers. Locky shows no signs of slowing down, and is likely to be equally prolific in the coming year.

Exploit Kits

The second important trend involves the frequent changes in the exploit kits ransomware authors use. As an example, most exploit kit ransomware in the first half of 2016 was distributed using Angler or Neutrino. By early June, Angler-based ransomware had virtually disappeared, as cybercriminals began switching to Neutrino. A few months later, Neutrino also disappeared. Toward the end of 2016, the most commonly used exploit kits were variants of Sundown and RIG, most of which support Locky.

Ransomware as a Service (RaaS)

Despite having emerged in 2015, ransomware-as-a-service (RaaS) didn’t find its place in the ransomware world until 2016. RaaS enables cybercriminals with neither the resources nor the know-how to create their own ransomware to easily generate custom attacks. The original authors of the RaaS variant being used get a cut of any paid ransoms. RaaS functions similarly to legitimate software, with frequent updates and utilities to help distributors get the most out of their service. The availability and ease of RaaS likely means even greater growth in ransomware incidents.

Stay Informed

The best defense is knowing your enemy. Download the complete 2017 Webroot Threat Report to get in-depth information on the trends we’ve explored above, as well as other crucial insights into phishing, URL, and mobile threats.

[1] http://money.cnn.com/2016/04/15/technology/ransomware-cyber-security/index.html

[2] http://www.smartdatacollective.com/david-balaban/412688/locky-ransomware-statistics-geos-targeted-amounts-paid-spread-volumes-and-much-

————————————————————————————————————————–
The information above was provided by our service partner: Webroot.