
5 Trends in Enterprise Cloud Services

Even though the future of IT belongs to the cloud, much of the enterprise world is still clinging to legacy systems. In fact, 90 percent of workloads today are completed outside of the public cloud. With this continued resistance to cloud services adoption at the enterprise level, today’s “cloud evangelists” are playing a more important role in the industry than ever before.

The role of a cloud evangelist sits somewhere between product marketer and the company’s direct link to customers. These individuals are responsible for spreading the doctrine of cloud computing and convincing reluctant IT admins to make the jump to the cloud. It’s a dynamic role, and nowhere is this more apparent than with the talented cloud evangelists at NTT Communications.

In a recent interview, two of NTT Communications’ leading cloud evangelists, Masayuki Hayashi and Wataru Katsurashima, sat down to chat about the current challenges slowing enterprise cloud migration and what companies can do to mitigate them. In this post, we take a look at the five areas that are top-of-mind for cloud evangelists today.

The Changing Role of Cloud Evangelists

While the role itself may be new, it is not insulated from change. When it was first established, cloud evangelists were responsible for ferrying customers through every stage of cloud adoption. From preliminary fact finding to architecting the final network, cloud evangelists played a prominent role.

Today, that hands-on approach is quickly changing. As Hayashi states in the article, “Evangelists have traditionally played the ‘forward’ position, but recently we are more like ‘midfielders’ who focus on passing the ball to others.” Rather than managing every task involved, evangelists are placing more focus on improving processes and strengthening organizational knowledge.

Slow Migration of On-Premises Systems to the Cloud

The pace at which enterprises are migrating on-premises datacenters to the cloud has been less than stellar in recent years. Cloud services migrations have remained relatively flat year-over-year, largely due to the complexities surrounding migration. Enterprises are having difficulty managing their existing infrastructure while simultaneously bringing new cloud-based services online.

For evangelists, it’s important to consider this added complexity when proposing a migration plan. NTT Communications Chief Evangelist Wataru Katsurashima recommends finding solutions that can accommodate the unique needs of enterprise cloud migration, such as hybrid cloud infrastructures that function like a single public cloud environment.

High Migration Costs are Slowing Adoption

The added complexity of enterprise cloud migration also directly affects its price, pushing it beyond the reach of some companies. The expensive price tag is forcing many enterprises to rethink their decision to migrate to the cloud, opting instead to delay the migration another year or to execute a much slower, gradual migration.

One solution discussed for this high cost is using existing services within the VMware vCloud® Air™ Network. According to Katsurashima, leveraging technology from NTT Communications and VMware in a synergistic way can dramatically cut costs through efficiencies and better utilization.

New Hybrid Cloud Structures on the Horizon

Like the role of cloud evangelists, the hybrid cloud, too, is changing. Hayashi explains:

“In the past, a hybrid cloud was similar to a Japanese hot spring inn, with a single hallway that leads from the hot spring to each guest room. NTT Communications aims to achieve a ‘modern hotel’ model. In other words, there is one front desk and a variety of rooms with different purposes, such as extended-stay rooms for VIPs and rooms for business meetings.”

The hotel analogy provides a way to visualize the new features and capabilities that hybrid cloud environments must deliver. Since no single cloud service provider can be everything to every customer, establishing greater collaboration and cross-pollination between service providers is critical to success.


Redefining the Role of Information System Departments

The days of information system departments only responding to the demands of individual business departments are over. The IT teams that do not help business departments innovate from the inside-out will become increasingly obsolete.


This article was provided by our service partner: VMware.


Understanding Cyberattacks from WannaCrypt

Cyberattacks are growing more sophisticated, and more common, with every passing day. They present a continuous risk to the security of critical data, and they create an added layer of complication for technology solution providers and in-house IT departments trying to keep their clients’ systems safe and functional. Despite industry efforts and innovations, the world saw this year’s latest major cyberattack, “WannaCrypt,” unfold on Friday morning.

The attack, stemming from the “WannaCrypt” software, started in the United Kingdom and Spain and quickly spread globally, encrypting endpoint data and demanding a ransom, paid in Bitcoin, to regain access to that data. The WannaCrypt exploits used in the attack were derived from exploits stolen in an attack on the National Security Agency earlier this year.

On March 14, Microsoft® released a security update to patch this vulnerability and protect customers from the quickly spreading cyberattack. While this protected newer Windows™ systems and computers that had Windows Update enabled to apply the latest patches, many computers globally remained unpatched. Hospitals, businesses, governments, and home computers were all affected. Microsoft took additional steps to assist users with older systems that are no longer supported.

Our goal at ConnectWise is to provide partners with the tools they need to support their clients and prevent these kinds of attacks from happening. In addition to our core ConnectWise Automate™ solution, which can detect out-of-date systems and alert admins, we have partnered with numerous vendors who provide ConnectWise certified integrations for security and business availability to block, prevent, or recover from this attack.
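As a rough illustration of the kind of check involved (a hypothetical sketch, not ConnectWise Automate’s actual detection logic; the KB numbers below are the real March 2017 MS17-010 updates for Windows 7 / Server 2008 R2, and the host inventory is invented):

```python
# MS17-010 updates that patch the vulnerability WannaCrypt exploited
# (security-only update and monthly rollup for Windows 7 / Server 2008 R2).
MS17_010_KBS = {"KB4012212", "KB4012215"}

def is_patched(installed_kbs):
    """Return True if any MS17-010 update appears in a machine's KB inventory."""
    return bool(MS17_010_KBS & set(installed_kbs))

# Hypothetical fleet inventory: host name -> list of installed updates.
fleet = {
    "server-01": ["KB4012212", "KB3125574"],
    "desktop-07": ["KB3125574"],
}

unpatched = [host for host, kbs in fleet.items() if not is_patched(kbs)]
print(unpatched)  # → ['desktop-07']
```

In practice an RMM tool gathers the installed-update inventory automatically; the point is simply that out-of-date systems can be flagged before an outbreak, not after.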

See how our vendors are addressing the latest attack:

  • Bitdefender
  • ESET
  • Webroot
  • Malwarebytes
  • VIPRE
  • Acronis
  • StorageCraft


Choose the best server RAM configuration

Watch your machine memory configurations – always take care to implement the best server RAM configuration! You can’t just throw RAM at a physical server and expect it to perform as well as it possibly can. Depending on your DIMM configuration, you might unwittingly slow down your memory speed, which will ultimately slow down your application servers. This speed decrease is virtually undetectable at the OS level, but anything that leverages lots of RAM to function, such as a database server, can take a substantial performance hit.

As an example, suppose you wish to configure 384GB of RAM on a new server that has 24 memory slots. You could populate every slot with 16GB sticks of memory to reach the 384GB total. Or, you could spend a bit more money to buy 32GB sticks and fill only half of the slots. The outcome is the same amount of RAM; the higher-density sticks simply carry a slightly higher price tag.

On this class of hardware, the fully populated 16GB DIMM configuration runs the memory roughly 22% slower than the higher-density option: all 24 slots filled with 16GB sticks runs the memory at 1866 MHz, while 32GB sticks in half the slots run it at 2400 MHz.
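The arithmetic behind that comparison is easy to sanity-check; here is a small sketch using the example figures above (actual supported speeds vary by CPU, motherboard, and DIMM type):

```python
# Two ways to reach 384GB on a 24-slot server, with the memory speed
# each DIMM population yields in the example above.
def dimm_option(stick_gb, sticks, mhz):
    return {"total_gb": stick_gb * sticks, "mhz": mhz}

full = dimm_option(16, 24, 1866)  # every slot populated with 16GB sticks
half = dimm_option(32, 12, 2400)  # half the slots populated with 32GB sticks

# Same capacity either way...
assert full["total_gb"] == half["total_gb"] == 384

# ...but the fully populated configuration runs measurably slower.
slowdown = 1 - full["mhz"] / half["mhz"]
print(f"Fully populated 16GB config is {slowdown:.0%} slower")  # → 22% slower
```

Running the same check against your vendor’s memory population guidelines before a purchase takes minutes and can prevent a permanent, silent performance penalty.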

Database servers, both physical and virtual, use memory as an I/O cache, improving the performance of the database engine by reducing its dependency on slower storage and leveraging the speed of RAM. If the memory is slower, your databases will perform worse. Validate the memory speed on your servers, both now and for upcoming hardware purchases. Ensure that your memory configuration yields the fastest possible performance by implementing the best server RAM configuration; your applications will be better for it!


Healthcare industry embraces Cisco Umbrella

Healthcare industry expenditures on cloud computing are projected to grow at a compound annual growth rate of more than 20% through 2020. The industry has quickly transitioned from being hesitant about the cloud to embracing the technology for its overwhelming benefits.

George Washington University, a world-renowned research university, turned to Cisco Umbrella to protect its most important asset: its global reputation as a research leader.

“We chose Cisco Umbrella because it offered a really high level of protection for our various different user bases, with a really low level of interaction required to implement the solution, so we could start blocking attacks and begin saving IR analyst time immediately,” said Mike Glyer, Director, Enterprise Security & Architecture.

Customers love Umbrella because it is a cloud-delivered platform that protects users both on and off the network. It stops threats over all ports and protocols for the most comprehensive coverage. Plus, Umbrella’s powerful, effective security does not require the typical operational complexity. Because everything is performed in the cloud, there is no hardware to install and no software to manually update. The service is a scalable solution for large healthcare organizations with multiple locations, like The University of Kansas Hospital, ranked among the nation’s best hospitals every year since 2007 by U.S. News & World Report.

“Like every hospital, we prioritize the protection of sensitive patient data against malware and other threats. We have to safeguard all network-connected medical devices, as a compromise could literally result in a life-or-death situation,” says hospital Infrastructure Security Manager Henry Duong. “Unlike non-academic hospitals, however, our entwinement with medical school and research facility networks means we must also protect a lot of sensitive research data and intellectual property.”

Like many healthcare providers, The University of Kansas Hospital would spend a lot of time combing through gigabytes of logs, trying to trace infections to their point of origin and identify which machines were calling out. The team turned to Cisco Umbrella for help.

“First we just pointed our external DNS requests to Cisco Umbrella’s global network, which netted enough information to prompt an instant ‘Wow, we have to have this!’ response,” Duong says. “When our Umbrella trial began, we saw an immediate return, which I was able to document using Umbrella reporting and share with executive stakeholders. Those numbers, which ultimately led to executive buy-in, spoke volumes about the instant effect Umbrella had on our network.”
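The first step Duong describes, forwarding external DNS to Umbrella’s global network, is just a resolver change. As a rough sketch (the addresses below are Umbrella’s well-known public anycast resolver IPs; an enterprise deployment would typically configure them as forwarders on internal DNS servers rather than on individual hosts):

```
# Cisco Umbrella's public anycast resolvers (the classic OpenDNS addresses),
# e.g. as forwarders on an internal DNS server or in a host's resolver config:
nameserver 208.67.222.222
nameserver 208.67.220.220

# To spot-check that queries resolve through Umbrella:
#   dig @208.67.222.222 example.com +short
```

Because the change is made at the resolver rather than on endpoints, visibility and blocking take effect across the network immediately, which is what made the trial results so quick to demonstrate.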

This overwhelming success led the team to later purchase Umbrella Investigate.

“We suddenly went from struggling to track attacks to being able to correlate users with events and trace every click of their online travels. Then, Cisco Umbrella Investigate gave us the power to understand each threat’s entire story from start to finish,” Duong says. “We’re able to dig deep into the analysis to see what users are doing, where they’re going, and pinpoint any contributing behaviors so we can mitigate most efficiently.”

The University of Kansas Hospital estimates that with Cisco Umbrella, they have:

  • Decreased threats by an estimated 99 percent
  • Shortened investigation time by 75 percent
  • Increased visibility and automation while reducing exposure to ransomware

This article was provided by our service partner: Cisco.


Improve your disaster recovery reliability with Veeam

The only two certainties in life are death and taxes. In IT, you can add disasters to this short list of life’s universal anxieties. Ensuring disaster recovery reliability is critical to your organisation’s enduring viability in its chosen marketplace.

Regardless of the size of your budget, the strength of your team, and your level of IT acumen, you will experience application downtime at some point. Amazon’s recent east coast outage is testimony to the fact that even the best and brightest occasionally stumble.

The irony is that while many organizations make significant investments in their disaster recovery (DR) capabilities, most have a mixed track record, at best, with meeting their recovery service level agreements (SLAs). As this chart from ESG illustrates, only 65% of business continuity (BC) and DR tests are deemed successful.


In his report, “The Evolving Business Continuity and Disaster Recovery Landscape,” Jason Buffington broke down respondents to his DR survey into two camps: “green check markers” and “red x’ers.”

Citing his research, Jason recently shared with me: “Green Checkers assuredly don’t test as thoroughly, thus resulting in a higher passing rate during tests, but failures when they need it most — whereas Red X’ers are likely to get a lower passing rate (because they are intentionally looking for what can be improved), thereby assuring a more likely successful recovery when it really matters. One of the reasons for lighter testing is seeking the easy route — the other is the cumbersomeness of testing. If it wasn’t cumbersome, most of us would likely test more.”

DR testing can indeed be cumbersome. In addition to being time-consuming, it can also be costly and fraught with risk. The risk of inadvertently taking down a production system during a DR drill is incentive enough to keep testing to a minimum.

But what if there was a cost-effective way to do DR testing that mitigates risk and dramatically reduces the preparation work and the time required to test the recoverability of critical application services?

By taking the risk, cost and hassle out of testing application recoverability, Veeam’s On-Demand Sandbox for Storage Snapshots feature is a great way for organizations to leverage their existing investments in NetApp, Nimble Storage, Dell EMC and Hewlett Packard Enterprise (HPE) Storage to attain the following three business benefits:

  1. Risk mitigation: Many IT decision makers have expressed concerns around their ability to meet end-user SLAs. By enabling organizations to rapidly spin-up virtual test labs that are completely isolated from production, businesses can safely test their application recoverability and proactively address any of their DR vulnerabilities.
  2. Improved ROI: In addition to on-demand DR testing, Veeam can also be utilized to instantly stand-up test/dev environments on a near real-time copy of production data to help accelerate application development cycles. This helps to improve time-to-market while delivering a higher return on your storage investments.
  3. Maintain compliance: Veeam’s integration with modern storage enables organizations to achieve recovery time and point objectives (RTPO) of under 15 minutes for all applications and data. Imagine showing your IT auditor in real-time how quickly you can recover critical business services. For many firms, this capability alone would pay for itself many times over.

Back when I was in school, 65% was considered a passing grade. In the business world, a 65% DR success rate is flirting with disaster. DR proficiency may require lots of practice, but it also requires Availability software, like Veeam’s, that works hand-in-glove with your storage infrastructure to make application recoveries simpler, more predictable, and less risky.


This article was provided by our service partner: Veeam.