
HOT AND HYBRID

Idle resources, visibility, hidden costs, security loose ends and the complications of provisioning – can the hybrid cloud address such on-ground issues?


When Cloud came in, it did not just disrupt the IT industry; it also dislocated and disoriented a lot of its parts. Soon the “whether or not” question changed into “which one”. Thankfully, hybrid cloud solved the big public-or-private dilemma by melting the “or” into a sort of “and”.

The adoption appetite clearly mirrors the enthusiasm that this welding of two different worlds has generated. Hybrid/multi-cloud has been noted as the predominant strategic posture for managing digital-era IT and business transformation, with 62% of enterprises pursuing a hybrid IT strategy, as per 451 Research’s Voice of the Enterprise (VotE): Digital Pulse, Budgets and Outlook 2019. Similarly, Everest Group research indicates that 58% of enterprise workloads are, or are expected to be, on hybrid or private cloud. In fact, the hybrid cloud infrastructure market is projected to be worth as much as USD 128.01 billion (Mordor Intelligence).

OK, hybrid cloud is hot. But it does have one distinctive issue to reckon with: idle resources and overprovisioning. And that is not just one issue – it trickles into many more problems.

PROVISIONING – PLAYING TETRIS

Organizations are, increasingly if not completely, becoming aware of the idle resources in their cloud infrastructure. According to Shrikant Navelkar, Director – Oracle Relationships, Clover Infotech, “They are realizing that these idle resources are a cause of unnecessary costs. Idle resources result from not having complete visibility into cloud utilization and hence procuring resources on the cloud well before they are required. Organizations must therefore have complete visibility on when the resources are required, and the IT teams must have autonomy to decide and plan this well in advance to ensure better return on cloud investments.”

Until recently, enterprises had not necessarily spent much time studying the idle resources and costs in hybrid cloud environments. According to Kumara Raghavan, Director, SDI, HPC and AI, Lenovo Data Center Group, APAC, now that cost has become an issue, a lot of organizations are looking at tools that can help them drive financial accountability and deliver accurate visibility into resources and utilization. “Idle resources also present a security threat, as these resources might not get updated to the latest security protocols, causing security vulnerabilities. Once idle resources are identified, IT teams can manage and secure these resources easily,” he added.
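What such visibility tooling boils down to can be sketched in a few lines. The example below is a minimal, illustrative script that assumes an AWS estate, the boto3 SDK and a crude “average CPU below 5% for a week” definition of idle – thresholds and scope that are this article’s assumptions, not anything the interviewees prescribe.

```python
# Flag running EC2 instances whose average CPU over the past week is very low.
# Assumptions (illustrative only): AWS credentials are configured, boto3 is
# installed, and low CPU is an acceptable rough proxy for "idle" -- real
# tooling would also weigh network, disk and business context.
from datetime import datetime, timedelta, timezone

import boto3

CPU_IDLE_THRESHOLD = 5.0   # percent; an assumed cut-off
LOOKBACK_DAYS = 7

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)

reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        instance_id = instance["InstanceId"]
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=now - timedelta(days=LOOKBACK_DAYS),
            EndTime=now,
            Period=3600,
            Statistics=["Average"],
        )
        datapoints = stats["Datapoints"]
        if not datapoints:
            continue
        avg_cpu = sum(p["Average"] for p in datapoints) / len(datapoints)
        if avg_cpu < CPU_IDLE_THRESHOLD:
            print(f"{instance_id}: avg CPU {avg_cpu:.1f}% over "
                  f"{LOOKBACK_DAYS} days – candidate for review")
```

Commercial cost-governance tools layer tagging, chargeback and network and storage metrics on top of this, but the principle is the same: measure utilization continuously and surface the idle candidates to the teams accountable for them.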

Cost management is, and should be, a continuous effort, stressed Narendra Bhandari, Senior Vice President at Persistent Systems. He further explained that this will also help build effective policies on moving workloads around, and a case for modernizing traditional and custom workloads to take advantage of containers and for rearchitecting code using microservices.

THE LONG TAIL – SECURITY AND INTEGRATION BUMPS

The word ‘idle’ can be spelt in many ways and, interestingly, one of them is ‘fragile’. There is a security implication to these extra machines sitting in the dugout. “Organizations clearly understand the need for strong cybersecurity and are quickly realizing the benefits of security-as-a-service. But, as companies migrate to the cloud, the attack surface also expands. This has led to a surge in cyberattacks, and many companies are struggling to prioritize projects and tools that can best protect their people and business,” stated Rohan Vaidya, Director of Sales – India, CyberArk.

VISIBILITY IS THE KEY ASPECT WHILE MANAGING SECURITY IN AN ENVIRONMENT WHICH SPANS OUTSIDE THE ORGANIZATION TO CLOUD (HYBRID OR PUBLIC)

— Murtaza Bhatia, National Manager – Vertical Solutions, NTT Ltd. (India)

ORGANIZATIONS MUST HAVE COMPLETE VISIBILITY ON WHEN THE RESOURCES ARE REQUIRED, AND THE IT TEAMS MUST HAVE AUTONOMY TO DECIDE AND PLAN THIS WELL IN ADVANCE

— Shrikant Navelkar, Director – Oracle Relationships, Clover Infotech

COST MANAGEMENT WILL HELP BUILD EFFECTIVE POLICIES ON MOVING WORKLOADS AROUND AND A CASE FOR MODERNIZING TRADITIONAL AND CUSTOM WORKLOADS

— Narendra Bhandari, Senior Vice President, Persistent Systems

Poor integration and weak deployment velocity of cloud investments also deal an unexpected blow to developers and security teams alike. Ask Vaidya and he holds a mirror to the not-so-pretty reality out there. “‘Quick and dirty’ is a well-worn term when it comes to IT professionals who want to get things done to support the business demands. The business team is constantly under pressure to catch up with customer demands, adapt to the external environment, or respond to a changing competitive landscape,” he said, adding that their time to market these days depends heavily on the technology teams which support their business applications.

“It’s a tough situation to always balance the velocity of deployment with security guidelines. The general perception is that non-critical applications or infrastructure may not need as much attention to security guidelines. The modern hacker has been exploiting these vulnerabilities, and emerging technologies give the hacker ample opportunities to exploit them effortlessly,” he said.

Among the companies surveyed in India for the Palo Alto Networks Asia-Pacific Cloud Security Study, conducted by Ovum Research, nearly half (47%) were found to operate more than 10 security tools within their infrastructure to secure their cloud. However, according to Riyaz Tambe, Director – Sales Engineering, India and SAARC, Palo Alto Networks, “Having numerous security tools creates a fragmented security posture, adding further complexity to managing security in the cloud, especially if the companies are operating in a multi-cloud environment.”

ONE MORE RIPPLE – THE DEVELOPER SIDE

The issue of weak integration or clumsy deployment is not restricted to hybrid cloud environments alone, and Navelkar dismisses the idea of putting hybrid clouds in the spotlight here. “This can happen on other infrastructure as well.” He maintains, though, that the damage caused places a heightened burden on developers and security teams. “For instance, if an application is migrated from on-premise to cloud and is not integrated well, it will not yield the desired results in terms of performance, output, and strategic impact. The developers would then have to understand the root cause and impact areas and fix the issues. Such activities will consume their time, which could otherwise be channeled towards productive areas such as new product development and enhancements.”

What eventually happens is that poor integration and weak deployment increase the risk of data breaches, which means security teams will face unprecedented challenges unless they closely guard deployment and integration and take appropriate action proactively.

During that first phase of cloud migration, as Richard Beckett, Senior Product Marketing Manager for Public Cloud at Sophos, described, you are likely to build that infrastructure manually in the cloud provider console: clicking through the console to create your VPC, your network and your instances, to configure security groups, and so on.

“But this infrastructure can be hard to replicate exactly – so when a new development environment is required that mirrors the live production environment exactly, or the organization needs to replicate the infrastructure in another region, it’s very difficult without a recipe to create that exact same infrastructure. And those slight variations in configuration are bad news, not only because they create weak deployment velocity, but also because they create bugs and security issues,” Beckett stated.

This issue is compounded as you add more developers, each requiring their own environment. Organizations can end up with development, test, and production environments that differ: different OS versions, different configuration settings; something will not be aligned. All of that leads to application bugs when each team merges its changes into the live system, and to a nightmare for security and operations teams who need to fix security and reliability issues across slightly different environments.

According to Beckett, “To solve that problem, infrastructure-as-code templates allow development teams to describe infrastructure as a text file – a JSON file. And even better, teams can update that file to make individual changes once it is built and increase velocity.”
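A minimal sketch of what such a template-driven recipe can look like, assuming the AWS CDK for Python (one of several infrastructure-as-code options; the stack, VPC and security group names here are illustrative, not drawn from the interviews):

```python
# A tiny infrastructure-as-code stack. The same definition can be synthesized
# and deployed repeatedly, so development, test and production environments
# stay identical instead of drifting apart.
from aws_cdk import App, Stack
from aws_cdk import aws_ec2 as ec2
from constructs import Construct


class DemoNetworkStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # One VPC with subnets spread across two availability zones.
        vpc = ec2.Vpc(self, "AppVpc", max_azs=2)

        # A declared security group instead of hand-edited console rules.
        sg = ec2.SecurityGroup(self, "AppSg", vpc=vpc,
                               description="App tier security group")
        sg.add_ingress_rule(ec2.Peer.any_ipv4(), ec2.Port.tcp(443),
                            "HTTPS from anywhere")


app = App()
DemoNetworkStack(app, "DemoNetworkStack")
app.synth()  # `cdk synth` emits the JSON (CloudFormation) template
```

The synthesized JSON template is the “recipe” Beckett describes: check it into version control, review changes to it like any other code, and deploy it unchanged to another account or region to get an exact copy of the environment.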

Tambe suggests it is ideal for organizations to have a central console that uses technologies such as artificial intelligence to help prevent known and unknown malware threats, and to quickly remediate accidental data exposure when it arises. “Start automating threat intelligence with natively integrated, data-driven, analytics-based approaches (leveraging machine learning/artificial intelligence) to avoid human error.”

Experts like Murtaza Bhatia, National Manager, Vertical Solutions, NTT Ltd. (India), believe that visibility is enhanced by an environment seamlessly integrated with security controls and visibility solutions that mutually share context. “It also provides rich data to make it much easier for the automation function to correlate with the information being generated. Integration plays a key role in exchanging context between the on-premise and cloud security controls so that uniform policies can be applied to infrastructure and services spanning on-premise and cloud,” Bhatia stated.

He further added that for this to happen, the application must be built on ‘secure by design’ principles, which requires developers to run the SDLC with security testing moved towards the left of the cycle. “This can lead to conflicts between the security and development teams in moving code to production because of testing at each phase of the cycle. However, this can be overcome with the use of modern security testing tools that automate testing processes on code check-in and reveal the corresponding vulnerabilities. This provides IT and system integrators with the tools needed to account for each stage of the life cycle – from design and development to deployment and beyond.”
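One way to picture that kind of automated, shift-left testing is a gate that runs on every code check-in and fails the pipeline when the scanner reports findings. The sketch below assumes a Python codebase and the open-source Bandit static analyzer purely as an example; any SAST tool wired into the CI pipeline plays the same role.

```python
# Run a static security scan on every check-in and fail the build on findings.
# Bandit is used here only as an illustrative scanner; the pattern applies to
# whatever security testing tool the pipeline integrates.
import json
import subprocess
import sys


def run_security_scan(source_dir: str = "src") -> int:
    result = subprocess.run(
        ["bandit", "-r", source_dir, "-f", "json", "-q"],
        capture_output=True, text=True,
    )
    report = json.loads(result.stdout or "{}")
    findings = report.get("results", [])
    for finding in findings:
        print(f"{finding['filename']}:{finding['line_number']} "
              f"[{finding['issue_severity']}] {finding['issue_text']}")
    # A non-zero exit code makes the CI job fail, keeping the flagged code
    # out of production until the findings are addressed.
    return 1 if findings else 0


if __name__ == "__main__":
    sys.exit(run_security_scan())
```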

ONE FOR THE ROAD

These may be uncomfortable questions, but enterprises will have to anticipate them, pre-empt them and confront them.

Incidentally, 451 Research pointed out an unexpected drift catching on in the enterprise landscape. Enterprises may not be ‘avoiding’ complexity, but actually ‘choosing’ it for the value it delivers in the form of differentiated offerings, more efficient applications, happier customers and lower costs. They want to chase ‘optimization’ rather than ‘resolution’. It is not just simplification of complexity that they are after: they do not want to lose the value that complexity has created, and that is where ‘optimizing’ helps, because it lets complexity remain but ‘manages’ it.

Counterintuitive and strange, but when has IT been predictable and straightforward all these decades!

Whether the car goes back in your own garage or a parking lot, a flat tyre can still spoil a good day. What ultimately matters is keeping the toolbox around. And one that works for you.

ORGANIZATIONS CLEARLY UNDERSTAND THE NEED FOR STRONG CYBERSECURITY AND ARE QUICKLY REALIZING THE BENEFITS OF SECURITY-AS-A-SERVICE

— Rohan Vaidya, Director of Sales – India, CyberArk

