The evolution of smaller, more flexible data centres will bring better levels of service and virtualisation closer to home
“Oil and gas is the original big data industry” Charles Karren, senior director of oil and gas industry strategy, Oracle
Several distinct data trends have emerged across the sector over the last couple of years, namely in deep-water offshore and unconventional onshore assets, primarily in the US but also in Australia. The amount of information coming out of these assets has increased exponentially. What makes it interesting is the ability to manage this data back at the operations centres.
Most companies now have what are called ROCs (Remote Operations Centres), where they are able to look at data from an oil rig in real time. While this is probably one of the bigger evolutionary, rather than dramatic, changes, being able to manage the data onsite as well as back at the operations centre is an increasing necessity.
“One thing we are working very closely on now is to develop mobile solutions that will be able to take application data and manage it remotely on any kind of far-flung asset,” said Charles Karren, Oracle’s senior director of oil and gas industry strategy. To increase the depth of its offering, as well as developing mobility applications, Oracle has bought multiple companies and integrated asset management for predictive and preventive maintenance, a big part of the big data component. “Once you are able to see, use and respond to the different data that’s being collected from multiple assets and different environments, and achieve more real-time and interactive visibility of day-to-day operations, the net effect is a reduction in decision lag-time and more efficient operations,” he added.
In terms of big data, the oil and gas industry will evolve to enable more data analytics from a wider range of sources, from which to develop best practices. Having better well history, from local activities as well as from other operations elsewhere, can help with collaboration, increase operational efficiency and reduce non-productive time.
Capture, Keep, Use
However, as the big data rolls in, so does a resource problem for managing it. Many organisations are struggling to find a way to sit back, think and analyse what the data is saying. Some company infrastructures simply cannot handle such volumes of information, so data gets thrown out, leaving only a limited set. Some are turning to engineered systems, containing both hardware and software components, that can not only capture big data but also retain it and use it to make predictive recommendations.
Should, for example, a drill bit become stuck in a particular type of rock formation, or the drill site collapse around the tool, all too often operations have to stop until the situation is resolved. Today, there are two choices of approach: prescriptively, with vibration or fishing techniques to shake and lift the tool out; and preventatively, by referencing the risks from prior incidents in similar (although not necessarily local) environments and planning accordingly. These references could be from deep water in the Gulf of Mexico, the North Sea, offshore Brazil, West Africa or somewhere else. Previously, what happened and when it happened was never recorded. Now that this data is being kept, operators will be able to analyse it and implement risk assessments to make better recommendations for future operational approaches.
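The value of keeping that incident history is easiest to see with a concrete sketch. The snippet below is a minimal illustration, not anything described by Oracle in the article: it assumes a hypothetical record of past stuck-pipe incidents (formation type, region, resolution method, hours lost) and simply surfaces the most frequently successful resolution for a matching formation, which is the kind of recommendation an engineered system could make once the data is retained rather than thrown away.

```python
from collections import Counter
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class StuckPipeIncident:
    formation: str      # rock formation where the tool became stuck
    region: str         # e.g. "Gulf of Mexico", "North Sea"
    resolution: str     # how the tool was freed, e.g. "vibration", "fishing"
    hours_lost: float   # non-productive time attributed to the incident

def recommend_resolution(history: List[StuckPipeIncident], formation: str) -> Optional[str]:
    """Suggest the most frequently used resolution for a formation type,
    drawing on incidents recorded anywhere in the world, not just locally."""
    matches = [i.resolution for i in history if i.formation == formation]
    if not matches:
        return None
    return Counter(matches).most_common(1)[0][0]

# Hypothetical incident history kept in a data warehouse
history = [
    StuckPipeIncident("salt", "Gulf of Mexico", "fishing", 36.0),
    StuckPipeIncident("salt", "West Africa", "vibration", 12.5),
    StuckPipeIncident("salt", "Brazil", "vibration", 9.0),
    StuckPipeIncident("shale", "North Sea", "fishing", 20.0),
]

print(recommend_resolution(history, "salt"))   # -> "vibration"
```

A real system would weight by outcome and hours lost rather than simple frequency, but even this toy lookup is only possible because the incidents were recorded in the first place.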
Partnerships & Standards
Partnerships around data handling are growing too. For example, PPDM (the Professional Petroleum Data Management Association) is talking about how to manage well data (big data), and Energistics is an organisation that helps integrate that data from, for example, a data warehouse. “We at Oracle are big supporters of those because they’re open standards, and the industry, the oil and gas companies themselves, are part of these innovations. We do not advocate for proprietary or closed systems; we are very much an open standards company. That helps people, helps the industry, helps these companies to collaborate better and in a more efficient way,” said Karren.
Desperate Need for Real Standards
While there are standards associations such as the European Data Services Association, the Data Centre Alliance (also based in Europe) and ECO (Germany), there is a huge need for an international benchmark developed by an independent body.
Currently, the most widely used is the Uptime Institute’s tiering standard, by far the most commonly applied to a data centre. However, there are a number of problems with the current system, namely that people usually self-declare as Tier 3, Tier 3 plus or Tier 3 star. Consider that in the UK, by way of example, there are only two certified Tier builds, yet operators will claim to be Tier 3 or above, which only confuses the issue.
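To show why an unverified “Tier 3 plus” claim matters, the short sketch below converts availability percentages into implied annual downtime. The tier figures are the commonly quoted Uptime Institute design availabilities, included here purely as an assumption for illustration; they are not taken from the article, and formal certification is about facility topology rather than a single number.

```python
# Commonly quoted design availability per Uptime Institute tier
# (assumed figures for illustration only).
TIER_AVAILABILITY = {
    "Tier I": 99.671,
    "Tier II": 99.741,
    "Tier III": 99.982,
    "Tier IV": 99.995,
}

HOURS_PER_YEAR = 8766  # average year, allowing for leap years

def annual_downtime_hours(availability_pct: float) -> float:
    """Maximum downtime per year implied by an availability percentage."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

for tier, pct in TIER_AVAILABILITY.items():
    print(f"{tier}: {pct}% -> {annual_downtime_hours(pct):.1f} hours/year")

# Tier III implies roughly 1.6 hours of downtime a year; Tier II nearer 23 hours.
# That gap is exactly what a self-declared "Tier 3 plus" label can hide.
```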
“There are various standards and various bodies, but the problem is that they are so widely abused that they are becoming irrelevant. There is a desperate need for real standards to come out,” said Alex Rabbetts, managing director at MigSolv.
In the last year or so, particularly with cloud computing, many IT vendors that had pretty much viewed the data centre as merely the physical location where some of their more precious equipment ends up have put in a lot more effort. Companies like HP, IBM and Cisco have been developing virtualisation and cloud technologies. But as well as vendors taking a much closer look at the data centre itself, one of the bigger issues is location.
“We have seen a lot of new locations emerging. We’re doing some work in Iceland as a location at the moment, and perhaps rather more interestingly from our side, we’re also doing some work for Trinidad and Tobago as a new location for data centres,” said BroadGroup’s MD, Steve Wallage.
Typically, data has gone to the big hubs where telecommunications are very strong. For example, in the Western European market around 70 percent of the built-out data centre space has been in London, Frankfurt, Amsterdam or Paris. Even fairly large cities like Madrid or Milan have remained much smaller data centre locations.
Over the last couple of years other regions have started to be marketed as data centre hubs, thanks to two main drivers. Firstly, internet giants like Google, Amazon, Facebook and Microsoft sought suitable alternatives, settling on ‘middle-of-nowhere’ places in the US. Secondly, as well as the operational benefits that come from these diverse and remote locations, state governments also started offering specific tax and property incentives. That model is now being copied around the world.
“More recently we’ve seen places like Iceland and Norway starting to promote themselves as low-cost power locations, as well as potential benefits if you go to certain locations outside the big cities. For example, if you don’t go to Stockholm, but opt to go further north in Sweden, there are a lot of incentives available,” said Wallage.
One of the big weaknesses has been connectivity. Although these new locations have worked hard to develop themselves as data centre honey pots, where decent telecoms or fibre choices are lacking it has still been necessary to build new subsea cables. For example, the Emerald Express links Ireland and the US and also goes to Iceland, so suddenly Iceland becomes a lot more viable as a location. Google built in Finland and Facebook in Sweden, and with those investments came new fibre, adding more regions to the list of attractive data centre locations.
“Increasingly, companies want to keep their data close, not just for security and privacy reasons but also for the decreased cost of getting it in and out of the data centre. They are looking for solutions, and those solutions are very much on a regional basis.” Alex Rabbetts, MigSolv
Rabbetts’ company, MigSolv, is a data centre situated in Norwich in the UK. Because of its location, it’s striking a chord with the surrounding oil and gas industry operating just off the northern coast.
“A lot of our customers believed that connectivity into somewhere like Norwich was going to give them problems with latency and speed. But of course this is not the case.” He doesn’t think that connectivity is going to be an issue for remote locations such as oil rigs and wind farms either. “At least nothing like it used to be generally with comms. Now you can get very low latency links into remote locations much easier than you used to be able to, that’s a massive change,” he said. “If customers can’t get fibre into the location they come to us and put in a microwave link. We’re fortunate as we have lots of roof space that we can use for satellite comms.”
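Rabbetts’ point about latency to a regional site can be checked with a back-of-envelope calculation. The sketch below is not from the article: it assumes light travels through fibre at roughly two-thirds of c and that the cable route is perhaps 1.5 times the straight-line distance, then estimates round-trip time for an illustrative London-to-Norwich hop of around 160 km.

```python
# Back-of-envelope fibre latency estimate (assumptions, not measurements).
SPEED_OF_LIGHT_KM_S = 299_792        # km/s in a vacuum
FIBRE_FACTOR = 2 / 3                 # light in fibre travels at roughly 2/3 c
ROUTE_OVERHEAD = 1.5                 # cable paths run longer than straight lines

def round_trip_ms(straight_line_km: float) -> float:
    """Rough round-trip time over fibre for a given straight-line distance."""
    route_km = straight_line_km * ROUTE_OVERHEAD
    one_way_s = route_km / (SPEED_OF_LIGHT_KM_S * FIBRE_FACTOR)
    return 2 * one_way_s * 1000

# Illustrative figure: London to Norwich is on the order of 160 km
print(f"{round_trip_ms(160):.2f} ms round trip")   # ~2.4 ms, before equipment delay
```

Even with generous routing overhead the propagation delay is a couple of milliseconds, which supports the claim that a regional location like Norwich is not inherently a latency problem.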
Changing Service-Based Models
A shift is under way towards data centre service-based models. Historically, once you moved into a data centre it was tough to move out: relocating data assets required military-style planning, downtime strategies and risk management. These days, however, data centre customers are demanding a better level of service. While there are other aspects to the data centre SLA (Service Level Agreement), these agreements have traditionally been in place for the data centre manager and even the real estate and facilities management. Today, data centre customer companies are looking to make those SLAs much more meaningful, covering IT requirements such as application availability and the impact on the broader business, as opposed to just the management of the data centre building.
“I think that it is true that data centres are being taken to task, and to a large degree I think it’s about time. I absolutely think that it’s right that data centre operators become more responsible,” said Rabbetts. “The whole shift is all about maintaining a relationship, and getting a better service in a more secure location. Customers are looking for environmental efficiency, they are looking for better service, they are looking for more security, lower comms costs; and all of those things are driving this shift towards choosing a more regional operator.”
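The difference between a facility-level SLA and an application-level SLA can be made concrete with a small sketch. Nothing below comes from the article: it assumes a hypothetical month in which the building never loses power or cooling but the hosted application is down for 90 minutes, and shows how the two availability figures diverge, so a building-only SLA can be met while the service the business actually cares about is not.

```python
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes in a 30-day month

def availability_pct(outage_minutes: float) -> float:
    """Availability over the month given total minutes of outage."""
    return 100 * (1 - outage_minutes / MINUTES_PER_MONTH)

def meets_sla(outage_minutes: float, target_pct: float) -> bool:
    """True if measured availability meets or exceeds the SLA target."""
    return availability_pct(outage_minutes) >= target_pct

# Hypothetical month: the facility stayed up, the application did not.
facility_outage_minutes = 0
application_outage_minutes = 90

print(meets_sla(facility_outage_minutes, 99.95))      # True  - facility SLA met
print(meets_sla(application_outage_minutes, 99.95))   # False - application SLA missed
```

Measuring the metric the customer cares about, rather than only the building, is the substance of the more meaningful SLAs described above.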