National Post (National Edition)

Canada’s great big data map

- Ian Macgregor is founder, president, CEO and chairman of North West Refining. To read the full essay and the entire series, visit www.cigionline.org/data.

Big data is usually associated with Facebook, not farming. But as Ian Macgregor writes, harnessing data from agriculture, mining, forestry and other primary industries could be the next big economic opportunity in Canada — if we do it right. This is part of a series of excerpts from essays commissioned by the Centre for International Governance Innovation.

David Thompson travelled approximately 90,000 kilometres across North America by foot and canoe in the late 1700s and early 1800s. He used the data he collected to create what he called the “great map” — and in the process unlocked the commercial potential of North America.

Big data is as important to Canada in the 21st century as Thompson’s topographical data was in the 19th century. According to IBM, in 2013, 2.5 exabytes of data was generated daily. This data comes from everywhere: sensors used to gather shopper information or industrial machinery performance; posts to social media sites; digital pictures and videos; purchase transactions; and cellphone global positioning system signals, to name a few.

It is obvious that the combination of big data with modern machine learning will unlock new commercial opportunities and significantly reduce the environmental impacts of Canada’s biggest industries, both through continued optimization and by identifying and solving new problems and challenges.

Canada has led the world in the primary industries: mining, energy, forestry and agriculture. But for the most part, the focus has been on digging, cutting and planting followed by selling after primary processing. The opportunity — the “big idea” — is to enable transformative change by collecting and cataloguing the right data for rapid application of machine learning and artificial intelligence (AI) to these industries.

The sensors to collect data — the expensive part — are already in place. But so far, efforts in big data in the primary industry space have been by dominant industrial players collecting proprietary data to improve their competitive position. For example, John Deere collects self-driving tractor information, and GE collects gas turbine maintenance information.

What is missing is a vision for public accessibility of big data (with suitable authorization access to ensure appropriate protections for security and privacy) to accelerate and unleash broader, continuous, cross-sectoral innovation. To produce broad benefits for Canadians, this data must be intelligently organized and stored, and made available on an open-source basis, like the libraries of old.

Canada is well positioned to take a leadership role in the creation of such a library, by bringing together the know-how it has fostered in its primary industries and its emerging leadership in machine learning and data science.

Canada has a historical example, unique in the world, of a successful open-source library for a primary industry.

The transfer of mineral rights from the federal government to Alberta after the discovery of the Turner Valley oil field south of Calgary in 1914 led to the establishment of what has now become the Alberta Energy Regulator (previously called the Energy Resources Conservation Board). One of the board’s first actions was to require public reporting of key attributes of production, geology and reservoir performance, which formed the basis for a comprehensive historical library on Alberta’s resources. Everything related to well performance and reservoir is recorded and becomes public after a one-year period following drilling.

An unintended, but beneficial, consequence of this early idea for public reporting and archived information was to lower barriers to entry for oil industry entrepreneurs. Free public access to what had traditionally been proprietary data spawned large-scale resource development in a competitive environment in Alberta that continues to this day.

The public model developed in Alberta has now been recognized as a key enabler of the rapid and continuing entrepreneurial development in Alberta, as well as a best-in-class model for petroleum resource regulation in other areas of the world.

Collecting the raw data and making it open source will allow new big data businesses to be built and sustained in Canada, enticed by the three essential ingredients for success: the right type of data; the subject matter experts who can help identify pressing problems; and a large domestic market.

If the organizational structure is developed to link young Canadian big data professionals with Canada’s deep industry expertise and support them to found new enterprises, primary industry in Canada and around the world can be revolutionized.

These young professionals can draw Canada’s next great map.

Concurrently, a big data entrepreneurial ecosystem must be developed that will encourage Canada’s best and brightest to pursue these data-driven opportunities at home, rather than leaving for opportunities south of the border. This ecosystem should provide managerial support for new data-driven businesses, together with small amounts of capital for new ideas that have merit.

The train is leaving the station, and we have to get moving — or a promising public economic engine will end up as merely a private profit motor.