In most industries today, global competition thrives. Typically, within each market in each industry there are leaders, challengers and often multiple niche players, all of whom can eke out a good living. For example, in the non-alcoholic beverage business, market leaders Coca-Cola and Pepsi have competed vigorously for more than a century. Despite this, both continue to be very profitable global enterprises, each with a market value of more than US$100 billion.
But in online global markets, the picture is quite different. In social media, for example, one company, MySpace, was the clear global leader in 2006 until its rival Facebook gained momentum and overtook it in less than two years. Once ahead, Facebook went on to vanquish its rival and command almost complete control of the entire category, becoming the first and only US$100 billion player in social media.
Confronted by the power of huge online companies, governments should emulate them and use the internet to collaborate on far-reaching policy issues.
Global power is shifting from a society of nation states towards a society of global online companies. Many of the traditional roles of government such as the redistribution of wealth, the creation of public infrastructure and competition regulation are being tested by a growing constellation of planetary-sized online businesses such as Apple, Facebook, Alibaba, Tencent, Google and Amazon.
Even in intelligence, that most rarefied role of the public sector, government's potency and efficacy are now in question. In the latest 007 movie, Spectre, James Bond's rival, codenamed "C", says of his impressive new headquarters: "Her majesty's government wouldn't have the money to fund this – it's funded by a benefactor." And in the 2014 film Kingsman (intended as the start of a spy movie franchise), the central idea is a privately funded secret service, its founders having long given up on the idea that such an organisation could spring from, and be run effectively in, public hands.
Photo credit: Daniel Craig at the film premiere of "Spectre" (007) on the red carpet in Berlin, by Glyn Lowe.
While Kodak was an iconic and hugely successful global consumer products company, mentions of it in printed English-language books never outnumbered those of the US Government, even during its heyday in the 1970s.
Google, on the other hand, overtook the US Government in English-language book mentions in less than a decade, during its brief but spectacular ascent to become one of the world's most influential online giants.
Rocketing regions: the jobs of the future in gazelle headquarters
Do you know someone working in IT, media, finance or retail who has lost their job in the last few years? These industries, and many others, are already feeling the pinch of “online gravity” – a special set of economic forces and drivers that increasingly govern business in the age of the web.
Much has been made of the disappearance of jobs due to the digitisation, automation and networking of many traditional industries — most notably in traditional media. But careful global economic analysis has shown the internet has in fact added more jobs than it has destroyed.
According to McKinsey and Company, the internet has created 2.6 new jobs for every one it has destroyed. What’s becoming increasingly apparent, however, is that the location and setting of these new jobs is often not the same as those of the jobs that were lost.
Online business today is being influenced by a different set of economic forces than those that operate purely offline. I call these forces “online gravity” – not unlike the forces that led to the formation of our solar system, they favour the creation of planet-like superstructures with lots of white space in between. In a previous article (Why there’s no Pepsi® in cyberspace) I outlined this phenomenon, and here I examine how online gravity is reshaping the future of work.
The new technologies needed for dealing with big data
While much of the focus and discussion around the so-called “Big Data revolution” has been on the data itself and the exciting new applications it is enabling — from Google’s self-driving cars through to CSIRO and the University of Tasmania’s better information systems for oyster farmers — less attention has been paid to the underpinning technologies and the talent driving them.
At the heart of the Big Data movement is a range of next generation database technologies that enable data to be amassed and analysed on a scale and speed hitherto unseen.
Global online services such as Google, Amazon and Facebook that serve billions of people around the world in real time have been made possible due to new technologies that divide tasks and files across banks of thousands of distributed computers.
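The task- and file-splitting described above rests on a simple idea: deterministically mapping each key to one of many machines. Below is a minimal, illustrative sketch of hash-based sharding in Python; the shard count and user names are invented for the example, and real systems add replication and rebalancing on top of this.

```python
import hashlib

NUM_SHARDS = 8  # real systems spread data across thousands of machines

def shard_for(key: str, num_shards: int = NUM_SHARDS) -> int:
    """Map a key (e.g. a user ID or file name) to a shard deterministically."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % num_shards

# Every server computes the same answer for the same key, so any node can
# route a request without consulting a central index.
shards = {}
for user in ["alice", "bob", "carol", "dave"]:
    shards.setdefault(shard_for(user), []).append(user)

print(shards)  # users spread across shards
```

Because every node computes the same hash, requests can be routed without any central coordinator, which is one reason these architectures scale to billions of users.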
Online marketplaces, also known as platform companies, are sprouting up everywhere and redefining business in every industry. “The Uber of …” has become shorthand for tech startups looking to redefine the way everything is delivered, from legal services (Sydney-based LawPath) to package deliveries (San Francisco-based Doorman) to lottery services (Gibraltar-based Lottoland).
Paris-based Videdressing offers a global aftermarket for luxury branded fashion, and Los Angeles-based DogVacay is an Airbnb-style online marketplace for dog vacations that has created a network of more than 20,000 pet sitters. It has raised more than US$45 million from investors.
Major online marketplaces are attracting the attention of leading technology investors. Last year Sydney-based Expert360, the global marketplace for consulting talent, attracted A$4 million; Artsy, the New York-based global marketplace for artwork, closed US$25 million; and Shyp, the San Francisco-based on-demand shipping services marketplace, finalised another US$50 million in investment.
There are currently 5,723 early-stage private online marketplace companies listed on AngelList, the leading online marketplace for investors in early-stage technology startups. The average valuation is US$4.5 million, so that is about US$25 billion worth of early-stage startups in this area.
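As a quick back-of-envelope check of the US$25 billion figure, using the two numbers quoted above:

```python
# Back-of-envelope check of the figures quoted above.
companies = 5_723
avg_valuation = 4.5e6  # US$4.5 million

total = companies * avg_valuation
print(f"US${total / 1e9:.1f} billion")  # → US$25.8 billion
```

So the rounded "about US$25 billion" in the text is consistent with the quoted company count and average valuation.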
The Rise of Unconventional Data
Read the full version of this article as originally published in FastCompany here.
One of the lesser understood aspects of what you can do with massive stockpiles of data is the ability to use data that would traditionally have been overlooked or in some cases even considered rubbish. This whole new category of data is known as "exhaust" data—data generated as a by-product of some other process.
Much financial market data is a result of two parties agreeing on a price for the sale of an asset. The record of the price of the sale at that instant becomes a form of exhaust data. Not that long ago, this kind of data wasn’t of much interest, except to economic historians and regulators.
A massive moment-by-moment archive of the sale prices of shares and other securities is now key to many major banks and hedge funds as a "training ground" for their machine-learning algorithms. Their trading engines "learn" from that history, and this learning now powers much of the world’s trading.
Traditional transaction records, such as house sale price histories or share trading archives, are one form of time-series data, but many other less conventional measures are being collected and traded too.
There are also other categories of unconventional data that are not time-series-based. For example, network data outlines relationships and other signals from social networks, geospatial data lends itself to mapping, and survey data concerns itself with people’s viewpoints. Time series or longitudinal data is, however, the most common form and the easiest to integrate with other time-series data.
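To illustrate why time-series data is the easiest form to integrate: any two series keyed by timestamp can be combined with a simple join on their shared dates. A minimal sketch, with invented numbers:

```python
# Two unrelated daily series, keyed by date (all values are made up).
share_price = {"2016-03-01": 41.2, "2016-03-02": 41.9, "2016-03-03": 40.8}
foot_traffic = {"2016-03-02": 1180, "2016-03-03": 1304, "2016-03-04": 990}

# Integration is just a join on the common dates.
common = sorted(share_price.keys() & foot_traffic.keys())
merged = [(d, share_price[d], foot_traffic[d]) for d in common]

print(merged)  # [('2016-03-02', 41.9, 1180), ('2016-03-03', 40.8, 1304)]
```

Network or survey data, by contrast, has no such universal key, which is what makes longitudinal data so readily combinable.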
Location data from mobile phones means many companies now have people-movement data. [Photo: via The Conversation, Flickr user Andrew Hyde]
Consistent Longitudinal Unconventional Exhaust data, or CLUE data sets as I’m calling them, are many, varied and growing.
Say, for example, you are interested in the seasonal profitability of supermarkets over time. Foot traffic data may not be the cause of profitability, as more store visitors doesn’t necessarily translate directly into profit or even sales. But it may be statistically related to the volume of sales, and so may be one useful clue, just as body temperature is one good signal of a person’s overall well-being. And when combined with massive amounts of other signals using data analytics techniques, it can provide valuable new insights.
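The supermarket example can be made concrete with a correlation check. The sketch below computes the Pearson correlation between invented weekly foot-traffic and sales figures; the numbers are illustrative only, and a real analysis would combine many such signals rather than rely on one.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up weekly figures: foot traffic tracks sales loosely, not perfectly.
foot_traffic = [950, 1020, 1180, 1100, 1400, 1350]
sales = [18.1, 19.0, 21.5, 20.2, 25.9, 24.0]  # US$ thousands

r = pearson(foot_traffic, sales)
print(round(r, 2))  # → 0.99, a strong (but not causal) association
```

A high correlation like this makes foot traffic a useful clue about sales, but, as the text notes, a signal is not the same thing as a cause.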
RISE OF "QUANTAMENTAL" INVESTMENT FUNDS
Leading asset manager BlackRock, for example, is using satellite images of China taken every five minutes to better understand industrial activity and to give it an independent reading on reported data.
Traditionally, there have been two main types of actors in the financial world: traders (including high-frequency traders), who look to make money from massive volumes of many small transactions, and investors, who look to make money from a smaller number of larger bets over a longer time. Investors tend to care more about the underlying assets involved. In the case of company stocks, that usually means trying to understand the underlying or fundamental value of the company and its future prospects based on its sales, costs, assets, liabilities and so on.
Aerial photography from drones and new low-cost satellites is one key new source of unconventional data. [Photo: Flickr user BxHxTxCx]
A new type of fund is emerging that combines the speed and computational power of computer-based quants with the fundamental analysis used by investors: Quantamental. These funds use advanced machine learning combined with a huge variety of conventional and unconventional data sources to predict the fundamental value of assets and mismatches in the market.
Some of these new-style funds, including Two Sigma in New York and Winton Capital in London, have been spectacularly successful. Winton was founded in 1997 by David Harding, a physics graduate of Cambridge University. After less than two decades it ranks in the top 10 hedge funds worldwide, with US$33 billion in assets under advice and more than 400 people, many with PhDs in physics, math, and computer science. Not far behind, and with US$30 billion in assets, Two Sigma also glistens with top tech talent.
New ones are emerging too, including Taaffeite Capital Management, run by computational biologist and University of Melbourne alumnus Professor Desmond Lun. Understanding the complex data dynamics of many areas of natural science, including biology and ecology, is turning out to be excellent training for understanding financial market dynamics.
WEIRD DATA FOR ALL
But it’s not only the world’s top hedge funds that can use, or are using, alternative data. A number of startups are on a mission to democratize access to new sources. Michael Babineau, cofounder and CEO of Bay Area startup Second Measure, aims to offer a Bloomberg-terminal-like approach to consumer purchase data, transforming massive amounts of inscrutable text in card statements into more structured data and thus making it accessible and useful to a wide business and investor audience.
Other companies, like Mattermark in San Francisco and CB Insights in New York, are intelligence services that provide fascinating and valuable data insights into company "signals." These can be indicators and potential predictors of success—especially in the high-stakes game of technology venture capital investment.
Akin to Adrian Holovaty's pioneering work a decade ago mapping crime and many other statistics in Chicago online, Microburbs in Sydney provides a granular array of detailed data points on residential locations around Australia. It allows potential residents and investors to compare schooling, restaurants, and many other amenities in very specific neighborhoods within suburbs.
We Feel, designed by CSIRO researcher Cecile Paris, is an extraordinary data project that explores whether social media—specifically Twitter—can provide an accurate, real-time signal of the world’s emotional state.
We Feel is a research tool that creates "signals" data about the emotional mood of people around the world via their tweets. [Photo: via The Conversation, CSIRO]
WEIRD SMALL DATA HAS ITS BENEFITS AND ITS RISKS
More than simply pop-economics, Freakonomics (2005) showed how unusual yet good-quality data sources can be valuable in creating insights. Assiduous record-keeping of the accounts of an honesty-system cookie jar in an office revealed that people stole most during certain holidays (perhaps due to increased financial and mental stress at these times); access to a drug gang's bookkeeping accounts explained why many drug dealers live with their grandparents (they are too poor to move out); and massive public school records from Chicago showed parental attention to be a key factor in students' academic success.
Many of the examples in Freakonomics were based on small, quirky data samples. However, as many academics are aware, studies with small samples can present several problems. There’s the question of sampling—whether the sample is large enough to be robust, and whether it’s a random selection of the population the study aims to understand.
Then there’s the problem of errors. While one might expect fewer errors with smaller sample sizes, a recent meta-study of academic psychology papers found that half the papers tested showed significant data inconsistencies and errors. In a small number of cases this may be due to authors fudging the results; in others it may be due to transcription or other simple mistakes.
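The sampling concern above is easy to demonstrate: estimates drawn from small samples swing far more than estimates drawn from large ones. A small illustrative simulation (the population parameters and sample sizes are invented for the example):

```python
import random
import statistics

random.seed(42)
# A synthetic population: 100,000 values, mean 100, standard deviation 15.
population = [random.gauss(100, 15) for _ in range(100_000)]

def spread_of_sample_means(n, trials=500):
    """How much the sample mean varies across repeated samples of size n."""
    means = [statistics.mean(random.sample(population, n)) for _ in range(trials)]
    return statistics.stdev(means)

small, large = spread_of_sample_means(10), spread_of_sample_means(1000)
print(small > large)  # → True: small samples give far more variable estimates
```

This is the familiar result that the standard error of the mean shrinks roughly with the square root of the sample size, which is why small quirky data sets demand extra caution.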
WEIRD DATA IS GETTING EASIER TO FIND
More and more large-scale unconventional data collections are becoming readily available. There are three blast furnaces driving their proliferation:
the interaction furnace: our own growing interactions with the web and web services (e-commerce, webmail, social media, etc.).
the transaction furnace: the increasingly online ledger of commerce.
the automation furnace: an explosion of web-connected sensors.
While large data collections can’t help with avoiding fabrication, they can sometimes help with sample size and representation issues. When combined with machine learning they can:
provide accurate insights from incomplete, noisy, and even partially erroneous data.
offer associations, patterns and connections—blindly with no a priori assumptions.
help eliminate bias—by invoking multiple perspectives.