Business Standard

Identifying bots

A new California law could have far-reaching consequences


On September 28, the state of California in the United States amended its Business & Professions Code to make it mandatory for automated accounts, or bots as they are known, to declare their non-human identity. Under the new law, bots cannot pretend to be real people in order to “incentivise a purchase or sale of goods or services in a commercial transaction, or to influence a vote in an election”. The disclosures must be “clear, conspicuous, and reasonably designed”, which means they cannot be hidden in the depths of an end-user licence agreement. They would have to be stated upfront in a bot’s Twitter bio or Facebook profile. This provision, which will become effective on July 1, 2019, could have far-reaching consequences. As things stand, it applies only to platforms with over 10 million unique visitors a month, but that threshold still covers the large social media platforms and e-commerce sites, as well as a host of financial services sites and utilities.

The new law has some obvious commercial applications in that it should stop automated spam calls and emails or, at the least, make it obvious when a client or potential customer is being harassed by a silicon entity. It should also put a stop to unethical marketing tactics, such as a product or service being “endorsed” by bot armies. This is common enough with Ponzi schemes. However, the law would not impede the legitimate use of bots and artificial intelligence agents by utilities or e-commerce sites, for example, to garner customer feedback, process queries, or conduct surveys. Nor would it delegitimise the use of bots to issue weather reports or earthquake warnings, or to catalogue search results.

The real utility of this law might lie in the realm of sanitising election campaign processes. One of the most common modes of amplifying fake news and manipulating opinion in a political context is bot-driven abuse. By setting up a bot army to “like” and “retweet” fake news, or to “like” and link to Facebook pages, it is possible to increase the range of propagation as well as to create an illusion of high engagement and credibility for fake news. This tactic is used across the world by many mainstream political parties as well as by some of the more radical terror groups. Bots have also been deployed extensively by “influencers” such as the bad actors who sought to sway the 2016 US presidential election and the Brexit referendum.

The scale of bot usage for such malicious purposes is huge. Twitter claims it removes close to 10 million bots per week for “potentially spammy behaviour” and is said to be considering labelling automated accounts in any case. A law like this could lend teeth to such efforts. It would also force Facebook and other social media platforms to take similar action to identify and label bots.

Obviously, a law that only applies to California doesn't have a great deal of traction. However, if this law is successful at reducing bot abuse in that state, it could turn into a model for others across the world. If it works in California, which is at the “cutting edge of the cutting edge” in terms of tech usage, it should work elsewhere.

From a more long-term perspective, such a law will soon be necessary anyway. Google has demonstrated how its Duplex AI can make restaurant reservations and airline bookings on voice calls by imitating human speech patterns, complete with pauses and hesitations. It would not be difficult to program an AI agent to imitate the voice of some well-known personality to carry out “spoofs”. As the scope for such “spoofs” increases, a law along these lines will become imperative.
