When talking trust, go with your gut
It plays bigger role in first impressions
The last time I connected with a friend online, a funny thought occurred to me: It wasn’t really a friend at all. Instead, I was talking through and to a computer. This wasn’t a particularly profound insight. But when it comes to trust, how you communicate has become more important than who you are communicating with.
You may not realize it, but our guts play a bigger role than our brains in deciding whom to trust. When we encounter a stranger, our brains automatically decide whether we can trust him or her. It happens subconsciously and extremely quickly, in milliseconds, in fact, in a part of the brain called the amygdala.
From a survival perspective, it’s important to be able to identify who is trustworthy and who is not, and numerous experiments have shown that our first impressions about someone are usually right. Within milliseconds, we can determine whether people are friendly, intelligent or wealthy, among numerous other traits. If your genes have made it this far — and all of ours have — it’s because you’re very good at picking out whom to trust.
Our brains are less well-equipped to decide whether a company is trustworthy, but there’s evidence to suggest that generally, we give companies the benefit of the doubt, which holds right up until they do something we don’t like. It is even worse with technology companies, as we tend to look past the company and base our decisions on the usefulness of the product.
Enter Facebook, which is in some hot water these days. We have been willingly trusting Facebook for years, sharing gobs of personal data, knowing Facebook uses that data to target us with ads. We have become pretty comfortable with the arrangement: keep up with friends and enjoy funny cat videos with no cost except having to view the occasional sponsored post.
But recently, we’ve discovered Facebook data has been used in less benign ways. Facebook’s partners are after your mind, and they’ve learned that if you’re angry and fearful, you’re more engaged, since negative messages have a bigger impact on the brain than positive ones. Remember the amygdala — it is also the brain’s response center for fear and anger.
Facebook has the same knowledge about you as your friends, so it knows exactly what makes you angry or anxious. That gives companies who buy data from Facebook the ability to target certain segments of the population with negative messages. Add bots and trolls to the mix, plus intentionally fake content created by hostile foreign powers, and it’s clear how dangerous this can be.
We may not mind our data being used to sell us stuff, but no one is OK with data being used in nefarious ways. Hence a pending congressional investigation and many loud calls for a mass Facebook boycott. Facebook has lost our collective trust, and what it does next will be critical in determining whether it can gain it back. But the problem is bigger than Facebook: You can swap Facebook out for any technology that collects user data, and the problem will persist.
❚ What’s the solution? There is a simple way forward: Create absolute transparency. Tech providers should label every post, user or feed that has received data about their users. It’s as easy as one sentence presented alongside any post that leverages user data, such as “this product is recommended based on your clicks on our site” or “this news post was sent to you because you are friends with Vladimir Putin.”
A recent study out of Harvard Business School suggests this approach may even be beneficial to advertisers. The experiments demonstrated that when companies clearly communicate how they are using customer data, customers show increased engagement and actually make more purchases. So, the user gets transparency and the company sees increased engagement — a win-win for all but those errant data violators.
It is unlikely we will be able to curb the sale of personal data, but we can control how it is used. By mandating disclosures, all posts from sources that received data would forever be tagged with a disclaimer. Facebook and other data providers would lose money from bad actors, but they would gain the trust of the good ones.
But until the majority of companies join the transparency bandwagon, it pays to be vigilant. Recognize that while your brain can instantly spot a trustworthy face, it takes a lot more work to determine whether to trust an ad, an article or a website. If something appears sensational, misplaced or like it fits a bit too perfectly into your world view, it may just be the content was written solely to manipulate you. In this era of fake news and data misuse, your best bet is to approach all online content with a healthy dose of skepticism.
Jeff Stibel is vice chairman of Dun & Bradstreet, a partner of Bryant Stibel and an entrepreneur who also happens to be a brain scientist. He is the bestselling author of “Breakpoint” and “Wired for Thought.” Follow him on Twitter at @stibel.
The views and opinions expressed in this column are the author’s and do not necessarily reflect those of USA TODAY.