Forbes

Teeing Up A Generation

Extraordinary Stories Of Change Championed Through Golf

- By Mallory Gafas | Illustrations By Irene Laschi

The golf course is known as a place where power brokers close deals. What if it could also open doors to the next generation of diverse leaders?

That’s the idea at the heart of First Tee, a nonprofit youth development organization that has been empowering young minds across the world for more than 25 years. Its unique mission provides educational programs that build character and instill life-enhancing values through the game of golf.

Below, three alumni share stories of how First Tee not only influenced their younger years but also paved the way for their future careers, equipped them with powerful life and leadership skills and shaped their passions.

J.P. Ray was 8 years old when his father first enrolled him in First Tee - Tulsa. It quickly became a passion that continued long after Ray graduated from the youth participant program. In college, he took advantage of several national alumni opportunities—one of which was a First Tee scholarship to attend the PwC Executive Forum hosted annually at THE PLAYERS Championship. It was there that Ray says he was inspired to pursue a law degree.

“To be able to network with different people and hear their experiences, hear how they got into their profession—it just kind of got the ball rolling on, ‘Okay, well what do I want to do with my career?’”

Speaking to Forbes just after passing his bar exam, Ray, now 25, says he believes First Tee plays a critical role in removing barriers to entry in golf. “You don’t always see people of color on golf courses,” he says. “First Tee has done a really good job of making golf more accessible, less expensive—and just providing that opportunity for people to get involved.”

Taryn Yee says she was an introverted child. But when she first joined First Tee - Greater Sacramento at age 10, a more outgoing personality emerged. “Just trusting myself, trusting my golf swing… and interacting with a lot of people as well made me a much more confident person.”

Now 30, Yee says the social skills and relationships developed through First Tee have fueled her growth personally as well as professionally—from playing competitive golf in college to earning a bachelor’s degree in business to becoming a successful program manager. “As I started getting older… it wasn’t just about golf anymore,” she says. “It was really the people.”

Yee currently volunteers with First Tee chapters in both Sacramento and San Francisco—and she is excited about new opportunities to connect. First Tee’s multi-year reinvestment in technology advancements will soon offer alumni and participants fresh ways to interact and engage.

Brittany Woo’s 8-year-old self would likely be surprised to learn that First Tee would become a lifelong passion. “I was not about carrying my bag on the golf course,” she remembers of her first season. “Then it evolved into, okay—well First Tee hosts all these events for kids like us, and we get to travel all across the country and meet new people and try different things. That sounds pretty cool!”

Woo continued her participation through high school and then began volunteer coaching in college. She now serves as full-time senior director of programs for First Tee - Greater Richmond. “It’s very humbling to see [the kids] grow up as their own person, and the coaches have a piece in that.”

Woo believes more mentors can create more relatable role models for First Tee participants, so she is currently helping the organization actively recruit volunteer coaches of all experience levels to serve on and off the golf course. “A diverse group of coaches will help us connect with [our] diverse group of participants,” she says.

PIONEERING CHANGE THROUGH GOLF

For over 25 years, First Tee has been building strength of character through golf to empower kids across a lifetime with valuable education, life, leadership and golf opportunities. Visit www.firsttee.org to learn more about First Tee’s game-changing mission.

deploying across Microsoft’s Office software suite. RBC Capital Markets analyst Rishi Jaluria, who covers Microsoft, imagines a near-future “game-changer” world in which workers convert Word documents into elegant PowerPoint presentations at the push of a button.

For years, the big data question for large enterprises has been how to turn hordes of data into revenue-generating insights, says FPV Ventures cofounder Pegah Ebrahimi, the former CIO of Morgan Stanley’s investment banking unit. Now, employees ask how they can deploy AI tools to analyze video catalogs, or embed chatbots into their own products. “A lot of them have been doing that exercise in the last couple of months and have come to the conclusion that yes, it’s interesting, and there are places we could use it,” she says.

The big debate around this new AI era surrounds yet another abbreviation: “AGI,” or artificial general intelligence—a conscious, self-teaching system that could theoretically outgrow human control. Helping to develop such technology safely remains the core mission at OpenAI, its executives say. “The most important question is not going to be how to make technical progress, it’s going to be what values are in there,” Brockman says. At Stability, Mostaque scoffs at the objective as misguided: “I don’t care about AGI . . . . If you want to do AGI, you can go work for OpenAI. If you want to get stuff that goes out to people, you come to us.”

OpenAI supporters like billionaire Reid Hoffman, who donated to its nonprofit through his charitable foundation, claim that reaching an AGI would be a bonus, not a requirement for global benefit. Altman admits he’s been “reflecting a great deal” on whether we will recognize AGI should it arrive. He currently believes “it’s not going to be a crystal-clear moment; it’s going to be a much more gradual transition.” But researchers warn that the potential impact of AI models needs to be debated now, given that once released, they can’t be taken back. “It’s like an invasive species,” says Aviv Ovadya, a researcher at Harvard’s Center for Internet and Society. “We will need policymaking at the speed of technology.”

In the nearer term, these models, and the high-flying companies behind them, face pressing questions about the ethics of their creations. OpenAI and other players use third-party vendors to label some of their data and train their models on what’s out of bounds, forfeiting some control over their creations. A recent review of hundreds of job descriptions written using ChatGPT by Kieran Snyder, CEO of software maker Textio, found that the more tailored the prompt, the more compelling the AI output—and the more potentially biased. OpenAI’s guardrails know to keep out explicitly sexist or racist terms. But discrimination by age, disability or religion slipped through. “It’s hard to write editorial rules that filter out the numerous ways people are bigoted,” she says.

Copyright laws are another battleground. Microsoft and OpenAI are the

“EVERY TIME WE’VE GONE TO MICROSOFT TO SAY ‘HEY, WE NEED TO DO THIS WEIRD THING THAT YOU’RE PROBABLY GOING TO HATE,’ THEY HAVE SAID, ‘THAT’S AWESOME.’”

target of a class action lawsuit alleging “piracy” of programmers’ code. (Both companies recently filed motions to dismiss the claims and declined further comment.) Stability was recently sued by Getty Images, which claims Stable Diffusion was illegally trained on millions of its proprietary photos. A company spokesperson said it was still reviewing the documents.

Even more dangerous are bad actors who could deliberately use generative AI to disseminate disinformation—say, photorealistic videos of a violent riot that never actually happened. “Trusting information is part of the foundation of democracy,” says Fei-Fei Li, the codirector of Stanford’s Institute for Human-Centered Artificial Intelligence. “That will be profoundly impacted.”

Who will have to answer such questions depends in part on how the fast-growing AI market takes shape. “In the ’90s we had AltaVista, Infoseek and about 10 other companies that were like it, and you could feel in the moment like some or one of any of those were going to the moon,” says Benchmark partner Eric Vishria. “Now they’re all gone.”

Microsoft’s investment in OpenAI, which comes with a majority profit-sharing agreement until it has made back its investment, plus a capped share of additional profits, is unprecedented, including its promise for OpenAI to eventually return to nonprofit control. (Altman and Brockman, respectively, call that a “safety override” and “automatic circuit breaker” to keep OpenAI from concentrating power if it gets too big.) Some industry observers more wryly see the deal as a near-acquisition, or at least a rental, that benefits Nadella the most. “Every time we’ve gone to them to say, ‘Hey, we need to do this weird thing that you’re probably going to hate,’ they’ve said, ‘That’s awesome,’ ” Altman says of the arrangement. (Microsoft declined to discuss the deal’s terms.)

There’s another under-discussed aspect of this deal: OpenAI could gain access to vast new stores of data from Microsoft’s Office suite—crucial as AI models mine the internet’s available documents to exhaustion. Google, of course, already has such a treasure trove. Its massive AI divisions have worked with it for years, mostly to protect its own businesses. A bevy of fast-tracked AI releases are now expected for 2023.

At Stability, Mostaque takes great pains to explain his business as focused on the creative industry, more like Disney and Netflix—above all, staying out of Google’s way. “They’ve got more GPUs than you, they’ve got more talent than you, they’ve got more data than you,” he says. But Mostaque has made his own potential Faustian bargain, with Amazon. A partnership saw the cloud leader provide Stability with more than 4,000 Nvidia AI chips to assemble one of the world’s largest supercomputers. Mostaque says that a year ago, Stability had just 32 such GPUs.

“They cut us an incredibly attractive deal,” he says. For good reason: The synergy provides an obvious cash cow from cloud computing run on Amazon Web Services and could generate content for its Studios entertainment arm. But beyond that, Amazon’s play is an open question.

Don’t forget Apple and Facebook parent Meta, which have large AI units, too. Apple recently released an update that integrates Stable Diffusion directly into its latest operating systems. At Meta, chief AI scientist Yann LeCun griped to reporters, and over Twitter, about the ChatGPT buzz. Then there are the many startups looking to build all around, and against, OpenAI, Stability and their kind. Clem Delangue, the 34-year-old CEO of Hugging Face, which hosts the Stable Diffusion open-source model, envisions a Rebel Alliance of sorts, a diverse AI ecosystem less dependent on any Big Tech player. Otherwise, Delangue argues, the costs of such models lack transparency and will rely on Big Tech subsidies to remain viable. “It’s cloud money laundering,” he says.

Existing startup players like Jasper, an AI-based copywriter that built tools on top of GPT and generated an estimated $75 million in revenue last year, are scrambling to keep above the wave. The company has already refocused away from individual users, some of whom were paying $100 or more a month for features now covered roughly by ChatGPT, with OpenAI’s own planned first-party applications yet to arrive. “This stuff gets broken through so quickly, it’s like nobody has an edge,” says CEO Dave Rogenmoser.

That applies to OpenAI, too, the biggest prize and the biggest target in the bunch. In January, Anthropic, a startup founded by former OpenAI researchers (backed most recently by Sam Bankman-Fried of bankrupt firm FTX), released its own chatbot, called Claude. The bot holds its own against ChatGPT in many respects, despite having been developed at a fraction of the cost, says Alexandr Wang, CEO of Scale AI, an infrastructure software provider to both. “It [raises] the question: What are the moats? I don’t think there’s a clear answer.”

At OpenAI, Brockman points to a clause in the company’s nonprofit charter that promises, should another company be close to reaching artificial general intelligence, to shut down OpenAI’s work and merge it into the competing project. “I haven’t seen anyone else adopt that,” he says. Altman, too, is unperturbed by horse-race details. Can ChatGPT beat Google search? “People are totally missing the opportunity if you’re focused on yesterday’s news,” he muses. “I’m much more interested in thinking about what comes way beyond.”

ABOVE ALL, STAY OUT OF GOOGLE’S WAY: “THEY’VE GOT MORE GPUS THAN YOU, THEY’VE GOT MORE TALENT THAN YOU, THEY’VE GOT MORE DATA THAN YOU.”

Toil and Trouble
The “dot-AI bubble” is coming, says Stability AI founder and CEO Emad Mostaque. Unlike with past bubbles, though, the business impact is clear. “In boardrooms, what’s the number one topic? Generative AI.”
