Things with which this World Cup is laden so far: Geopolitical intrigue and controversy. Messy soccer-world drama. Improbable first-half England goals.
And, of course: A slate of hyped-up artificial intelligence applications.
FIFA is touting a new AI-powered decision-making system that will use sensors in the ball itself to help determine calls. A vast network of facial-recognition-enabled cameras will track the crowd, using technology in the same family as that deployed by the controversial firm Clearview AI. AI-powered sensors in the stadiums will even help control the climate.
Which all sounds very cool. But it also raises the question — is all that really “AI”? And if it is, how is it possible that the same technology is powering such a disparate slate of applications, not to mention generating surreal art or prefab legal documents?
In one sense, the AI hype around this World Cup is just a marketing push by the host country and organization. Qatar prides itself on having used its (relatively) newfound natural-gas fortune to power it into the ranks of other wealthy Gulf states like Saudi Arabia and the UAE, and FIFA has aggressively played up its high-tech additions to the game.
This buzzy invocation of AI is the flip side of the anxiety that has been rising around the technology among industry watchdogs. Both ways of thinking about AI tend to conflate different issues into one big topic. And both point to a larger question: How is the public supposed to think about AI?
One reason that matters, a lot, right now: Politics has finally discovered AI. The Biden administration is attempting to nudge the field toward its preferred values and practices with its AI Bill of Rights. Europe is doing the same, but with statutory teeth. Governments are moving to regulate AI at a pace that’s slower than the technology itself is developing, but faster than the layperson’s understanding of it. That poses a political problem, as the marketing “wow factor” around AI increasingly obscures how it actually works and impacts our lives, leaving the public relatively clueless in the face of the regulatory decisions being made.
“If the yellow first-down line in football appeared today rather than in 1998, they’d say it was generated by AI,” said Ben Recht, a professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley who has written extensively on AI and machine learning. “AI has become nothing more than a marketing term to mean ‘things we do automatically with computers.'”
The history of what artificial intelligence actually is might be beyond the scope of this afternoon’s newsletter. The mathematics and computing historian Stephanie Dick described the term’s long semantic drift in a 2019 essay for the Harvard Data Science Review that focused on the field’s roots in computer-powered attempts to model human intelligence. As the field drifted away from that effort and toward powerful machine-learning systems like those that power DALL-E or GPT-3, the initial branding has stuck, obscuring those systems’ actual functions behind a fog of hype and sci-fi speculation about sentient machines or human-like “general artificial intelligence.”
We’ve now come to use AI as a basket term for, as computer scientist Louis Rosenberg put it when I talked to him, “processing massive datasets, finding patterns in those datasets, and then using those patterns to make predictions or draw insights.”
When you put it that way, AI’s application to a soccer ball or an AC system is (slightly) demystified. But that only scratches the surface of how those machine-learning systems are insinuating themselves into our lives. The policy discourse around AI right now focuses on much more high-stakes issues like systemic bias creeping into decision-making systems, or unchecked facial-recognition surveillance like that being deployed in Qatar right now, or data harvesting without consent.
Those are the kinds of issues that show up in the Biden administration’s new AI policy, but there’s still a massive gulf in understanding between policymakers and the public on the issue. A Stanford report written last year noted that “accurate scientific communication has not engaged a sufficiently broad range of publics in gaining a realistic understanding of AI’s limitations, strengths, social risks, and benefits,” and that “Given the historical boom/bust pattern in public support for AI, it is important that the AI community does not overhype specific approaches or products and create unrealistic expectations” — a dynamic likely not helped by the World Cup hype machine.
And while guidelines like the Biden administration’s might be useful, they’re still… just guidelines. There are still few, if any, laws in place to prevent the kind of AI-induced harms that might be perpetuated under the radar amid a general haze of curiosity and misunderstanding — which makes public understanding of the tech far more important than one might at first think.
“First, AI isn’t some form of magic and, second, that we aren’t on a predetermined path with regard to where the technology is headed and what we do with it,” Maximilian Gahntz, senior policy researcher at the Mozilla Foundation, told me. “As consumers, people get to vote with their feet if they have the necessary information to make informed choices about products and services that use AI. And as voters, people can push for tech companies and those deploying AI to be held accountable.”
Yet another gee-whiz use for AI: time travel.
Well, sort of. The writer and game designer Merritt K is currently crowdfunding a book called LAN Party, a coffee-table photo book with a goal that blends tech history and tech’s future: Using the image upscaler Gigapixel AI to restore and enhance photos of 1990s-era computer gaming sessions that brought gamers together to network their computers in person before the advent of online gaming.
The photos reveal a bygone age of computing that was decidedly different from the one we inhabit today: In addition to the cultural accouterments of 1990s-era nerd-dom, the photos reveal, as Merritt said in an interview with Ars Technica, “a sheer anarchy of cases, desktop layouts, and diverse approaches to building.”
“Some people might say, ‘Oh, this is just a bunch of idiots having fun.’ But that’s a lot of what culture, what human history is, though, idiots having fun,” she told Ars Technica as well — and the point is well taken given the extent to which gaming has driven some of the graphical developments of the 21st century, including powering the development of the metaverse. And now that AI is powerful enough to help archivists uncover the past as well, the implications go beyond just Merritt’s Clinton-era gaming history: Gigapixel has been used to enhance historical moments in film, as well as to restore and enhance pre-color snapshots.
Who’s afraid of Gary Gensler?
POLITICO’s Declan Harty poses the question today, reporting on how the ambitious SEC chair is using this near-apocalyptic moment for the crypto world to bolster his regulatory agenda. The FTX collapse has gravely endangered a bill backed by FTX founder Sam Bankman-Fried that would have put crypto under the purview of the Commodity Futures Trading Commission, a move largely seen as more favorable to the industry than shifting responsibility towards Gensler’s SEC.
The balance of power might soon swing back toward the SEC. An anonymous source told Declan that “the SEC has been encouraging crypto exchanges to register with the agency on a voluntary basis because officials want to avoid litigation with a large segment of the industry they believe is breaking the rules,” and that “the agency will likely start bringing enforcement actions against digital asset exchanges in 2023, given that it takes about two years to build a case.”
That ambition, however, comes with its own pushback. Declan reports that the SEC itself has been roiled by the enhanced workload and a push to return to the office, and the crypto industry isn’t likely to take the threat of more intense regulation lying down — Kristin Smith, president of the industry group the Blockchain Association, told us in a statement that the SEC’s reported plans are “nothing new” and pose “a threat to the United States’ lead in the global race to capitalize on the digital assets economy.”
Stay in touch with the whole team: Ben Schreckinger ([email protected]); Derek Robertson ([email protected]); Steve Heuser ([email protected]); and Benton Ives ([email protected]). Follow us @DigitalFuture on Twitter.
If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.