Elon Musk’s xAI has a Tennessee-size pollution problem on its hands


Welcome to AI Decoded, Fast Company's weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.

Trouble for Elon Musk in Memphis

As xAI ramps up operations at its new data center in Memphis, environmental advocacy groups are crying foul about the AI startup's pollution problem. The Southern Environmental Law Center sent a letter this week to the Shelby County Health Department, requesting a probe into the Elon Musk-founded company's unpermitted use of natural-gas-burning turbines at its data center. The letter accuses xAI of installing "at least 18 gas combustion turbines" in recent months, with a combined capacity of about 100 megawatts, or "enough electricity to power around 50,000 homes."

xAI first announced the facility back in June, shortly after it raised $6 billion in Series B funding. Musk said in a post on X (formerly Twitter) last month that xAI was training its AI model—dubbed Grok—at the data center using 100,000 Nvidia H100 processors. While the company has been using the turbines to power the facility, it is in the process of transitioning to power from Memphis Light, Gas and Water (MLGW) and the Tennessee Valley Authority. In fact, CNBC reports that MLGW provides 50 megawatts of power to xAI, but the facility still requires another 100 megawatts—which is where the turbines come in. The letter requests that the county health department order xAI to "cease operations until they obtain a permit," much as the agency did with Planter's Gin in 2021.

Memphis already ranks poorly when it comes to air pollution: Shelby County received an "F" grade in the American Lung Association's smog rankings. For Musk, it's just the latest in a string of environmental controversies. Another recent CNBC report found that SpaceX discharged industrial wastewater at its Boca Chica, Texas, launch site without a permit.
Meanwhile, the billionaire's tunneling startup, The Boring Company, was fined by regulators in Texas for a wastewater violation of its own, and Tesla was ordered earlier this year to curtail toxic emissions from its Fremont, California, electric car factory.

Harrowing new numbers behind the deepfakes crisis

A new report from the nonprofit Thorn raises red flags about children's online safety in the AI era. Partnering with the research firm BSG, Thorn surveyed 1,040 minors between the ages of 9 and 17 over a monthlong period in 2023, asking questions about "harmful online experiences" and explicit materials. The researchers found that one in 10 minors knows a friend or classmate who's used AI tools to create nudes of other children. "Online spaces provide valuable opportunities for young people to explore and connect, but they also pose very real risks," the report states. "For many young people, potentially harmful online experiences have become an inevitable component of their digital lives."

As 404 Media points out, Thorn has come under fire in the past for its at-times alarmist framing of issues related to minors. In the report, the organization says one in seven minors has shared their own "self-generated child sexual abuse material"—which, while illegal, can refer to consensual images sent between minors. Such instances may be cause for concern for parents, but they present a far different scenario from one in which a nude deepfake is created and disseminated without a person's knowledge. (Separately, Thorn has been pilloried by privacy experts for giving police a tool that collects sex workers' ads into a database; and Thorn founder Ashton Kutcher resigned as board chair last year after sparking outrage for his support of convicted rapist Danny Masterson, Kutcher's That '70s Show costar.)
Still, the report is useful for gauging how widespread AI tools have become among minors, and it should certainly raise alarm bells that 10% of respondents knew someone who used AI tools to create nonconsensual images of peers. The report will likely put further pressure on Silicon Valley to better police child sexual abuse material (CSAM). In April, tech giants including Meta, Google, and OpenAI signed onto standards created by Thorn, promising to develop better watermarking features and to exclude CSAM from any training datasets for AI models. But that pledge didn't clarify how exactly companies will enforce those standards, and in fact a number of the pledge signatories continue to enable AI-generated harmful images. Meanwhile, Congress continues to weigh a number of bills designed to address explicit deepfakes, as authorities crack down on adults and minors alike who are caught making and spreading the content.

A new wearable geared toward hustle freaks

There's yet another new AI-powered gadget in town, but this one sticks to a narrow lane: making audio recording and note-taking less cumbersome. Plaud announced this week the upcoming release of NotePin, a $169 pill-shaped gadget that can record, transcribe, and summarize your notes and conversations. A free starter plan allows for up to 300 minutes of recorded audio per month; for anyone looking to record more, there's a $79 annual plan that adds another 1,200 minutes per month plus features like speaker labeling in transcriptions and audio importing. The NotePin, which will be released in September, is aimed at hustle junkies who want to streamline the note-taking process on calls and during meetings. The pitch is simple: All you have to do is turn on the device (which can be worn as a pendant, pinned or clipped to a shirt, or strapped to your wrist), and check back later for AI-generated takeaways.
Plaud already has another voice recorder, the GPT-4o-powered Note, that snaps onto a handset and performs basically the same tasks as its new NotePin. The pitch with NotePin, at least according to cofounder and CEO Nathan Hsu, is portability and convenience: "It's your always-ready business partner, handling mundane, daily tasks so you can concentrate on what truly drives value in your life and career," he says in a press release. (A cynic might point out that a smartphone can already record audio free of charge, but the transcription and summary process is admittedly clunkier.)

On a macro level, the NotePin's release suggests wearables might prove more successful as productivity assistants (see also: Limitless, which functions as a memory aid) than as all-purpose companions that seek to compete against smartphones. And looking ahead, Hsu envisions his company's products as key cogs in the eventual creation of digital twins. "If it always listens to you, it learns you, and over time it gets to know your personality, your preferences, your interactions," he says. "Someday, you're going to be able to utilize AI to reproduce yourself—create this real digital twin . . . it's going to be grand." That is indeed a grand—and potentially concerning—outcome, but for now Hsu's chief concern is getting people to buy his latest toy.

More AI coverage from Fast Company:

OpenAI, Anthropic, and Meta: Tracking the lawsuits filed against the major AI companies
This startup uses AI to automatically trim videos—even where there's no dialogue
How Utah and Texas became the face of political deepfakes ahead of the 2024 election
Will Google's $250 million deal with California really help journalism?

Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.

