The rush to build artificial intelligence at any cost has been fueled by the Trump administration, which has also rolled back environmental protections.
Despite these headwinds, AI sustainability researcher Sasha Luccioni believes that customer demand for greater transparency in AI, from companies and individuals alike, is higher than ever.
Luccioni became a leading voice for transparency around AI emissions and environmental impacts during her four years at the AI company Hugging Face, where she pioneered a leaderboard documenting the energy efficiency of open source AI models. She has also been an outspoken critic of major AI companies that, she says, intentionally withhold energy and sustainability information from the public.
Now she’s co-founding the Sustainable AI Group, a new venture with former Salesforce chief sustainability officer Boris Gamazaychikov. They will focus on helping companies answer, among other things, “What levers can we pull to make customers a little less miserable?” Luccioni is also interested in studying the energy needs of different types of AI tools, such as translating speech into text or converting images into video, an area she says has not yet been well studied.
Luccioni sat down exclusively with WIRED to talk about the demand for sustainable AI and what exactly she wants to see from big tech companies.
This interview has been edited for length and clarity.
WIRED: I hear from a lot of people who are concerned about the environment and the use of AI, but I don’t hear from a lot of companies thinking about this. What specifically have you heard from people working with AI in their businesses, and what concerns them?
Sasha Luccioni: First, they’re under a lot of pressure from employees, pressure from the board, pressure from managers, like, “You have to quantify this.” Their employees say, “You’re forcing us to use Copilot, so how does that impact our ESG goals?”
For most companies, AI has become a core part of their business offerings. In that case, they need to understand the risks. They have to understand where their models run. They can’t keep using models when they don’t even know where the data centers are located or which grid they’re connected to. They have to know the supply chain emissions, the transportation emissions, all these different things.
It’s not about not using AI. I think we’re past that. It’s about choosing the right models, for example, or sending a signal that the power supply matters, that customers are willing to pay a little more for data centers powered by renewable energy. There are ways to do this, and it’s about finding believers in the right places.
And I also imagine that for global companies, the sustainability situation is very different than it is in the United States, right? The US government may not care about this, but other governments certainly do.
In Europe, there’s the EU AI Act. Sustainability has been a very big part of that from the beginning. They put a bunch of provisions in there, and now the first reporting initiatives are starting to emerge.
Even Asia is trying to be more transparent. The International Energy Agency has been putting out these reports [on AI and energy use]. I was talking to them, and they were saying that other countries realize the IEA gets its numbers from countries, and countries don’t have those numbers for data centers specifically. They can’t plan ahead, because they need the numbers to know, “Okay, this means we need X capacity in the next five years,” or whatever. [Some countries] have started putting pressure on data center builders.