AI impact on Emissions and Implications for SCI #71
-
How about inference? That's the biggest culprit at this point.
-
Adding a link to previous GSF work, Evaluating SCI for foundation models at Inference Stage: https://github.com/Green-Software-Foundation/eval_sci_of_foundation_models/blob/main/Report/Final_Report.pdf
-
CodeCarbon does a great job of measuring the carbon emissions of AI workloads. Beyond that, there are good articles in Frontiers in Environmental Science on Artificial Intelligence Applications in Reduction of Carbon Emissions: Step Towards Sustainable Environment, and the article Reducing the carbon emissions of AI is a good summary of how to cut AI's carbon emissions.
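For anyone who wants to try this kind of measurement, here is a minimal sketch of wrapping a workload with CodeCarbon's EmissionsTracker; the run_workload function and project name are placeholders I made up, not part of the library.

```python
# Minimal sketch: measure the estimated emissions of a compute job
# with CodeCarbon. run_workload is a hypothetical stand-in for any
# training or inference workload.
from codecarbon import EmissionsTracker

def run_workload() -> None:
    # Stand-in for a real training or inference job.
    sum(i * i for i in range(10_000_000))

tracker = EmissionsTracker(project_name="sci-ai-demo")
tracker.start()
try:
    run_workload()
finally:
    emissions_kg = tracker.stop()  # estimated kg CO2-equivalent for the run

print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")
```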
-
I've written about the electricity consumption and carbon footprint of training vs. inference in the links below. For large-scale deployment of LLMs, the deployment/inference stage of the life cycle is by far the bigger culprit.
https://towardsdatascience.com/chatgpts-electricity-consumption-7873483feac4
https://towardsdatascience.com/environmental-impact-of-ubiquitous-generative-ai-9e061bac6800
-
This diagram shows the estimated carbon footprint of the hardware manufacturing, model training, and model deployment stages in a large-scale generative AI adoption scenario (large scale = 3.5B users, 30 daily queries each). Even at smaller user or query counts the picture is the same: deployment uses much more energy and thus has the larger footprint. Source: https://towardsdatascience.com/environmental-impact-of-ubiquitous-generative-ai-9e061bac6800
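To make the scale concrete, here is a back-of-the-envelope sketch using the scenario's user and query counts; the per-query and training energy values are illustrative assumptions, not figures taken from the linked article.

```python
# Back-of-the-envelope comparison of inference vs. training energy in the
# 3.5B-user scenario. WH_PER_QUERY and TRAINING_GWH are assumed values.
USERS = 3.5e9          # users in the large-scale scenario (from the comment)
QUERIES_PER_DAY = 30   # daily queries per user (from the comment)
WH_PER_QUERY = 3.0     # assumed inference energy per query (Wh), hypothetical
TRAINING_GWH = 1.3     # assumed one-time training energy (GWh), hypothetical

daily_inference_gwh = USERS * QUERIES_PER_DAY * WH_PER_QUERY / 1e9
print(f"Inference energy: ~{daily_inference_gwh:.0f} GWh per day")
print(f"One-time training cost matched in "
      f"{TRAINING_GWH / daily_inference_gwh * 24:.1f} hours of serving")
```

Under these assumptions, daily inference energy exceeds the entire one-time training budget within the first hour of serving, which is why deployment dominates the footprint at this scale.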
-
The rate of change is very high in this space, and there is a lot of research and progress toward drastic reductions in the compute, cost, and hence carbon footprint of training, inference, fine-tuning, and prompting. The growth of fine-tuning and prompting as workloads means that new applications can be developed without training from scratch, which drastically reduces emissions. However, Jevons' paradox applies: we are seeing a big increase in the number of possible applications enabled by the reductions in cost, as the sketch below illustrates.
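As a toy illustration of the Jevons-paradox point, every number below is hypothetical: a 10x efficiency gain can still increase total energy use if it unlocks 20x growth in usage.

```python
# Toy Jevons-paradox arithmetic; all numbers here are hypothetical.
energy_per_task = 1.0   # arbitrary units before the efficiency gain
tasks = 100             # workload volume before costs fell

efficiency_gain = 10    # each task now needs 1/10 the energy
usage_growth = 20       # cheaper tasks enable 20x more of them

before = energy_per_task * tasks
after = (energy_per_task / efficiency_gain) * (tasks * usage_growth)
print(f"Total energy before: {before:.0f}, after: {after:.0f}")  # 100 -> 200
```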
-
@atg-abhishek wrote this interesting article: https://thegradient.pub/sustainable-ai/
-
AI article overview provided by @atg-abhishek: https://thegradient.pub/sustainable-ai/
The WG agreed to close the issue.
-
Starting a discussion around the emissions implications of training large AI models.