Environment & Energy
Projected 10-30% Increase In Natural Gas Power Generation Because Muh AI Datacenters Must Be Fed!!! Oh, And Coal, Too
The explosion of data center development across the United States to serve the artificial intelligence industry is threatening decades of progress cutting greenhouse gas emissions, as utilities lay plans for scores of new gas power plants to meet soaring electricity demand.
As part of the U.S. pledge to cut its total greenhouse gas emissions in half by the end of the decade, compared to 2005 levels, President Joe Biden has vowed to eliminate all power grid emissions by 2035. But there are 220 new, gas-burning power plants in various stages of development nationwide, according to the market data firm Yes Energy. Most of those plants are targeted to come online before 2032. Each has a lifespan of 25 to 40 years, meaning most would not be fully paid off, much less shut down, before federal and state target dates for transitioning power grids to cleaner electricity.
The power sector was a bright spot in cutting emissions, which fell steadily over the last few decades even as electricity use grew. A big factor was the steep drop in coal burning. Coal powered more than half of U.S. electricity in 1990, according to the University of Maryland's Center for Global Sustainability. This year, it is less than 20 percent.
But even coal is making a comeback amid the data center boom. In several states, planned retirements of coal plants are already on hold. A Duke Energy executive told Bloomberg News that it will reexamine plans to burn less coal in Indiana if the Trump administration rescinds power plant emission rules. Data centers are behind two-thirds of the new demand for power in the Omaha region, where the Omaha Public Power District has delayed the closure of a major coal plant and is bringing online two large new gas plants. The utility said in a statement that it might purchase green energy credits, called carbon offsets, in the future as part of its overall plan to reach net-zero carbon.
https://www.washingtonpost.com/climate-environment/2024/11/19/ai-cop29-climate-data-centers/
highplainsdem
(52,367 posts)
fraud, for AI slop fake art and video flooding the internet, for AI slop fake books/articles and music, for AI-generated-and-spread misinformation and disinformation, for deepfakes including deepfake porn.
The sheer stupidity of the genAI boom, given all the harm it does, is almost unbelievable.
But it entertains and deceives the gullible, lets students think they can cheat their way through school, and lets greedy company owners and execs think they can dump most of their employees so there's more money for those at the top. It almost always brings out the worst in people.
But it's new tech, so we're supposed to welcome it and adapt to it.
hatrack
(60,934 posts)
It's Shiny!! It's New!! It's Thing!!
We will now proceed to kill ourselves for pixels. Literally.
NNadir
(34,664 posts)
...image processing of TEM for printed steels.
I'll let him know it's immoral and pornographic to conduct his work.
I'll also be sure to let all the folks I know working on protein dynamics simulations understand that their work is immoral and pornographic.
You learn something every day.
It is possible of course to generate electricity without releasing dangerous fossil fuel waste, but it's had rather dishonest bad press.
highplainsdem
(52,367 posts)
the harm other uses are causing.
NNadir
(34,664 posts)
...from abuse.
highplainsdem
(52,367 posts)
marketing of those tools mostly encourages those harmful uses.
I doubt valid, non-harmful scientific and medical uses of genAI account for more than a tiny percentage of the money, energy and water it uses.
And the world isn't going to benefit much from a few scientific advances if the population is dumbed down, culture is stolen so corporate-owned mimicry is sold back to us, most jobs are lost, surveillance is worsened, and the environment is seriously harmed.
NNadir
(34,664 posts)
...some support for this contention?
What percentage of computational data tools is used for frivolous purposes?
DU certainly uses data services. I'm sure there are people in the coming fascist government who would consider it unworthy of water and electricity.
highplainsdem
(52,367 posts)
a number have been posted here.
Again, I was not including scientific research among the harms done by genAI (though I will point out that, after all the hype about AlphaFold, it was later revealed that, given its hallucinations, traditional experimental checking still had to be done - https://biosciences.lbl.gov/2024/01/23/researchers-assess-alphafold-model-accuracy/ ).
The harms I mentioned to society are not "frivolous," and it's wrong to pretend they are, just as it was wrong for you to post above that I'd said your son's work was "immoral and pornographic" when I'd said nothing remotely like that.

And please don't try to conflate data center construction that tech companies have said they need for genAI with computing in general. Google's AI search, for instance, uses 10x the electricity of regular search (and often produces less reliable results, and diverts traffic from the websites the data is taken from, hurting the internet).
NNadir
(34,664 posts)
...often to support my scientific work, and given the richness of the literature and its vast scope, I certainly wish I had something like CCU-Llama, which I described in the science forum: CCU-Llama.
Who's going to monitor the "correct use" of servers? The Trump administration?
It's funny, because just the other day I was having a conversation with another scientist about whether we should always trust the sophisticated software we use, both online and in house, to interpret mass spec data. I'm so old, of course, that I remember sitting with a pencil and paper, calculating the masses of potential fragments, and then looking at the data to see if such a mass was there visually. I could spend a week or more with a complex compound that way. Now, in less than a few minutes, I can see all the PTMs and sequences from a very large protein, no trouble at all.

I almost never find a result that seems to be invalidated by experiment, unless it involves an isobaric species, and now there are ways around that as well. The public servers, like Uniprot, must do something very much like AI, although honestly I don't know how it works, just that it's fast as hell and I have direct experience with it being perfectly correct on multiple occasions, for example, finding the exact correct species associated with a highly conserved protein when I was blinded. And trust me, the protein in question is conserved across a wide range of species, from single-cell organisms to human beings.
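For anyone who never had to do that pencil-and-paper exercise, here is roughly what it amounts to, sketched in Python: computing the b- and y-ion masses a peptide should produce, which you'd then hunt for in the spectrum by eye. The residue table (standard monoisotopic masses, trimmed to the example) and the function name are illustrative, not taken from any particular package.

# Minimal sketch: computing b/y fragment ion masses for a peptide,
# the same arithmetic once done by hand against a printed spectrum.
# Residue masses are standard monoisotopic values; the peptide and
# function names are illustrative only.

PROTON = 1.007276  # mass of the proton charge carrier
WATER = 18.010565  # H2O retained at the C-terminus

# Monoisotopic residue masses (subset sufficient for the example)
RESIDUE_MASS = {
    "P": 97.052764, "E": 129.042593, "T": 101.047679,
    "I": 113.084064, "D": 115.026943,
}

def fragment_masses(peptide):
    """Return singly charged b- and y-ion m/z values for each cleavage site."""
    masses = [RESIDUE_MASS[aa] for aa in peptide]
    fragments = []
    for i in range(1, len(masses)):
        b = sum(masses[:i]) + PROTON            # N-terminal fragment
        y = sum(masses[i:]) + WATER + PROTON    # C-terminal fragment
        fragments.append((f"b{i}", round(b, 4), f"y{len(masses) - i}", round(y, 4)))
    return fragments

for b_name, b_mz, y_name, y_mz in fragment_masses("PEPTIDE"):
    print(f"{b_name}: {b_mz}   {y_name}: {y_mz}")

The software now does that matching for you, within the instrument's mass tolerance, across thousands of spectra per run.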
However, we can, and do, set false discovery rates in the use of the software, and that is designed to establish the error parameters. The very fact that there is a "false discovery rate" means that we have to be careful with the data; it is not determinative so much as (highly) suggestive.
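To make that concrete: the standard way mass spec search tools estimate a false discovery rate is the target-decoy approach, in which the search is also run against a reversed ("decoy") database, and the rate of decoy hits above a score cutoff estimates the error rate among the real hits. A minimal sketch, with scores and function names that are illustrative rather than from any specific search engine:

# Minimal sketch of target-decoy FDR estimation, as used by mass spec
# search software: decoy-database hits above a score cutoff estimate how
# many target hits at that cutoff are false. All numbers are illustrative.

def estimated_fdr(target_scores, decoy_scores, threshold):
    """FDR estimate at a cutoff: decoys passing / targets passing."""
    targets = sum(1 for s in target_scores if s >= threshold)
    decoys = sum(1 for s in decoy_scores if s >= threshold)
    return (decoys / targets) if targets else 0.0

def threshold_for_fdr(target_scores, decoy_scores, max_fdr=0.01):
    """Lowest score cutoff whose estimated FDR stays within max_fdr."""
    for cutoff in sorted(set(target_scores)):
        if estimated_fdr(target_scores, decoy_scores, cutoff) <= max_fdr:
            return cutoff
    return None  # no cutoff achieves the requested FDR

targets = [12.1, 9.8, 15.3, 7.2, 11.0, 14.6, 8.9, 13.5]
decoys = [6.1, 7.5, 5.9, 8.0, 6.8, 5.2, 7.9, 6.4]
print("score cutoff for 1% FDR:", threshold_for_fdr(targets, decoys))

In real data, a 1% FDR means roughly one accepted match in a hundred is expected to be wrong, which is exactly why the results are suggestive rather than determinative.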
Your link, by the way, refers to an article about a paper, this one: Terwilliger, T.C., Liebschner, D., Croll, T.I. et al. AlphaFold predictions are valuable hypotheses and accelerate but do not replace experimental structure determination. Nat Methods 21, 110-116 (2024). It certainly doesn't discount the value of AlphaFold, remarking that it often produces results that are remarkably similar to crystallized proteins, which, as the authors note, does not necessarily correspond to protein structure in vivo.
To wit, from the conclusion of the paper:
To me, this doesn't read like a dismissal of AlphaFold, but rather a wise cautionary suggestion as to how it should be used.
Of course, in the history of science, there have been many calculations that proved not to hold up against experimental data. Experiment always prevails over theory, or should anyway. One should always check calculated results against experiment, and in fact, that is what automated machine learning does: it compares theory with data to determine whether the theory holds and adjusts the theory accordingly. But yes, the output of this process needs human review.
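That compare-and-adjust loop is simple enough to write down. A minimal sketch, assuming the most basic case imaginable, fitting a one-parameter linear model to noisy data by gradient descent; it stands in for no particular ML system:

# Minimal sketch of the loop described above: propose a theory (a model
# parameter), compare its predictions with data, and adjust the theory
# in the direction that reduces the disagreement. Purely illustrative.

data_x = [1.0, 2.0, 3.0, 4.0]
data_y = [2.1, 3.9, 6.2, 7.8]  # roughly y = 2x, with experimental noise

slope = 0.0          # the "theory": y = slope * x
learning_rate = 0.01

for step in range(500):
    # Compare: how far are the theory's predictions from the data?
    errors = [slope * x - y for x, y in zip(data_x, data_y)]
    # Adjust: move the parameter against the gradient of the squared error
    gradient = sum(2 * e * x for e, x in zip(errors, data_x)) / len(data_x)
    slope -= learning_rate * gradient

print(f"fitted slope: {slope:.3f}")  # converges to about 2.0

The loop adjusts the theory (the slope) until it agrees with the data, but nothing in it can tell you whether a straight line was the right theory to begin with; that judgment is the human review.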
None of this means that there is something corrupt or illegitimate about data centers. My remark about my son's work was intended not to be "right" or "wrong," but rather to suggest that we ought to be careful with how we judge technologies. Sure, there are kids who produce term papers on ChatGPT. That doesn't mean that ChatGPT is evil. I have an assistant, not a scientist, who brings me text from it regularly, with my knowledge. It often fails the Turing test, but once you recognize that, it can help unblock writer's block. We never use its text directly in our reports; it suggests, not defines, a path.
OKIsItJustMe
(20,763 posts)
AI won't kill us all, but that doesn't make it trustworthy. Instead of getting distracted by future existential risks, AI ethics researcher Sasha Luccioni thinks we need to focus on the technology's current negative impacts, like emitting carbon, infringing copyrights and spreading biased information. She offers practical solutions to regulate our AI-filled future so it's inclusive and transparent.