Scientific synthesis is resource-intensive. Those who have contributed to IPCC or similar processes know this: the coordination, the iteration, the careful weighing of heterogeneous evidence. It is demanding work, and the pool of experts willing to undertake it is finite. Where might AI meaningfully assist without undermining what makes assessment trustworthy?

We explored this question in a collaboration between Google DeepMind, the PIK - Potsdam Institute for Climate Impact Research, ETH Zürich, University of Zurich, University of Miami, Imperial College London, and Stripe. Thirteen climate scientists. The test case: AMOC stability, a domain with genuinely conflicting lines of evidence, where synthesis requires more than literature aggregation.

AI proved useful for structure, retrieval, and consistency checking. It helped reduce coordination overhead. The efficiency gain was real: 79 papers synthesised through 104 revision cycles in under 47 person-hours. But the intellectual core remained human. Experts contributed 58% of the final content. Not stylistic refinement, but substantive judgment: reweighting conflicting evidence, inserting caveats, revising confidence levels where the AI had interpolated consensus that wasn't there.

AI provided breadth. Humans provided judgment. This reflects a deeper distinction: synthesis is not review. An IPCC chapter does not summarise papers. It weighs conflicting evidence and produces calibrated uncertainty statements that exist in no single source. That operation remained human.

Our work serves as a starting point, not a conclusion. Perspectives from colleagues welcome.

Preprint: https://lnkd.in/dwsck3Gd

Markus Leippold, Anna Koivuniemi

#ScientificAssessment #ClimateScience #AIforScience
AI in Environmental Monitoring
-
𝗔𝗜 𝗳𝗼𝗿 𝗚𝗢𝗢𝗗: 𝗡𝗔𝗦𝗔 𝗮𝗻𝗱 𝗜𝗕𝗠 𝗹𝗮𝘂𝗻𝗰𝗵 𝗼𝗽𝗲𝗻-𝘀𝗼𝘂𝗿𝗰𝗲 𝗔𝗜 𝗳𝗼𝘂𝗻𝗱𝗮𝘁𝗶𝗼𝗻 𝗺𝗼𝗱𝗲𝗹 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝗲𝗳𝗳𝗶𝗰𝗶𝗲𝗻𝘁 𝘄𝗲𝗮𝘁𝗵𝗲𝗿 𝗮𝗻𝗱 𝗰𝗹𝗶𝗺𝗮𝘁𝗲 𝗳𝗼𝗿𝗲𝗰𝗮𝘀𝘁𝗶𝗻𝗴! 🌍 (𝗧𝗵𝗶𝘀 𝗶𝘀 𝘄𝗵𝗮𝘁 𝘀𝗵𝗼𝘂𝗹𝗱 𝗴𝗲𝘁 𝗺𝗼𝗿𝗲 𝘀𝗽𝗼𝘁𝗹𝗶𝗴𝗵𝘁 𝗽𝗹𝗲𝗮𝘀𝗲 𝗮𝗻𝗱 𝗡𝗢𝗧 𝘁𝗵𝗲 𝗻𝗲𝘅𝘁 𝗖𝗵𝗮𝘁𝗚𝗣𝗧 𝗪𝗿𝗮𝗽𝗽𝗲𝗿!)

In collaboration with NASA, IBM just launched Prithvi WxC, an open-source, general-purpose AI model for weather and climate-related applications. And the truly remarkable part is that this model can run on a desktop computer.

𝗛𝗲𝗿𝗲'𝘀 𝘄𝗵𝗮𝘁 𝘆𝗼𝘂 𝗻𝗲𝗲𝗱 𝘁𝗼 𝗸𝗻𝗼𝘄: ⬇️

→ The 2.3-billion-parameter Prithvi WxC model can create six-hour-ahead forecasts as a “zero-shot” skill, meaning it requires no tuning and runs on readily available data.
→ The model is designed to be customized for a variety of weather applications, from predicting local rainfall to tracking hurricanes or improving global climate simulations.
→ It was trained on 40 years of NASA’s MERRA-2 data and can now be quickly tuned for specific use cases. And unlike traditional climate models that require massive supercomputers, this one operates on a desktop. Its uniqueness lies in the ability to generalize from a small, high-quality sample of weather data to entire global forecasts.
→ The model outperforms traditional numerical weather prediction methods in both accuracy and speed, producing global forecasts up to 10 days in advance within minutes instead of hours.
→ It has immense potential for various applications, from downscaling high-resolution climate data to improving hurricane forecasts and capturing gravity waves. It could also help estimate the extent of past floods and infer the intensity of past wildfires from burn scars.

It will be exciting to see what downstream apps and use cases emerge. What’s clear is that this AI foundation model joins a growing family of open-source tools designed to make NASA’s vast collection of satellite, geospatial, and Earth observation data faster and easier to analyze. With decades of observations, NASA holds a wealth of data, but its accessibility has been limited until recently. This model is a big step toward democratizing data and making it more accessible to all.

𝗔𝗻𝗱 𝘁𝗵𝗶𝘀 𝗶𝘀 𝘆𝗲𝘁 𝗮𝗻𝗼𝘁𝗵𝗲𝗿 𝗽𝗿𝗼𝗼𝗳 𝘁𝗵𝗮𝘁 𝘁𝗵𝗲 𝗳𝘂𝘁𝘂𝗿𝗲 𝗼𝗳 𝗔𝗜 𝗶𝘀 𝗼𝗽𝗲𝗻, 𝗱𝗲𝗰𝗲𝗻𝘁𝗿𝗮𝗹𝗶𝘇𝗲𝗱, 𝗮𝗻𝗱 𝗿𝘂𝗻𝗻𝗶𝗻𝗴 𝗮𝘁 𝘁𝗵𝗲 𝗲𝗱𝗴𝗲. 🌍

🔗 Resources:
Download the models from the Hugging Face repository: https://lnkd.in/gp2zmkSq
Blog post: https://ibm.co/3TDul9a
Research paper: https://ibm.co/3TAILXG

#AI #ClimateScience #WeatherForecasting #OpenSource #NASA #IBMResearch
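For anyone who wants to experiment right away: fetching the weights is a one-liner with the huggingface_hub client. A minimal sketch; the repository id below is an assumption based on the IBM-NASA Geospatial organisation on Hugging Face, so verify it against the link above before relying on it.

```python
# Minimal sketch: download the Prithvi WxC weights for local experimentation.
# NOTE: the repo id is an assumption -- check the Hugging Face link in the
# post for the canonical repository name.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="ibm-nasa-geospatial/Prithvi-WxC-1.0-2300M")
print(f"Model files downloaded to: {local_dir}")
```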
-
AI is completely rewriting the rules of weather forecasting, and this video from NVIDIA is a perfect example of how fast things are moving. In just under 5 minutes, the video demonstrates Earth-2, a platform that lets you run global weather forecasts in mere seconds using just a few lines of Python. You can seamlessly switch between data sources (like ERA5, GFS, IFS) and even swap out entire AI models (like FourCastNet, GraphCast, or Aurora) with a single line of code.

But NVIDIA isn’t alone. We are witnessing an arms race among big tech to solve weather prediction:

- Google DeepMind has GraphCast and NeuralGCM, which have already outperformed gold-standard physical models on many metrics.
- Microsoft released Aurora, a foundation model trained on over a million hours of data, claiming to be 5,000x faster than traditional numerical systems.
- IBM & NASA recently open-sourced Prithvi, a "geospatial foundation model" designed not just for weather but to be fine-tuned for specific climate applications.
- Huawei has Pangu-Weather, which famously predicted the path of a typhoon more accurately than traditional methods.

Why is this happening?

- Compute: Traditional Numerical Weather Prediction (NWP) solves complex physics equations requiring massive supercomputers. AI models, once trained, infer results in seconds on a few GPUs.
- Ensemble forecasting: Because they are so cheap to run, we can generate thousands of scenarios (ensembles) instead of just a few. This is a game changer for predicting low-probability extreme weather events.
- Data fusion: These models are proving incredibly good at learning patterns from historical data that pure physics equations might miss.

For geospatial practice, this is a big change. Weather is moving from a static dataset we download to a dynamic capability we run. You no longer need a supercomputer to generate high-resolution forecasts; you just need a GPU and a Python script. We may soon see fine-tuned weather models for specific geospatial use cases like hyper-local wind for drones, precise precipitation for agriculture, or cloud cover for satellite tasking. The latency between data in and forecast out is shrinking toward zero, enabling true real-time geospatial intelligence.

Have you tried any of these models? What are your thoughts?

🌎 I'm Matt Forrest and I talk about modern GIS, earth observation, AI, and how geospatial is changing.
📬 Want more like this? Join 12k+ others learning from my daily newsletter → forrest.nyc
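If you want to try the "few lines of Python" claim yourself, NVIDIA's open-source Earth2Studio package is the usual entry point. A sketch based on its public quickstart; module paths, model names (SFNO is used here as the prognostic model), and signatures may differ between versions, so treat this as approximate and check the docs.

```python
# Approximate sketch based on Earth2Studio's public quickstart; verify
# against the documentation for your installed version.
from earth2studio.data import GFS          # initial-condition source (swappable)
from earth2studio.io import ZarrBackend    # output store
from earth2studio.models.px import SFNO    # prognostic AI model (swappable)
import earth2studio.run as run

model = SFNO.load_model(SFNO.load_default_package())  # fetch pretrained weights
ds = GFS()
io = ZarrBackend()

# Eight autoregressive steps (~48 h at 6 h per step) from one start time.
io = run.deterministic(["2024-01-01"], 8, model, ds, io)
print(io["t2m"].shape)  # 2 m temperature forecast array
```

The model-swapping the video shows is exactly the `SFNO` import line: replacing it with another prognostic class changes the forecast engine while the rest of the pipeline stays the same.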
-
How AI is changing storm response in the U.S., technically. Have you experienced it?

Extreme weather response is no longer driven by single forecasts. It’s driven by ensembles + AI acceleration + real-time data fusion. Here’s what’s happening under the hood:

AI-accelerated Numerical Weather Prediction (NWP)
Deep learning models (graph neural nets, transformers) are trained on decades of reanalysis data to approximate full physics-based solvers. Result:
• Inference in seconds instead of hours
• Rapid ensemble generation (hundreds of scenarios, not dozens)
This allows forecasters to update storm tracks and intensity continuously, not on fixed cycles.

Multi-modal data fusion
AI ingests:
• Satellite imagery (GOES)
• Doppler radar volumes
• Ocean buoys & atmospheric soundings
• Ground IoT sensors
• Historical climatology
Models correlate spatio-temporal patterns across modalities, something classical models struggle with at scale.

Severe weather nowcasting
Computer vision models detect:
• Convective initiation
• Tornadic signatures
• Rapid intensification signals
Lead times improve by 30–60 minutes for fast-forming events, which is operationally massive for emergency management.

Probabilistic forecasting, not single answers
ML-driven ensembles output probability distributions, not deterministic paths (a toy example follows this post):
• Flood depth likelihoods
• Wind gust exceedance
• Ice accumulation risk
This feeds directly into risk-based decision systems.

Infrastructure impact modeling
Utilities combine AI weather outputs with:
• Grid topology
• Asset age & failure history
• Load forecasts
This enables pre-storm optimization:
• Crew pre-positioning
• Targeted grid isolation
• Faster restoration paths

Operational decision intelligence
AI systems now bridge forecast → action:
• When to evacuate
• Where to stage responders
• Which assets fail first
This is no longer meteorology alone; it’s real-time systems engineering.

Storms are getting more chaotic. Our response is getting more computational. AI doesn’t replace physics. It compresses it into time we can actually use.

#AI #WeatherModeling #Nowcasting #ClimateTech #InfrastructureAI #DigitalTwins #ResilienceEngineering #HPC
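The probabilistic point is easy to make concrete. A toy illustration with synthetic numbers, not output from any real model: given an ensemble of AI-generated wind-gust fields, estimate the probability that gusts exceed a damage-relevant threshold at each grid cell.

```python
import numpy as np

# Toy example of ensemble-based exceedance probability (synthetic data).
rng = np.random.default_rng(42)
n_members, ny, nx = 500, 64, 64  # hypothetical ensemble size and grid
# Synthetic gust fields in m/s standing in for AI ensemble members:
gusts = rng.gamma(shape=4.0, scale=8.0, size=(n_members, ny, nx))

threshold = 25.0  # damage-relevant gust threshold, m/s
# Fraction of ensemble members exceeding the threshold at each cell:
p_exceed = (gusts > threshold).mean(axis=0)

print(f"max exceedance probability on grid: {p_exceed.max():.2f}")
```

Maps like `p_exceed` are what feed the risk-based decision systems above: a utility can pre-position crews where the exceedance probability crosses its own action threshold.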
-
Flash flooding is becoming more frequent and less predictable across the U.S. In the Appalachian region, communities often get only a few hours of warning, putting lives, infrastructure, and local economies at risk.

Through the #IBMImpactAccelerator, IBM is collaborating with the University of Illinois Urbana-Champaign Center for Secure Water to change that, with the project coordinated by Professor Ana Barros from the Department of Civil and Environmental Engineering at Illinois. Pairing Illinois’ hydrology and precipitation modeling with IBM technologies like watsonx.ai, IBM Cloud for Government, and Cloud Pak for Data, the team is improving rainfall prediction and flood forecasting in complex mountainous terrain.

Two key innovations are emerging:
💡 Enhanced Precipitation Forecasting, which uses AI to correct errors in leading weather models
💡 Flood View, a tool that integrates this enhanced rainfall data with hydrology models, delivering earlier flash-flood warnings through an interactive map, alerts, and local watershed insights

Flood View is already supporting the U.S. National Park Service (NPS). NPS is using Flood View to strengthen disaster preparedness by planning road and park closures in advance and monitoring specific points of interest across the parks.

With more reliable forecasts extending lead times from roughly six hours to up to 48 hours, communities gain critical time to prepare, protect infrastructure, and stay safe.

Watch the full video to learn how AI, research, and public-sector collaboration are strengthening climate resilience in the U.S.: https://lnkd.in/eSCVq_VW
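To give a feel for the first innovation, AI-based correction of weather model errors, here is a deliberately simple toy, in no way the IBM/Illinois system: fit a statistical corrector that maps raw model rainfall plus terrain information to observed rainfall, then apply it to a new forecast. Mountainous terrain is exactly where such learned corrections tend to help, since models often misplace orographic rain.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy sketch of forecast error correction (synthetic data, illustrative only).
rng = np.random.default_rng(0)
n = 2_000
raw_rain = rng.gamma(2.0, 5.0, n)         # raw NWP rainfall forecast (mm)
elevation = rng.uniform(200, 1500, n)     # station elevation (m)
# Synthetic "truth": the model underestimates rain at high elevation.
observed = (1.2 * raw_rain
            + 0.004 * elevation * np.sqrt(raw_rain)
            + rng.normal(0, 2, n))

# Features: raw forecast, elevation, and an interaction term.
X = np.column_stack([raw_rain, elevation, np.sqrt(raw_rain) * elevation])
corrector = LinearRegression().fit(X, observed)

# Correct a new forecast: 12 mm of raw model rain at a 900 m station.
x_new = np.array([[12.0, 900.0, np.sqrt(12.0) * 900.0]])
print(f"raw: 12.0 mm -> corrected: {corrector.predict(x_new)[0]:.1f} mm")
```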
-
Over the past few days I’ve been reading the Stockholm Resilience Centre’s new report, AI for a Planet Under Pressure. It’s a very comprehensive examination of how AI can support sustainability research, climate science, and planetary resilience. It’s an important contribution at exactly the moment when society needs clarity on both the opportunities and the risks of advanced digital technologies.

A few reflections from me:

AI is already accelerating scientific discovery
The report shows that AI is helping researchers uncover patterns in climate, biodiversity, freshwater, and urban systems that were previously too complex or computationally demanding to analyse. From high-resolution climate downscaling to modelling Earth system tipping points, AI is opening new windows into how the planet is changing and where intervention is most urgent. And this is not only relevant for scientific discovery.

Climate science is becoming more predictive, more integrated, and more accessible
One of the strongest messages is that AI is enabling a shift from isolated models to integrated, cross-system understanding. For example, foundation models trained on vast geophysical datasets can support multiple climate-related tasks, from cyclone tracking to air-quality forecasts, at far lower computational cost. This has real implications for researchers and public agencies that previously lacked the resources to run such models.

But the benefits will depend on how responsibly we apply these tools
The report is refreshingly balanced. It highlights very real concerns: the environmental footprint of compute, bias in data and models, uneven global representation in AI research, and the risk of over-reliance on systems that may still contain blind spots. Crucially, it argues that AI must support, not substitute for, scientific judgement and local knowledge.

What I find most compelling is the call for an “AI for sustainability science” agenda. This means moving beyond pilots and experiments and investing in the infrastructures, skills, and governance frameworks that allow AI to strengthen climate research while remaining aligned with planetary boundaries and social equity. In other words: more capability, yes, but also more responsibility, transparency, and inclusion.

For those of us working at the intersection of digitalisation, sustainability, and resilience, this report is a timely reminder: AI’s contribution to climate action won’t be measured by novelty, but by whether it helps societies anticipate risks, steward ecosystems, and make better collective decisions under pressure.

Well worth a read! https://lnkd.in/eMheXkkx

#AI #Sustainability #ClimateScience #DigitalTransformation #Resilience #Research #TechForGood
-
At #COP29 this year, climate adaptation, and the role of technology in supporting early warning systems and resilience, was high on the agenda. One area Google has been working on for a number of years is using AI to help forecast riverine floods, and I'm excited about our recent expansion:

🌎 Expanding coverage of our AI-powered riverine flood forecasting model to 100 countries (up from 80), in areas where 700 million people live (up from 460 million).
🔮 An improved flood forecasting model, which builds upon our breakthrough model, that has the same accuracy at a seven-day lead time as the previous model had at five days.
📖 Making our model forecasts available to researchers and partners via an upcoming API and our Google Runoff Reanalysis & Reforecast (GRRR) dataset.
👐 Providing researchers and experts with expanded coverage, based on “virtual gauges” for locations where data is scarce, via the upcoming API, the GRRR dataset, and a new expert data layer on Flood Hub with close to 250,000 forecast points of our flood forecasting model, spread over 150 countries.
🕰️ Making historical datasets of our flood forecasting model available, to help researchers understand and potentially reduce the impact of devastating floods.

Check out this blog from Yossi Matias for more information: https://lnkd.in/ePYJQ-qN
-
Precipitation is one of the most challenging variables to simulate accurately in global climate models, as it depends on small-scale physical processes. In our latest research published in 𝘚𝘤𝘪𝘦𝘯𝘤𝘦 𝘈𝘥𝘷𝘢𝘯𝘤𝘦𝘴, we describe an advancement in our hybrid atmospheric model, NeuralGCM, which now leverages AI trained directly on NASA satellite observations to improve global precipitation simulations.

Key results of this work:

👉 Physics-AI integration: The model combines a traditional fluid dynamics solver for large-scale processes with AI neural networks that learn to account for the effects of small-scale physics, specifically precipitation.
👉 Improved extremes: NeuralGCM demonstrates significant improvements in capturing the intensity of the top 0.1% of extreme rainfall events, better representing heavy precipitation than many traditional models.
👉 Long-term accuracy: In multi-year simulations, the model achieved a 40% average error reduction over land compared to leading atmospheric models used in the latest Intergovernmental Panel on Climate Change (IPCC) report.
👉 Daily patterns: It more accurately reproduces the timing of peak daily precipitation, which is critical for hydrology and agricultural planning.

We are already seeing the value of this approach in the field. A partnership between the University of Chicago and the Indian Ministry of Agriculture recently used NeuralGCM in a pilot program to help predict the onset of the monsoon season.

NeuralGCM is part of our Earth AI program to better understand the physical Earth in ways that benefit society. We have made the code and model checkpoints openly available to the community.

Read the full details on the Google Research blog by Janni Yuval: goo.gle/4qH63sU
Paper: https://lnkd.in/d7E4US4W
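The physics-AI split is the interesting architectural idea here, and it is easy to sketch in miniature. The toy below is in no way NeuralGCM's actual code: a conventional solver advances the resolved state, and a learned component supplies the sub-grid tendency that the physics cannot resolve (the role precipitation plays in the paper).

```python
import numpy as np

# Toy sketch of a hybrid physics+AI time step (illustrative only).

def dynamics_step(state, dt=0.1):
    """Resolved-scale 'dynamical core': trivial advection with weak damping."""
    return np.roll(state, 1) * (1.0 - 0.01 * dt)

def learned_subgrid_tendency(state):
    """Stand-in for a trained neural network predicting sub-grid effects."""
    return -0.05 * np.tanh(state)  # placeholder; would be an NN in practice

state = np.sin(np.linspace(0, 2 * np.pi, 128))  # initial resolved state
for _ in range(100):
    # Each hybrid step = physics update + learned sub-grid correction.
    state = dynamics_step(state) + learned_subgrid_tendency(state)

print(f"mean state after 100 hybrid steps: {state.mean():.4f}")
```

In the real model the learned component is trained end to end through a differentiable solver; the toy only shows where it plugs into the loop.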
-
New study in Nature Cities: Mapping machine learning for urban climate change mitigation 🌍

Our systematic map of 2,300 studies, published between 1994 and 2024, shows that:

📈 Research on ML for urban climate mitigation is growing faster than the urban climate change mitigation field as a whole.
🚲 Most work focuses on transport (41%) and buildings (25%), especially shared bikes, micro-mobility, and energy use in buildings.
⚡ Some high-potential areas, like street network design, efficient new construction, and fuel switching, remain understudied.
🌏 There is a strong geographic bias towards data-rich and high-income regions: 83% of studies focus on cities in East Asia, North America, and Europe, with far fewer on rapidly urbanizing regions.
🛠️ ML helps cities by informing (data/monitoring), modelling (forecasting), and optimizing (operations like traffic, HVAC, and waste systems).
⚠️ Research priorities are often shaped by data availability and commercial interests, not just climate impact.

🔑 We derive eight recommendations:
- Expand research to understudied but high-impact areas,
- Report actual GHG savings,
- Evaluate practitioners' demands,
- Foster AI system interoperability and governance,
- Conduct future syntheses on full texts,
- Address regional gaps in data coverage,
- Investigate the relevance of ML through a community-oriented approach,
- Engage in educational programs and interdisciplinary communities.

🥳 Proud to have co-authored this article with Nikola Milojević-Dupont, Felix Creutzig, Tim R., and Lynn Kaack! PIK - Potsdam Institute for Climate Impact Research, Hertie School Data Science Lab, Hertie School, Technische Universität Berlin, What Works Climate Solutions

👉 Full paper: Marie Josefine Hintz, Nikola Milojevic-Dupont, Felix Creutzig, Tim Repke, Lynn H. Kaack. “A systematic map of machine learning for urban climate change mitigation.” Nature Cities, 2025. https://lnkd.in/eHqJmqZi

#climatecrisis #artificialintelligence #machinelearning #cities #urban #naturecities #research #AI #mitigation #transport #buildings
-
Today, we published a preprint of a study in which we carried out a climate science assessment supported by an “AI co-scientist”. We integrated an AI system into a standard scientific workflow and applied it to a challenging topic: the stability of the Atlantic Meridional Overturning Circulation (AMOC).

For this study, we synthesised and assessed the available literature through 104 structured revision cycles, supported by an AI interface. The system did not replace us, but it assisted us with structuring draft text, checking the coherence of arguments, and performing consistency and traceability checks against the literature. Importantly, it also recorded the history of statements, including discussions, evidence considered, and the reasoning underlying the final assessments.

The workflow resulted in substantial time savings relative to a conventional assessment process. This matters because scientific assessments, including those of the #IPCC, rely heavily on voluntary expert contributions. Time is a scarce resource in such exercises, and many colleagues hesitate to participate due to the workload alongside research, teaching, and personal commitments. As inclusivity in assessments is key, methods that are both robust and scalable will be needed. Here we have explored a first example of how AI-supported workflows may contribute without reducing scientific rigour.

Preprint: https://lnkd.in/ewMn2mPt

Besides the excitement, we also learned about limitations. The AI struggled to handle multiple references, was at times slow, and had difficulty appreciating epistemic limitations. Having a diligent human expert in the driving seat was important at every part of the ride.

I would be interested to hear perspectives from colleagues working on assessments, collaborative science, and trustworthy AI.

#ClimateScience #AIforScience #IPCC #ScientificAssessment #ClimateChange
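For colleagues wondering what "recording the history of statements" could look like in practice, here is a hypothetical sketch of such a record. This is my own illustration, not the data structure used in the preprint: each assessed statement carries its calibrated confidence, its evidence, and an auditable trail of who changed what across revision cycles.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a traceable assessment record (illustrative only).

@dataclass
class Revision:
    cycle: int    # which structured revision cycle this change occurred in
    author: str   # "expert" or "ai"
    change: str   # what was altered, and the reasoning behind it

@dataclass
class AssessedStatement:
    text: str
    confidence: str                                       # IPCC-style calibrated language
    evidence: list[str] = field(default_factory=list)     # cited sources (placeholders here)
    history: list[Revision] = field(default_factory=list)

stmt = AssessedStatement(
    text="Evidence on near-term AMOC stability remains conflicting.",
    confidence="medium confidence",
    evidence=["placeholder-citation-1", "placeholder-citation-2"],
)
stmt.history.append(Revision(1, "ai", "drafted from retrieved literature"))
stmt.history.append(Revision(2, "expert", "reweighted conflicting evidence; added caveat"))

print(f"{len(stmt.history)} recorded revisions for: {stmt.text}")
```

A trail like this is what makes the traceability checks auditable: every confidence statement in the final text can be walked back to the evidence and the human or AI edits that produced it.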