It was the turn of the Atmosphere Theme for the CEEDS seminar on 19th May, where we heard a breadth of examples of data science approaches applied to atmospheric science issues, all under the title of “Hands across the disciplines: How data science has helped atmospheric science and associated societal challenges.” The seminar was organised by Atmosphere Theme leads Paul Young and Lindsay Banin. The talks are summarised below, and a recording is available for those who were not able to attend.
Lily Gouldsbrough (twitter), a second-year PhD student in the Lancaster Environment Centre (LEC), talked about the development and application of an extreme value theory model to UK ozone measurements from the national AURN network. Lily’s work brings state-of-the-art statistical approaches to a field where the analysis is typically less sophisticated, and she has been able to calculate “return periods” for policy-relevant ozone levels as a function of temperature. One key result is that “moderate” levels of ozone pollution (>100 µg/m3) are no longer extreme at many sites in SE England when temperatures exceed the 99th percentile.
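As a rough illustration of the return-period idea (not Lily’s actual model), one can fit a generalised extreme value (GEV) distribution to annual maxima and invert the exceedance probability. The data and parameters below are invented for the sketch:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical annual-maximum ozone concentrations (µg/m3) at one site
annual_max_ozone = rng.gumbel(loc=95, scale=12, size=40)

# Fit a GEV distribution to the annual maxima
shape, loc, scale = stats.genextreme.fit(annual_max_ozone)

# Return period for exceeding the "moderate" ozone threshold (100 µg/m3):
# T = 1 / P(annual max > threshold), in years
threshold = 100.0
p_exceed = stats.genextreme.sf(threshold, shape, loc=loc, scale=scale)
return_period = 1.0 / p_exceed
print(f"Return period for >{threshold:.0f} µg/m3: {return_period:.1f} years")
```

A short return period for a “moderate” threshold is exactly the kind of signal Lily’s analysis detects: the level is no longer a rare event at that site.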
Pete Levy (web), a scientist at the UK Centre for Ecology and Hydrology, spoke about his work validating the bottom-up estimates of the UK’s greenhouse gas emissions inventory. Pete described a Bayesian approach that leveraged measured concentrations of non-greenhouse-gas pollutants from the AURN network, pollutants for which sector-specific information on their sources is available. His framework combined that information with estimates of the temporal and spatial distribution of pollutant emissions (e.g., from EDGAR) to improve the estimates of the UK’s greenhouse gas emissions, as well as providing information on the uncertainties in those emissions for individual sectors.
Matt Amos (web, twitter), a final-year PhD student in LEC, discussed his work developing a Bayesian neural network to combine “gappy” satellite observations with gap-free process models. Matt’s domain focus is stratospheric ozone, where continuous datasets are needed to understand long-term trends, which are driven both by weather/climate variability and by the decreases in (most) ozone-depleting substances. His neural network has several advantages over other data-infilling methods, not least that it provides a principled measure of uncertainty. This work appeared at the recent NeurIPS conference (paper on arXiv), and those interested can try a version of the model on Binder.
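Matt’s Bayesian neural network is beyond a short snippet, but the underlying idea of combining gappy observations with a gap-free model while tracking uncertainty can be sketched with a simple precision-weighted (inverse-variance) Gaussian update. All values and uncertainties below are invented for illustration:

```python
import numpy as np

# Gappy satellite ozone observations (Dobson units); NaN marks missing data
obs = np.array([280.0, np.nan, 275.0, np.nan, 290.0])
obs_sigma = 5.0        # assumed observation uncertainty
# Gap-free process-model output for the same times
model = np.array([282.0, 278.0, 276.0, 284.0, 288.0])
model_sigma = 10.0     # assumed model uncertainty

have_obs = ~np.isnan(obs)
post_mean = model.copy()
post_sigma = np.full_like(model, model_sigma)

# Where observations exist, combine model and obs by inverse-variance
# weighting; gaps simply keep the model value and its (larger) uncertainty
w = model_sigma**2 / (model_sigma**2 + obs_sigma**2)
post_mean[have_obs] = (1 - w) * model[have_obs] + w * obs[have_obs]
post_sigma[have_obs] = np.sqrt(1.0 / (1 / model_sigma**2 + 1 / obs_sigma**2))
```

The result is a gap-free series whose uncertainty shrinks wherever observations constrain the model, echoing the “principled measure of uncertainty” that the full Bayesian neural network provides.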
Finally, Cristina Martin Hernandez (web), a data analyst at UKCEH, talked about her work linking the Air Pollution Information System (APIS) with the Air Pollution Impacts on Ecosystem Networks (APIENs) through a new mapping interface. APIS is used by agencies, consultancies, students and anyone interested in air pollution impacts. Cristina’s talk covered how the current text-based version of APIS has been redeveloped in R Shiny, allowing users to extract data on pollution impacts on an ecosystem in a given area with a click and see the data mapped in seconds. Cristina demonstrated the (successful!) use of the new interface and also talked about the design and user-engagement process.
In wrapping up the seminar, the panel reflected on the rich opportunities and challenges of working across scientific and data science domains, highlighting the need to advance technical skills amongst environmental scientists and to collaborate with maths, statistics and computing specialists to meet the demands of ever-growing datasets and models in the atmospheric sciences.