<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Dave Luo on Medium]]></title>
        <description><![CDATA[Stories by Dave Luo on Medium]]></description>
        <link>https://medium.com/@anthropoco?source=rss-1ab5b7b60071------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*gn6WE_KIIt3wLfJ_Bmye3g.png</url>
            <title>Stories by Dave Luo on Medium</title>
            <link>https://medium.com/@anthropoco?source=rss-1ab5b7b60071------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sun, 12 Apr 2026 08:39:43 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@anthropoco/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[The Open Cities AI Challenge]]></title>
            <link>https://medium.com/data-science/the-open-cities-ai-challenge-3d0b35a721cc?source=rss-1ab5b7b60071------2</link>
            <guid isPermaLink="false">https://medium.com/p/3d0b35a721cc</guid>
            <category><![CDATA[open-source]]></category>
            <category><![CDATA[africa]]></category>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[mapping]]></category>
            <category><![CDATA[towards-data-science]]></category>
            <dc:creator><![CDATA[Dave Luo]]></dc:creator>
            <pubDate>Mon, 17 Feb 2020 19:35:31 GMT</pubDate>
            <atom:updated>2020-07-09T16:24:45.050Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*eB_nQ3_cB1mtjc-IndZ-fw.png" /><figcaption>Building footprints hand-labeled by community mapping teams overlaid (in yellow outline) on high resolution drone imagery in Dar es Salaam, Tanzania from the Open Cities AI Challenge dataset.</figcaption></figure><h4>Segment buildings in African cities from aerial imagery and advance Responsible AI ideas for disaster risk management</h4><p><em>Post by </em><a href="https://medium.com/@anthropoco"><em>Dave Luo</em></a><em>, </em><a href="https://www.linkedin.com/in/g-doherty"><em>Grace Doherty</em></a><em>, </em><a href="https://www.linkedin.com/in/nick-jones-302a945/"><em>Nicholas Jones</em></a><em>, and </em><a href="https://www.linkedin.com/in/vivien-deparday/"><em>Vivien Deparday</em></a><em>, GFDRR Labs/World Bank</em></p><h3>Takeaways</h3><p>The <a href="http://www.gfdrr.org/">Global Facility for Disaster Reduction and Recovery</a> (GFDRR) is partnering with <a href="https://www.azavea.com/">Azavea</a> and <a href="https://www.drivendata.org/">DrivenData</a> to introduce a new dataset and machine learning (ML) competition ($15,000 in total prizes) to improve mapping for resilient urban planning. Better ML-supported mapping for disaster risk management means addressing barriers to applying ML in African urban environments and adopting best practices in geospatial data preparation to enable easier ML usage. The competition dataset — over 400 square kilometers of high-resolution drone imagery and 790K building footprints — is sourced from locally validated, open source community mapping efforts from 10+ urban areas across Africa. Prize-winning solutions will be published as open-source tools for continued ML development and benchmarking.</p><p>The Open Cities AI Challenge has two participation tracks:</p><ol><li>$12,000 in prizes for best open-source semantic segmentation of building footprints from drone imagery that can generalize across a diverse range of African urban environments, spatial resolutions, and imaging conditions.</li><li>$3,000 in prizes for thoughtful explorations of Responsible AI development and application for disaster risk management. How might we improve the creation and use of ML systems to mitigate biases, promote fair and ethical use, inform decision-making with clarity, and make safeguards to protect users and end-beneficiaries?</li></ol><p>The competition is ongoing and ends March 16th, 2020.<a href="http://drivendata.org/competitions/60/building-segmentation-disaster-resilience"> Join today</a>!</p><h3>Open Data for Resilient Urban Planning</h3><p>Cities around the world are growing rapidly, especially in Africa — by 2030, half of Sub-Saharan Africa’s population will live in urban areas. As urban populations grow, their exposure to flooding, erosion, earthquakes, coastal storms, and other hazards becomes a complex challenge for urban planning.</p><p>Understanding how assets and people are vulnerable to these risks requires detailed, up-to-date geographic data of the built environment. For example, a building’s particular location, shape, and construction style can tell us whether it will be more exposed to earthquake or wind damage than nearby buildings. Roads, buildings, and critical infrastructure need to be mapped frequently, accurately, and in detail if we are to <a href="https://understandrisk.org/">understand and manage risk</a> effectively. 
But in countries with less developed data infrastructure, <a href="https://www.gfdrr.org/en/feature-story/how-open-cities-changing-way-african-cities-prepare-disaster">traditional urban data collection methods can’t keep up</a> with increasing density and sprawl.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1001/0*OSO2sSnMI_hapM_B" /><figcaption>A field mapper from Open Cities Accra observes standing water and refuse in a flood-prone neighborhood of Accra, Ghana. Photo courtesy of Gabriel Joe Amuzu, <a href="https://twitter.com/amuzugabrieljoe">Amuzujoe Photography</a>.</figcaption></figure><p>Thankfully, collaborative and open data collection practices are reshaping the way we map cities. Today, <a href="http://blogs.worldbank.org/sustainablecities/rise-local-mapping-communities-for-resilience">local mapping communities are improving maps for some of the world’s most vulnerable neighborhoods</a> — bringing highly accurate and detailed geographic data up-to-date and to scale. <a href="http://www.gfdrr.org/">GFDRR</a> at the World Bank supports programs like <a href="https://opencitiesproject.org/">Open Cities Africa</a> and <a href="http://ramanihuria.org/">Dar Ramani Huria</a> to map buildings, roads, drainage networks and more in over a dozen African cities, and <a href="http://www.zanzibarmapping.com/">Zanzibar Mapping Initiative</a> was the world’s largest aerial mapping exercise using consumer drones and local mappers to produce open spatial data for conservation and development in the archipelago.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2F328t90vkots%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3D328t90vkots&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2F328t90vkots%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/2d1daf56c118d0adc7c2e3f725b6cce2/href">https://medium.com/media/2d1daf56c118d0adc7c2e3f725b6cce2/href</a></iframe><p>Data collected in these community mapping programs are used to design tools and products that support government decision-making. Digitized maps are published to <a href="https://www.openstreetmap.org/">OpenStreetMap</a> and aerial imagery to <a href="https://openaerialmap.org/">OpenAerialMap</a> where they serve as data public goods that can be used and improved by all. The open source philosophy behind the movement and an emphasis on local skill-building has fostered local networks of talent in digital cartography, robotics, software development, and data science.</p><h3>Potential of Machine Learning for Mapping</h3><p>Advances in ML for visual tasks could further improve mapping quality, speed, and cost. Recent examples of ML applications for mapping include Facebook’s <a href="https://mapwith.ai/">AI-assisted mapping tool</a> for OpenStreetMap and Microsoft’s country-scale automated building footprint extraction (in <a href="https://github.com/microsoft/USBuildingFootprints">USA</a>, <a href="https://github.com/microsoft/CanadianBuildingFootprints">Canada</a>, <a href="https://github.com/microsoft/Uganda-Tanzania-Building-Footprints">Tanzania and Uganda</a>). 
Competitions like <a href="https://spacenet.ai/">SpaceNet</a> and <a href="https://xview2.org/">xView2</a> advance ML practices for automated mapping of roads, buildings, and building damage assessment after disasters.</p><p>Obstacles, however, stand in the way of effectively applying current ML mapping solutions to the African disaster risk management context. Africa’s urban environments differ significantly in make-up and appearance from European, American, or Asian cities which have more abundant data that ML models are often trained on.</p><h4><strong><em>Buildings that are more densely situated and diverse in shape</em></strong><em>, construction style, and size may be less recognizable to ML models that saw few or no such examples in their training.</em></h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1000/0*MCBzw3RHERyjOBlN" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1000/0*d7tyJFgR9WDvhLOz" /><figcaption>Comparing urban built environments of Las Vegas, USA (left) to Monrovia, Liberia (right) at the same visual scale. Imagery courtesy of Microsoft Bing Maps and Maxar (DigitalGlobe)</figcaption></figure><h4><strong><em>Imagery is collected by commercial drones at much higher resolution</em></strong> under diverse environmental conditions, requiring adaptation of models usually trained on lower-resolution, more consistently collected and preprocessed satellite imagery.</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*vaAdgXgdHapw5Y1ci_VA7w.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*A_iPbiCLthgMw1OC" /><figcaption>Comparing urban details at typical satellite image resolution (&gt;30cm/pixel, top) to drone/aerial image resolution (3–20cm/pixel, bottom) in Dar es Salaam, Tanzania. Imagery courtesy of Maxar and OpenAerialMap.</figcaption></figure><h4><strong>Crowdsourced and community-driven data labeling may differ greatly in what base imagery layers are used, workflow, data schema, and quality control, requiring models that are robust to more label noise.</strong></h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*fsdKOscA3yPyvEj0Z8DwPg.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*PgcxNzYLShwwpNHk" /><figcaption>Quality of hand-drawn building footprint labels (alignment and completeness) can vary across or within image scenes. Examples from Challenge training dataset for Kampala, Uganda (left) and Kinshasa, DRC (right).</figcaption></figure><h4><strong>Geospatial data comes in a diversity of file formats, sizes, and schemas that create high adoption and knowledge barriers that hamper their use in machine learning.</strong></h4><p>There is now a growing abundance of locally-validated open map data and high resolution drone imagery in diverse built environments. How might we best address these obstacles and enhance the state of practice in machine learning to support mapping for urban development and risk reduction for Africa’s cities?</p><h3>Introducing the Open Cities AI Challenge</h3><h4>Dataset</h4><p>Working with partners <a href="https://www.azavea.com/">Azavea</a> and <a href="https://www.drivendata.org/">DrivenData</a>, the <a href="https://www.gfdrr.org/en/gfdrr-labs">Labs team</a> at GFDRR combined the excellent work of many participatory mapping communities across Africa, applied best practices in cloud-native geospatial data processing (i.e. 
using <a href="https://www.cogeo.org/">Cloud-Optimized GeoTIFFs</a> [COG] and <a href="https://stacspec.org/">SpatioTemporal Asset Catalogs</a> [STAC]), and standardized wherever possible to make data more readily usable for machine learning. The result is a novel, extensive, open dataset of over 790K building footprints and 400 square kilometers of drone imagery representing 10 diverse African urban areas in ML-ready form.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1000/0*zbPzlhu1yIqAYon1" /><figcaption>Comparing hand-labeled building footprints overlaid on drone imagery for 10 African urban areas included in the Challenge training dataset.</figcaption></figure><p>Using COG and STAC for geospatial data provides us with bandwidth-efficient, rapid, and query-able access to our imagery and labels in a standardized format. Ease of access to files and indexing of data catalogs is particularly important for geospatial data which can quickly grow to 100s of gigabytes. It also enables us to tap into the growing ecosystem of COG and STAC tools, like <a href="https://github.com/radiantearth/stac-browser">STAC Browser</a> to rapidly visualize and access any training data asset in a web browser, despite individual image files being up to several GBs and the entire dataset totaling over 70 GBs in size:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/600/1*Y-DqyN5S8JWv687Uw5THnA.gif" /><figcaption>Animated demo of using STAC Browser to visualize Challenge training data collections and assets.</figcaption></figure><p><a href="https://github.com/azavea/pystac">PySTAC</a>, a new Python library by <a href="https://www.azavea.com/">Azavea</a>, enables STAC users to load, traverse, access, and manipulate data within catalogs programmatically. 
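The snippets below assume PySTAC is installed and its Catalog class imported (a minimal setup sketch):</p><pre># assumed setup for the snippets that follow<br># pip install pystac<br>from pystac import Catalog</pre><p>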
For example, reading a STAC catalog:</p><pre>train1_cat = Catalog.from_file(&#39;https://drivendata-competition-building-segmentation.s3-us-west-1.amazonaws.com/train_tier_1/catalog.json&#39;) <br>train1_cat.describe()</pre><pre>* &lt;Catalog id=train_tier_1&gt;<br>    * &lt;Collection id=acc&gt;<br>      * &lt;Item id=665946&gt;<br>      * &lt;LabelItem id=665946-labels&gt;<br>      * &lt;Item id=a42435&gt;<br>      * &lt;LabelItem id=a42435-labels&gt;<br>      * &lt;Item id=ca041a&gt;<br>      * &lt;LabelItem id=ca041a-labels&gt;<br>      * &lt;Item id=d41d81&gt;<br>      * &lt;LabelItem id=d41d81-labels&gt;<br>    * &lt;Collection id=mon&gt;<br>      * &lt;Item id=401175&gt;<br>      ...</pre><p>Inspecting an item’s metadata:</p><pre>one_item = train1_cat.get_child(id=&#39;acc&#39;).get_item(id=&#39;ca041a&#39;)<br>one_item.to_dict()</pre><pre>{<br>  &quot;assets&quot;: {<br>    &quot;image&quot;: {<br>      &quot;href&quot;: &quot;https://drivendata-competition-building-segmentation.s3-us-west-1.amazonaws.com/train_tier_1/acc/ca041a/ca041a.tif&quot;,<br>      &quot;title&quot;: &quot;GeoTIFF&quot;,<br>      &quot;type&quot;: &quot;image/tiff; application=geotiff; profile=cloud-optimized&quot;<br>    }<br>  },<br>  &quot;bbox&quot;: [<br>    -0.22707525357332697,<br>    5.585527399115482,<br>    -0.20581415249279408,<br>    5.610742610987594<br>  ],<br>  &quot;collection&quot;: &quot;acc&quot;,<br>  &quot;geometry&quot;: {<br>    &quot;coordinates&quot;: [<br>      [<br>        [<br>          -0.2260939759101167,<br>          5.607821019807083<br>        ],<br>        ...<br>        [<br>          -0.2260939759101167,<br>          5.607821019807083<br>        ]<br>      ]<br>    ],<br>    &quot;type&quot;: &quot;Polygon&quot;<br>  },<br>  &quot;id&quot;: &quot;ca041a&quot;,<br>  &quot;links&quot;: [<br>    {<br>      &quot;href&quot;: &quot;../collection.json&quot;,<br>      &quot;rel&quot;: &quot;collection&quot;,<br>      &quot;type&quot;: &quot;application/json&quot;<br>    },<br>    {<br>      &quot;href&quot;: &quot;https://drivendata-competition-building-segmentation.s3-us-west-1.amazonaws.com/train_tier_1/acc/ca041a/ca041a.json&quot;,<br>      &quot;rel&quot;: &quot;self&quot;,<br>      &quot;type&quot;: &quot;application/json&quot;<br>    },<br>    {<br>      &quot;href&quot;: &quot;../../catalog.json&quot;,<br>      &quot;rel&quot;: &quot;root&quot;,<br>      &quot;type&quot;: &quot;application/json&quot;<br>    },<br>    {<br>      &quot;href&quot;: &quot;../collection.json&quot;,<br>      &quot;rel&quot;: &quot;parent&quot;,<br>      &quot;type&quot;: &quot;application/json&quot;<br>    }<br>  ],<br>  &quot;properties&quot;: {<br>    &quot;area&quot;: &quot;acc&quot;,<br>    &quot;datetime&quot;: &quot;2018-11-12 00:00:00Z&quot;,<br>    &quot;license&quot;: &quot;CC BY 4.0&quot;<br>  },<br>  &quot;stac_version&quot;: &quot;0.8.1&quot;,<br>  &quot;type&quot;: &quot;Feature&quot;<br>}</pre><p>Learn more about the <a href="https://www.drivendata.org/competitions/60/building-segmentation-disaster-resilience/page/151/">dataset</a> and <a href="https://www.drivendata.org/competitions/60/building-segmentation-disaster-resilience/page/154/">STAC resources</a>.</p><h4>Competition</h4><p>Accompanying the dataset is a competitive machine learning challenge with $15,000 in total prizes to encourage ML experts globally to develop more accurate, relevant, and readily usable open-source solutions to support mapping in African cities. 
There are 2 participation tracks:</p><h4><a href="https://www.drivendata.org/competitions/60/building-segmentation-disaster-resilience/page/151/"><strong>Semantic Segmentation track</strong></a><strong>: </strong>$12,000 in prizes for the best open-source semantic segmentation models to map building footprints from aerial imagery.</h4><p>The machine learning objective is to segment (classify) every pixel in every image as building or no-building with model performance being evaluated with the Intersection-over-Union metric (aka Jaccard Index):</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/520/0*wOvBvfaf7hOxhft1" /></figure><p>Semantic segmentation is useful for mapping because its pixel-level outputs are relatively easy to visually interpret, verify, and use as-is (e.g. in the calculation of built-up surface area) or as inputs to downstream steps (e.g. first segment buildings and then classify attributes about each segmented building like its construction status or roof material).</p><p>Segmentation track participants must also submit at least once to the Responsible AI track to qualify for $12,000 in segmentation track prizes.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*BpJJCabFIPYBLViE" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*PZu9qKS32l1dwm7v" /><figcaption>Example image chip (left) and segmentation (right) from the Challenge dataset.</figcaption></figure><h4><a href="https://www.drivendata.org/competitions/60/building-segmentation-disaster-resilience/page/152/"><strong>Responsible AI track</strong></a><strong>: </strong>$3,000 in prizes will be awarded for best ideas applying an ethical lens to the design and use of ML systems for disaster risk management.</h4><p>ML can improve data applications in disaster risk management, especially when coupled with computer vision and geospatial technologies, by providing more accurate, faster, or lower-cost approaches to assessing risk. At the same time, we urgently need to develop a better understanding of the potential for negative or unintended consequences of their use. With growing attention given to questions of appropriate and ethical ML use for facial recognition, criminal justice, healthcare, and other domains, we have an immediate responsibility to elevate these questions for disaster risk.</p><p>Examples of potential harm that ML technologies present in this space include, but are not limited to:</p><ul><li>Perpetuating and aggravating societal inequalities through the presence of biases throughout the machine learning development pipeline.</li><li>Aggravating privacy and security concerns in Fragility, Conflict and Violence settings through combination of previously distinct datasets.</li><li>Limiting opportunities for public participation in disaster risk management due to increased complexity of data products.</li><li>Reducing the role of expert judgement in data and modeling tasks and in turn increasing probability of error or misuse.</li><li>Inadequately communicating methods, results, or degrees of uncertainty, which increases the chance of misuse.</li></ul><p>ML practitioners and data scientists are uniquely positioned to examine and influence the ethical implications of our work. We ask challenge participants to consider the applied ethical issues that arise in designing and using ML systems for disaster risk management. 
How might we improve the creation and application of ML to mitigate biases, promote fair and ethical use, inform decision-making with clarity, and put safeguards in place to protect users and end-beneficiaries?</p><p>This track’s submission format is flexible: participants can submit Jupyter notebooks, slides, blogs, essays, demos, product mockups, speculative fiction, artwork, synthesis of research papers or original research, or whatever other format best suits you. Submissions will be evaluated by a panel of judges on thoughtfulness, relevance, innovation, and clarity.</p><h3>What Comes Next</h3><p>This challenge will produce new public goods that advance our state of practice in applying ML for understanding risk in urban Africa; this includes new ML performance benchmarks for building segmentation from aerial imagery in relevant geographies, top-performing solutions for mapping in African cities, and in-depth explorations of how we responsibly create and deploy AI systems for disaster risk management.</p><p>Prize-winning solutions will be published as open-source tools and knowledge, and the challenge dataset will remain an open data resource for continued ML development and benchmarking. GFDRR will use lessons learned to inform policies and procurement strategies for using ML for urban mapping and planning.</p><h3>Join the Challenge!</h3><p>The competition is currently running until <strong>March 16, 2020</strong>. With one month to go, there is plenty of time to explore the data and participate in either track, but don’t delay. Join today at:</p><h4><a href="http://drivendata.org/competitions/60/building-segmentation-disaster-resilience"><strong>drivendata.org/competitions/60/building-segmentation-disaster-resilience</strong></a></h4><figure><a href="http://drivendata.org/competitions/60/building-segmentation-disaster-resilience"><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*hZ7iK_6KDPD1pl4IXObFHQ.png" /></a></figure><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=3d0b35a721cc" width="1" height="1" alt=""><hr><p><a href="https://medium.com/data-science/the-open-cities-ai-challenge-3d0b35a721cc">The Open Cities AI Challenge</a> was originally published in <a href="https://medium.com/data-science">TDS Archive</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How to Segment Buildings on Drone Imagery with Fast.ai & Cloud-Native GeoData Tools]]></title>
            <link>https://medium.com/@anthropoco/how-to-segment-buildings-on-drone-imagery-with-fast-ai-cloud-native-geodata-tools-ae249612c321?source=rss-1ab5b7b60071------2</link>
            <guid isPermaLink="false">https://medium.com/p/ae249612c321</guid>
            <category><![CDATA[deep-learning]]></category>
            <category><![CDATA[drones]]></category>
            <category><![CDATA[geospatial]]></category>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[tutorial]]></category>
            <dc:creator><![CDATA[Dave Luo]]></dc:creator>
            <pubDate>Thu, 25 Jul 2019 16:45:59 GMT</pubDate>
            <atom:updated>2019-07-29T21:19:27.852Z</atom:updated>
            <content:encoded><![CDATA[<h3>An Interactive Intro to Geospatial Deep Learning on Google Colab</h3><p>by <a href="https://github.com/daveluo">daveluo</a> (on github and elsewhere)</p><p>In this post and the accompanying <a href="https://colab.research.google.com/github/daveluo/zanzibar-aerial-mapping/blob/master/geo_fastai_tutorial01_public_v1.ipynb">Google Colab notebook</a>, we’ll learn all the code and concepts comprising a complete workflow to automatically detect and delineate building footprints (instance segmentation) from drone imagery with cutting edge deep learning models.</p><p>All you’ll need is a Google account, an internet connection, and a couple of hours to learn how to make the data &amp; model that learns to make something like <a href="https://alpha.anthropo.co/znz-demo">this</a>:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*AfQ7_6hcVZOL08ACmq9E_Q.png" /><figcaption>Building segmentation and classification of completeness in Zanzibar, interactive link: <a href="https://alpha.anthropo.co/znz-demo">https://alpha.anthropo.co/znz-demo</a></figcaption></figure><h3>In modular steps, we’ll learn to…</h3><h4>Preprocess image geoTIFFs and manually labeled data geoJSON files into training data for deep learning:</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Myn-7f-tLhaMRaaNYg1oHw.png" /><figcaption>Input geoTIFF imagery and GeoJSON label files</figcaption></figure><h4>Create a U-net segmentation model to predict what pixels in an image represent buildings (and building-related features):</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/938/1*4qsYToRH8Q-riSFtxuNWIg.png" /><figcaption>Raw segmentation prediction vs actual</figcaption></figure><h4>Test our model’s performance on unseen imagery with GPU or CPU:</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/916/1*kNulsQQdDJAD5xN9Q8qa_A.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/914/1*ag6ERcdl-K1-Dj6ddyrBMA.png" /><figcaption>CPU vs GPU inference tile</figcaption></figure><h4>Post-process raw model outputs into geo-registered building shapes evaluated against ground truth:</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*D47afqJ_7l54o-s3" /><figcaption>Raw model output -&gt; geo-registered building shape predictions-&gt; evaluation against ground truth labels</figcaption></figure><h4>And along the way, we’ll get familiar with great geospatial data &amp; deep learning tools/resources like:</h4><ul><li><a href="http://geopandas.org/">Geopandas</a>: “an open source project to make working with geospatial data in python easier. 
GeoPandas extends the datatypes used by <a href="http://pandas.pydata.org/">pandas</a> to allow spatial operations on geometric types.”</li><li><a href="https://github.com/mapbox/rasterio">Rasterio</a>: “reads and writes geospatial raster datasets”</li><li><a href="https://github.com/mapbox/supermercado">Supermercado</a>: “supercharger for <a href="https://github.com/mapbox/mercantile">Mercantile</a>” (spherical mercator tile and coordinate utilities)</li><li><a href="https://github.com/cogeotiff/rio-tiler">Rio-tiler</a>: “Rasterio plugin to read mercator tiles from Cloud Optimized GeoTIFF dataset”</li><li><a href="https://github.com/CosmiQ/solaris">Solaris</a>: “Geospatial Machine Learning Analysis Toolkit” by <a href="https://medium.com/the-downlinq">Cosmiq Works</a></li><li><a href="https://www.cogeo.org/">Cloud-Optimized GeoTIFFs</a> (COG): “An imagery format for cloud-native geospatial processing”</li><li><a href="https://stacspec.org/">Spatio-Temporal Asset Catalogs</a> (STAC): “Enabling online search and discovery of geospatial assets”</li><li><a href="https://openaerialmap.org/">OpenAerialMap</a>: “The open collection of aerial imagery”</li><li><a href="https://www.fast.ai/">Fast.ai</a> for <a href="https://forums.fast.ai/t/geospatial-deep-learning-resources-study-group/31044">geospatial deep learning</a>: “The fastai library simplifies training fast and accurate neural nets using modern best practices” built on the <a href="https://pytorch.org/">PyTorch</a> framework.</li></ul><h4>How to get the most out of this tutorial:</h4><p>The <a href="https://colab.research.google.com/github/daveluo/zanzibar-aerial-mapping/blob/master/geo_fastai_tutorial01_public_v1.ipynb">Colab notebook</a> is our main learning resource — working interactively within the GPU-accelerated Colab notebook environment is highly recommended!</p><p>Code is organized into modular sections, set up for installation/import of all required dependencies, and executable on either CPU or GPU runtimes (depending on the section). Links to load files generated at each step are also included so you can pick up and start from any section. Inline# comments (&amp; references for further reading) are provided within code cells to explain steps or nuances in more detail as needed. Executing all code cells end-to-end takes &lt;1 hour on GPU.</p><p>This Medium post serves as a high-level conceptual walkthrough and maps directly to sections within the Colab notebook. This post works best as a quick overview with handy bookmarks to Colab (see <strong>[Colab section link]</strong> under each section heading) or viewed side-by-side with the Colab notebook as a code &amp; concept companion set.</p><p>This tutorial assumes you have a working knowledge of Python, data analysis with Pandas, making training/validation/test sets for machine learning, and a beginner practitioner’s grasp of deep learning concepts. 
Or the motivation to gain what knowledge you’re missing by following the ample references linked throughout this post and notebook.</p><p>With that as mental prep, let’s do some geospatial deep learning!</p><h4>Open tutorial notebook and create your own working Colab copy with File &gt; Save a Copy in Drive (recommended option):</h4><figure><a href="https://colab.research.google.com/github/daveluo/zanzibar-aerial-mapping/blob/master/geo_fastai_tutorial01_public_v1.ipynb"><img alt="" src="https://cdn-images-1.medium.com/max/274/1*dwemLTIIajM4x5DiIzhxWw.png" /></a><figcaption>Click me to get started!</figcaption></figure><h4>Or preview notebook non-interactively (not recommended for optimal learning but good for a quick look):</h4><p><a href="https://nbviewer.jupyter.org/github/daveluo/zanzibar-aerial-mapping/blob/master/geo_fastai_tutorial01_public_v1.ipynb">Notebook on nbviewer</a></p><h3>Pre-Processing</h3><h4>Preview and load imagery and labels</h4><p><strong>[</strong><a href="https://colab.research.google.com/github/daveluo/zanzibar-aerial-mapping/blob/master/geo_fastai_tutorial01_public_v1.ipynb#scrollTo=mRtz_4ue_R4g"><strong>Colab section link</strong></a><strong>]</strong></p><p>For this tutorial, we’ll use the <a href="https://competitions.codalab.org/competitions/20100#learn_the_details">Tanzania Open AI Challenge dataset</a> of 7-cm resolution drone imagery and building footprint labels over Unguja Island, Zanzibar, Tanzania. Much thanks to the following organizations for producing, openly licensing, and making this invaluable dataset accessible:</p><ul><li><a href="https://creativecommons.org/licenses/by/4.0/">CC-BY-4.0</a> licensed by Commission for Lands (COLA) — Revolutionary Government of Zanzibar (RGoZ)</li><li>Labeled data produced &amp; processed by State University of Zanzibar (<a href="https://www.suza.ac.tz/">SUZA</a>), <a href="https://opendri.org/project/zanzibar/">World Bank OpenDRI</a>, <a href="https://werobotics.org/">WeRobotics</a></li><li>Drone imagery created by <a href="http://www.zanzibarmapping.com/">Zanzibar Mapping Initiative</a> and hosted on <a href="https://map.openaerialmap.org/#/39.40040588378906,-5.980094945523311,10/square/3001121111?_k=34xcng">OpenAerialMap</a>:</li></ul><figure><a href="https://map.openaerialmap.org/#/39.40040588378906,-5.980094945523311,10/square/3001121111?_k=34xcng"><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*-kSPTKsU5vF2c9NEKjOqNQ.png" /></a></figure><p>For simplicity of demonstration, we’ll create training and validation data from a single drone image (in cloud-optimized geoTIFF format) and its accompanying ground-truth labels of manually traced building outlines (in GeoJSON format).</p><p>We’ll work with imagery and labels from image grid znz001 which covers the northern tip of Zanzibar’s main island of Unguja. 
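The loading step described below boils down to something like this (a minimal sketch, assuming rasterio and geopandas are installed; the URLs are placeholders for the direct download links copied from the STAC Browser Assets tab):</p><pre>import rasterio<br>import geopandas as gpd<br><br># placeholder URLs - copy the real ones from the STAC Browser Assets tab<br>img_url = &#39;https://example.com/znz001.tif&#39;<br>label_url = &#39;https://example.com/znz001-labels.geojson&#39;<br><br>with rasterio.open(img_url) as src:<br>    print(src.crs, src.width, src.height)  # basic properties of the COG<br><br>labels_gdf = gpd.read_file(label_url)<br>print(len(labels_gdf), &#39;building footprint labels&#39;)</pre><p>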
Here is a <a href="https://geoml-samples.netlify.com/item/9Eiufow7wPXLqQEP1Di2J5X8kXkBLgMsCBoN37VrtRPB/2sEaEKnnyjG2mx7CnN1ESAdjYAEQjoNRxSxTjc4vPGR?si=0&amp;t=preview#15/-5.732621/39.301114">browsable preview</a> of znz001&#39;s drone imagery with its accompanying building outline labels, indexed according to the Spatio-Temporal Asset Catalog (<a href="https://github.com/radiantearth/stac-spec/">STAC</a>) <a href="https://github.com/radiantearth/stac-spec/tree/dev/extensions/label">label extension</a> and visualized in an instance of <a href="https://github.com/radiantearth/stac-browser">STAC browser</a>:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*Oh8augJZXoNHsUu_" /><figcaption>STAC Browser showing znz001 imagery and labels</figcaption></figure><p>After previewing the labeled data and imagery in the browser, we’ll import our geo-processing tools, copy the direct download URLs from the Assets tab of the browser, and test loading them in our notebook.</p><h4>Draw train and validation areas of interest (AOIs) with geojson.io</h4><p><strong>[</strong><a href="https://colab.research.google.com/github/daveluo/zanzibar-aerial-mapping/blob/master/geo_fastai_tutorial01_public_v1.ipynb#scrollTo=8JHpicpLAZOn"><strong>link to Colab section</strong></a><strong>]</strong></p><p>Since we are working with a single image, we need to delineate what sub-areas of the image and labels should be used as training versus validation data for model training.</p><p>Using <a href="http://geojson.io/">geojson.io</a>, we’ll draw ourtrn and val Areas of Interest (AOI) polygons in geojson format and add dataset:trn or dataset:val to the respective polygon properties.</p><p>The finished polygons will look something like this in geojson.io:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*iFliPRar38Vz9wTN" /></figure><p>And here is the exact GeoJSON file I created displayed in github gist:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/73852678d1af98664ec0e87978b09aca/href">https://medium.com/media/73852678d1af98664ec0e87978b09aca/href</a></iframe><p>For demonstration of later steps, I intentionally drew a more complex shape for each AOI but we could have simply drawn adjacent rectangles instead.</p><p>Or in more complex cases, we could choose to draw AOIs of smaller sub-areas that don’t encompass the entire image — for instance, if we want to create training data for specific types of environments like dense urban areas or sparsely populated rural areas only or we want to avoid using poorly labeled areas in our training data.</p><p>Drawing the AOIs as geoJSON polygons in this way gives us the flexibility to choose exactly what and where our training and validation data represents.</p><h4>Convert train and validation AOIs to slippy map tile polygons with supermercado and geopandas</h4><p><strong>[</strong><a href="https://colab.research.google.com/github/daveluo/zanzibar-aerial-mapping/blob/master/geo_fastai_tutorial01_public_v1.ipynb#scrollTo=FQJZVc7YCkkL"><strong>link to Colab section</strong></a><strong>]</strong></p><p>In this step, we’ll use <a href="https://github.com/mapbox/supermercado">supermercado</a> to generate square tile polygons representing all the <a href="https://wiki.openstreetmap.org/wiki/Slippy_map_tilenames">slippy map tiles</a> at a specified zoom level that overlap the geoJSON training and validation AOIs we created above.</p><p>For this tutorial, we’ll work with slippy map tiles of 
tile_size=256 and zoom_level=19, which yields a manageable number of tiles and satisfactory segmentation results without too much preprocessing or model training time.</p><p>You could also try setting a higher or lower zoom_level, which would generate more or fewer tiles at higher or lower resolutions, respectively.</p><p>Here is an example of different tile zoom_levels over the same area of Zanzibar (see the round, white satellite TV dish for a consistently sized visual reference):</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/853/1*06aV0V5_-uu0_mQCe13sBA.png" /></figure><p>Learn more about slippy maps <a href="https://wiki.openstreetmap.org/wiki/Slippy_Map">here</a>, <a href="https://developers.planet.com/tutorials/slippy-maps-101/">here</a>, and <a href="https://wiki.openstreetmap.org/wiki/Zoom_levels">here</a>.</p><p>Then we’ll merge our supermercado-generated slippy map tile polygons into a GeoDataFrame with <a href="http://geopandas.org/">geopandas</a>. We’ll also check for and reconcile overlapping train and validation tiles, which would otherwise throw off how we evaluate our progress with model training.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*-1JDYyTIdEXOgFdY-w062g.png" /></figure><h4>Load slippy map tile image from COG with rio-tiler and corresponding label with geopandas</h4><p><strong>[</strong><a href="https://colab.research.google.com/github/daveluo/zanzibar-aerial-mapping/blob/master/geo_fastai_tutorial01_public_v1.ipynb#scrollTo=0MuNmPfAEFPD"><strong>link to Colab section</strong></a><strong>]</strong></p><p>Now we’ll use <a href="https://github.com/cogeotiff/rio-tiler">rio-tiler</a> and the slippy map tile polygons generated by supermercado to test-load a single 256x256 pixel tile from our znz001 COG image file. We will also load the znz001 geoJSON labels into a geopandas GeoDataFrame and crop the building geometries to only those that intersect the bounds of the tile image:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/262/1*4iP3eSG5Guh3X6-gLcIByQ.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/285/1*6a3kTW9n1TFfXXQnq2FJ9Q.png" /><figcaption>Slippy map image tile and building geometries at z=19, x=319380, y=270495</figcaption></figure><p>Here is a great introduction to COGs, rio-tiler, and exciting developments in the cloud-native geospatial toolbox by <a href="https://medium.com/u/754c34eee3ad">Vincent Sarago</a> of <a href="https://developmentseed.org/">Development Seed</a>:</p><p><a href="https://medium.com/devseed/cog-talk-part-1-whats-new-941facbcd3d1">COG Talk — Part 1: What’s new?</a></p><p>We’ll then create our corresponding 3-channel RGB mask by passing these cropped geometries to solaris’s df_to_px_mask function. 
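A rough sketch of that call (a minimal example; the variable names and argument values are illustrative, not necessarily those used in the notebook):</p><pre>import solaris as sol<br><br># tile_gdf: the cropped building geometries; tile_path: the georeferenced tile image<br>mask = sol.vector.mask.df_to_px_mask(<br>    df=tile_gdf,<br>    channels=[&#39;footprint&#39;, &#39;boundary&#39;, &#39;contact&#39;],<br>    reference_im=tile_path,<br>    boundary_width=3,<br>    contact_spacing=10)<br># mask: an HxWx3 uint8 array with 255 marking positive pixels in each channel</pre><p>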
Pixel value of 255 in the generated mask:</p><ul><li>in the 1st (Red) channel represent building footprints,</li><li>in the 2nd (Green) channel represent building boundaries (visually looks yellow on the RGB mask display because the pixels overlap red and green+red=yellow),</li><li>and in the 3rd (Blue) channel represent close contact points between adjacent buildings</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*RLJjOZGNxC61M-T3qSqnGQ.png" /></figure><h4>Make and save all the image and mask tiles</h4><p><strong>[</strong><a href="https://colab.research.google.com/github/daveluo/zanzibar-aerial-mapping/blob/master/geo_fastai_tutorial01_public_v1.ipynb#scrollTo=36_uswSoVYbL"><strong>link to Colab section</strong></a><strong>]</strong></p><p>Now that we’ve successfully loaded one tile image from COG with rio-tiler and created its 3-channel RGB mask with solaris, let’s generate our full training and validation datasets. We’ll write some functions and loops to run through all of our trnand val tiles at zoom_level=19 and save them as lossless png files in the appropriate folders with a filename schema of {save_path}/{prefix}{z}_{x}_{y} so we can easily identify and geolocate what tile each file represents.</p><h3>Train u-net segmentation model with fastai &amp; pytorch</h3><p>As our deep learning framework and library of tools, we’ll use the excellent <a href="https://github.com/fastai/fastai">fastai</a> library built on top of <a href="https://pytorch.org/">PyTorch</a>. For more info:</p><ul><li>about Fast.ai, the organization: <a href="https://www.fast.ai/about/">https://www.fast.ai/about/</a></li><li>direct links to the free MOOC series: <br>- Part 1 (“Practical Deep Learning for Coders”): <a href="https://course.fast.ai/index.html">https://course.fast.ai/index.html</a><br>- Part 2 (“Deep Learning from the Foundations”): <a href="https://course.fast.ai/part2">https://course.fast.ai/part2</a></li></ul><h4>Download and install fastai, set Colab to GPU runtime if needed</h4><p><strong>[</strong><a href="https://colab.research.google.com/github/daveluo/zanzibar-aerial-mapping/blob/master/geo_fastai_tutorial01_public_v1.ipynb#scrollTo=AHcelzHYwy2F"><strong>link to Colab section</strong></a><strong>]</strong></p><p>Let’s download, install, and set up fastai v1 (currently at <a href="https://github.com/fastai/fastai/blob/master/CHANGES.md">1.0.55</a>). And if we’re not already on it, let’s reset Colab to a GPU runtime (this removes locally stored files since it switches to a new environment so you will have to re-download and untar the training dataset created in above steps):</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/632/1*3e2_aNRn9RfVachx5Cf19A.png" /><figcaption>In Menu bar — Runtime &gt; Change Runtime Type &gt; Hardware accelerator: GPU</figcaption></figure><p>Colab’s free GPUs range from a Tesla K80, T4, or T8 depending on their availability. 
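A quick way to check which GPU the runtime was assigned (a minimal sketch; in fastai v1, show_install is importable from fastai.utils.collect_env):</p><pre>!nvidia-smi  # raw report of the attached GPU<br><br>from fastai.utils.collect_env import show_install<br>show_install()  # prints ===Software=== and ===Hardware=== summaries</pre><p>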
See the ===Hardware=== section of show_install() for what GPU type and how much GPU memory is available which will affect the batch size and training time.</p><p>For all of these GPUs and mem sizes, a batch size of bs=16 at size=256 should train at &lt;2 mins/epoch without encountering out-of-memory issues but if it does comes up, lower the bs to 8.</p><h4>Set up data</h4><p><strong>[</strong><a href="https://colab.research.google.com/github/daveluo/zanzibar-aerial-mapping/blob/master/geo_fastai_tutorial01_public_v1.ipynb#scrollTo=arxx-MIOwtbA"><strong>link to Colab section</strong></a><strong>]</strong></p><p>Now we’ll set up our training dataset of tile images and masks created above to load correctly into fastai for training and validation. The code in this step tracks closely with that of fastai course’s lesson3-camvid so please refer to that <a href="https://course.fast.ai/videos/?lesson=3">lesson video</a> and <a href="https://nbviewer.jupyter.org/github/fastai/course-v3/blob/master/nbs/dl1/lesson3-camvid.ipynb">notebook</a> for more detailed and excellent explanation by Jeremy Howard about the code and fastai’s <a href="https://docs.fast.ai/data_block.html">Data Block API</a>.</p><p>The main departures from the camvid lesson notebook is the use of filename string parsing to determine which image and mask files comprise the validation data:</p><pre># define the valdation set by fn prefix<br>holdout_grids = [&#39;znz001val_&#39;]<br>valid_idx = [i for i,o in enumerate(fnames) if any(c in str(o) for c in holdout_grids)]</pre><p>And we’ll subclass SegmentationLabelList to alter the behavior of open_mask and PIL.Image underlying it in order to open the 3-channel target masks as RGB images (convert_mode=’RGB’) instead of default greyscale 1-channel images (convert_mode=’L’).</p><p>We’ll also visually confirm that the image files and channels of the respective target mask file are loaded and paired correctly with a display function show_3ch:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/906/1*N52tkMoncZIAlvDHnXw15Q.png" /></figure><h4>Define custom losses and metrics to handle 3-channel target masks</h4><p><strong>[</strong><a href="https://colab.research.google.com/github/daveluo/zanzibar-aerial-mapping/blob/master/geo_fastai_tutorial01_public_v1.ipynb#scrollTo=emLw4t8-w50t"><strong>link to Colab section</strong></a><strong>]</strong></p><p>Here we implement some new loss functions like <a href="https://arxiv.org/abs/1707.03237">Dice Loss</a> and <a href="https://arxiv.org/abs/1708.02002">Focal Loss</a> which have been shown to perform well in image segmentation tasks. Then we’ll create a MultiChComboLoss class to combine multiple loss functions and calculate them across the 3 channels with adjustable weighting.</p><p>The approach of combining a Dice or Jaccard loss to consider image-wide context with individual pixel-focused Binary Cross Entropy or Focal loss with adjustable weighing of the 3 target mask channels has been shown to consistently outperform single loss functions. 
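To make the idea concrete, here is a stripped-down sketch of such a per-channel combined loss (a minimal example, not the MultiChComboLoss implementation used in the notebook; it substitutes plain binary cross-entropy for Focal Loss for brevity):</p><pre>import torch<br>import torch.nn.functional as F<br><br>def dice_loss(logits, targets, eps=1.0):<br>    # soft Dice on sigmoid probabilities; inputs are (N, C, H, W) tensors<br>    probs = torch.sigmoid(logits)<br>    dims = (0, 2, 3)<br>    inter = (probs * targets).sum(dims)<br>    union = probs.sum(dims) + targets.sum(dims)<br>    return 1 - ((2 * inter + eps) / (union + eps)).mean()<br><br>def combo_loss(logits, targets, ch_wts=(1, 1, 1)):<br>    # weighted per-channel sum of pixel-level BCE and region-level Dice<br>    total = 0.0<br>    for c, w in enumerate(ch_wts):<br>        lc, tc = logits[:, c:c+1], targets[:, c:c+1]<br>        total += w * (F.binary_cross_entropy_with_logits(lc, tc) + dice_loss(lc, tc))<br>    return total / sum(ch_wts)</pre><p>Weighting a region-level term (Dice) against a pixel-level term (cross-entropy or Focal) per channel is the pattern the notebook follows. 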
This is well-documented by <a href="https://medium.com/u/591210a0b9ce">Nick Weir</a>’s deep dive into the recent <a href="https://spacenetchallenge.github.io/datasets/spacenet-OffNadir-summary.html">SpaceNet 4 Off-Nadir Building Detection</a> top-5 results:</p><p><a href="https://medium.com/the-downlinq/a-deep-dive-into-the-spacenet-4-winning-algorithms-8d611a5dfe25">A deep dive into the SpaceNet 4 winning algorithms</a></p><p>We’ll also adapt our model evaluation metrics (accuracy and dice score) to calculate either a mean score across all channels or for a specified individual channel.</p><h4>Set up model</h4><p><strong>[</strong><a href="https://colab.research.google.com/github/daveluo/zanzibar-aerial-mapping/blob/master/geo_fastai_tutorial01_public_v1.ipynb#scrollTo=GDrR0C98xuRj"><strong>link to Colab section</strong></a><strong>]</strong></p><p>We’ll set up fastai’s <a href="https://docs.fast.ai/vision.models.unet.html">Dynamic Unet</a> model with an ImageNet-pretrained resnet34 encoder. This architecture, inspired by the original U-net, uses by default many advanced deep learning techniques such as:</p><ul><li>One cycle learning schedule: <a href="https://sgugger.github.io/the-1cycle-policy.html">https://sgugger.github.io/the-1cycle-policy.html</a></li><li>AdamW optimizer: <a href="https://www.fast.ai/2018/07/02/adam-weight-decay/">https://www.fast.ai/2018/07/02/adam-weight-decay/</a></li><li>Pixel shuffle upsampling with <a href="https://arxiv.org/abs/1806.02658">ICNR initiation</a> from super resolution research: <a href="https://medium.com/@hirotoschwert/introduction-to-deep-super-resolution-c052d84ce8cf">https://medium.com/@hirotoschwert/introduction-to-deep-super-resolution-c052d84ce8cf</a></li><li>Optionally set leaky ReLU, blur, self attention: <a href="https://docs.fast.ai/vision.models.unet.html#DynamicUnet">https://docs.fast.ai/vision.models.unet.html#DynamicUnet</a></li></ul><p>We’ll define our MultiChComboLoss function as a balanced combination of Focal Loss and Dice Loss and set our accuracy and dice metrics:</p><pre># set up metrics<br>acc_ch0 = partial(acc_thresh_multich, one_ch=0)<br>dice_ch0 = partial(dice_multich, one_ch=0)<br>metrics = [acc_thresh_multich, dice_multich, acc_ch0, dice_ch0]</pre><pre># combo Focal + Dice loss with equal channel wts<br>learn = unet_learner(data, models.resnet34,<br>                     model_dir=&#39;../../models&#39;,<br>                     metrics=metrics, <br>                     loss_func=MultiChComboLoss(<br>                        reduction=&#39;mean&#39;,<br>                        loss_funcs=[FocalLoss(gamma=1, alpha=0.95),<br>                                    DiceLoss()], <br>                        loss_wts=[1,1],<br>                        ch_wts=[1,1,1])<br>                    )</pre><p>Also note that our metrics displayed during training shows channel-0 (building footprint channel only) accuracy and dice metrics in the right-most 2 columns while the first two accuracy and dice metrics (left-hand columns) show the mean of the respective metric across all 3 channels.</p><h4>Train model, inspect results, unfreeze &amp; train more, export for inference</h4><p><strong>[</strong><a href="https://colab.research.google.com/github/daveluo/zanzibar-aerial-mapping/blob/master/geo_fastai_tutorial01_public_v1.ipynb#scrollTo=RAeLTpwpyDfv"><strong>link to Colab section</strong></a><strong>]</strong></p><p>First, we’ll fine-tune our Unet on the decoder part only (leaving the weights for the ImageNet-pretrained resnet34 encoder frozen) 
for some epochs. Then we’ll unfreeze all the trainable weights/layers of our model and train for some more epochs.</p><p>We’ll track the valid_loss, acc_..., and dice_... metrics per epoch as training progresses to make sure they continue to improve and we’re not overfitting. And we set a SaveModelCallback which will track the channel-0 dice score, save a model checkpoint each time there’s an improvement, and reload the highest performing model checkpoint file at the end of training.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*vsKx27l_mmRfwLzPf__Feg.png" /></figure><p>We’ll also inspect our model’s results by setting learn.model.eval(), generating some batches of predictions on the validation set, calculating and reshaping the image-wise loss values, and sorting by highest loss first to see the worst performing results (as measured by the loss which may differ in surprising ways from visually gauging results).</p><p><strong>Pro-tip:</strong> display and view your results every chance you get! You’ll pick up on all kinds of interesting clues about your model’s behavior and how to make it better.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/938/1*4qsYToRH8Q-riSFtxuNWIg.png" /></figure><p>Finally, we’ll export our trained Unet segmentation model for inference purposes as a .pkl file. Learn more about exporting fastai models for inference in this tutorial:</p><p><a href="https://docs.fast.ai/tutorial.inference.html">Inference Learner | fastai</a></p><h4>Inference on new imagery</h4><p><strong>[</strong><a href="https://colab.research.google.com/github/daveluo/zanzibar-aerial-mapping/blob/master/geo_fastai_tutorial01_public_v1.ipynb#scrollTo=iTquwBlYGR_U"><strong>link to Colab section</strong></a><strong>]</strong></p><p>With our segmentation model trained and exported for inference use, we will now re-load it as an inference-only model to test on new unseen imagery. We’ll test the generalizability of our trained segmentation model on tiles from drone imagery captured over another part of Zanzibar and in other parts of the world as well as at varying zoom_levels (locations and zoom levels indicated):</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/865/1*DaS2dVfeaxZCg6cqOcHDrg.jpeg" /></figure><p>We’ll also compare our model inference time per tile on GPU versus CPU:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/916/1*kNulsQQdDJAD5xN9Q8qa_A.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/914/1*ag6ERcdl-K1-Dj6ddyrBMA.png" /><figcaption>CPU vs GPU inference time per tile</figcaption></figure><h3>Post-processing</h3><h4>Predict on a tile, threshold, polygonize, and georegister</h4><p><strong>[</strong><a href="https://colab.research.google.com/github/daveluo/zanzibar-aerial-mapping/blob/master/geo_fastai_tutorial01_public_v1.ipynb#scrollTo=4vfnF54gbFlH"><strong>link to Colab section</strong></a><strong>]</strong></p><p>For good evaluation of model performance against ground truth, we’ll use another set of labeled data that the model was not trained on. We’ll get this from the larger Zanzibar dataset. 
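The scoring step at the end of this section boils down to something like the following (a minimal sketch using solaris&#39;s Evaluator; the file names are hypothetical):</p><pre>import solaris as sol<br><br># hypothetical GeoJSON files covering the same tile<br>evaluator = sol.eval.base.Evaluator(&#39;znz029_tile_ground_truth.geojson&#39;)<br>evaluator.load_proposal(&#39;znz029_tile_predictions.geojson&#39;, conf_field_list=[])<br>print(evaluator.eval_iou(calculate_class_scores=False))  # per-building TP/FP/FN, precision, recall, F1</pre><p>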
Preview the imagery and ground truth labels for znz029 in the STAC browser <a href="https://geoml-samples.netlify.com/item/9Eiufow7wPXLqQEP1Di2J5X8kXkBLgMsCBoN37VrtRPB/2sEaEKnnyjG2mx7CnN1ESAdjYAEQjoNRxT2vgQRC9oB?si=0&amp;t=preview#14/-5.865178/39.348986">here</a>:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*fGsRIu-2ExIWXzc0" /></figure><p>For demonstration, we’ll use this particular <a href="https://tiles.openaerialmap.org/5b1009f22b6a08001185f24a/0/5b1009f22b6a08001185f24b/19/319454/270706.png">tile</a> at z=19, x=319454, y=270706<strong> </strong>from znz029:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/256/0*8zKvrwb9FkU9X-pu.png" /></figure><p>Using solaris and geopandas, we’ll convert our model’s prediction as a 3-channel pixel raster output into a GeoJSON file by:</p><ol><li>thresholding and combining the 3-channels of pixel values in our raw prediction output into a 1 channel binary pixel mask</li><li>polygonizing this binary pixel mask into shape vectors representing the predicted footprint of every building</li><li>georegistering the x, y display coordinates of these vectorized building shapes into longitude, latitude coordinates</li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*PrR6S4QXmQvbTURleE371w.png" /></figure><h4>Evaluate prediction against ground truth</h4><p><strong>[</strong><a href="https://colab.research.google.com/github/daveluo/zanzibar-aerial-mapping/blob/master/geo_fastai_tutorial01_public_v1.ipynb#scrollTo=sndI40Vh8sIy"><strong>link to Colab section</strong></a><strong>]</strong></p><p>Finally with georegistered building predictions as a GeoJSON file, we can evaluate it against the ground truth GeoJSON file for the same tile.</p><p>We’ll clip the ground truth labels to the bounds of this particular tile and use solaris’s Evaluator to calculate the precision, recall, and F1 score. We will also visualize our predicted buildings (in red) against the ground truth buildings (in blue) in this particular tile:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*QFTLYTsdLMEi_EdaIOwwEQ.png" /></figure><p>For more information about these common evaluation metrics for models applied to overhead imagery, see the following articles and more by the <a href="https://medium.com/the-downlinq">SpaceNet team</a>:</p><ul><li><a href="https://medium.com/the-downlinq/the-spacenet-metric-612183cc2ddb">The SpaceNet Metric</a></li><li><a href="https://medium.com/the-downlinq/the-good-and-the-bad-in-the-spacenet-off-nadir-building-footprint-extraction-challenge-4c3a96ee9c72">The good and the bad in the SpaceNet Off-Nadir Building Footprint Extraction Challenge</a></li></ul><h3>Ideas to Try for Performance Gains</h3><p>Congratulations, you did it!</p><p>You’ve completed the tutorial and now know how to do everything from producing training data to creating a deep learning model for segmentation to postprocessing and evaluating your model’s performance.</p><p>To flex your newfound knowledge and make your model perform potentially <strong>much better</strong>, try implementing some or all these ideas:</p><ul><li>Create and use more training data: there are 13 grids’ worth of training data for Zanzibar released as part of the <a href="https://docs.google.com/spreadsheets/d/1kHZo2KA0-VtCCcC5tL4N0SpyoxnvH7mLbybZIHZGTfE/edit#gid=0">Open AI Tanzania Building Footprint Segmentation Challenge dataset</a>.</li><li>Change the zoom_level of your training/validation tiles. 
Better yet, try using tiles across multiple zooms (i.e. z21, z20, z19, z18). Note that with multiple zoom levels over the same imagery, you should be extra careful of overlapping tiles across those different zoom levels. <em>← test your understanding of slippy map tiles by checking that you understand what I mean here but feel free to message me for the answer!</em></li><li>Change the Unet’s encoder to a bigger or different architecture (i.e. resnet50, resnet101, densenet).</li><li>Change the combinations, weighting, and hyperparameters of the loss functions. Or implement completely new loss functions like <a href="https://github.com/bermanmaxim/LovaszSoftmax">Lovasz Loss</a>.</li><li>Try different data augmentation combinations and techniques.</li><li>Train for more epochs and with different learning rate schedules. Try <a href="https://docs.fast.ai/callbacks.fp16.html">mixed-precision</a> for faster model training.</li><li>Your idea here.</li></ul><p>I look forward to seeing what you discover!</p><h3>Coming Up</h3><p>If you liked this tutorial, look forward to next ones which will potentially cover topics like:</p><ul><li>classifying building completeness (foundation, incomplete, complete)</li><li>inference on multiple tiles and much larger images</li><li>working with messy, sparse, imperfect training data</li><li>model deployment and inference at scale</li><li>examining data/model biases, considerations of fairness, accountability, transparency, and ethics</li></ul><p>Curious about more geospatial deep learning topics? Did I miss something? Share your questions and thoughts in the comments so I can add them into this and next tutorials.</p><p>Good luck and happy deep learning!</p><h3>Acknowledgments and Special Thanks to</h3><ul><li><a href="https://www.gfdrr.org/en">World Bank GFDRR</a>’s Open Data for Resilience Initiative (<a href="https://opendri.org/">OpenDRI</a>) for consultation projects which have inspired &amp; informed.</li><li><a href="http://www.zanzibarmapping.com/">Zanzibar Mapping Initiative</a>, <a href="https://openaerialmap.org/">OpenAerialMap</a>, State University of Zanzibar (<a href="https://www.suza.ac.tz/">SUZA</a>), Govt of Zanzibar’s Commission for Lands, &amp; <a href="https://werobotics.org/">WeRobotics</a> for the <a href="https://competitions.codalab.org/competitions/20100">2018 Open AI Tanzania Building Footprint Segmentation Challenge</a>.</li><li><a href="https://www.fast.ai/">Fast.ai’s team</a>, <a href="https://github.com/fastai/fastai/graphs/contributors">contributors</a>, &amp; <a href="https://forums.fast.ai/">community</a> for both “making neural nets uncool again” and pushing its cutting edge (very cool).</li><li><a href="https://spacenet.ai/">SpaceNet</a> &amp; <a href="http://www.cosmiqworks.org/">Cosmiq Works</a> for the open challenges, datasets, knowledge-sharing, <a href="https://github.com/CosmiQ/solaris">Solaris geoML toolkit</a>, &amp; more that advance geospatial machine learning.</li><li>Contributors to <a href="https://www.cogeo.org/">COG</a>, <a href="https://stacspec.org/">STAC</a>, and more initiatives advancing the <a href="https://medium.com/planet-stories/tagged/cloud-native-geospatial">cloud native geospatial</a> ecosystem.</li><li><a href="https://en.wikipedia.org/wiki/Free_and_open-source_software">Free &amp; open source</a> creators &amp; collaborators everywhere for the invaluable public goods you provide.</li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=ae249612c321" 
width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[COG in the Machine: Towards Cloud-Native Geospatial Deep Learning]]></title>
            <link>https://medium.com/@anthropoco/cogs-in-the-machine-towards-cloud-native-geospatial-deep-learning-9bd2f6d843a7?source=rss-1ab5b7b60071------2</link>
            <guid isPermaLink="false">https://medium.com/p/9bd2f6d843a7</guid>
            <category><![CDATA[gis]]></category>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[humanitarian]]></category>
            <category><![CDATA[deep-learning]]></category>
            <category><![CDATA[cloud-computing]]></category>
            <dc:creator><![CDATA[Dave Luo]]></dc:creator>
            <pubDate>Wed, 25 Jul 2018 15:26:32 GMT</pubDate>
            <atom:updated>2018-07-26T00:51:44.362Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*F9-qC7Pz1rvzej_isKcpdQ.jpeg" /><figcaption>Rapidly mapping the extent of Rohingya refugee camps in Bangladesh with drones, cloud optimized geoTIFFs, &amp; deep learning</figcaption></figure><h3>Imagine</h3><p>You open Google Maps and enter “coffee” to find shops nearby. The app proceeds to download a map of your entire city at the highest detail. You wait minutes and 100s of MBs download to your phone before 4 or 5 location pins drop closest to you.</p><p>If the app did this every time you’re in a new area or search for something different, you would probably stop using it. This scenario is extreme, even absurd, yet we often do something similar with geospatial data in deep learning.</p><p>We download full-sized satellite or aerial imagery (at 100s of MBs to GBs per image or per band), crop, resize, &amp; tile them to the areas, sizes, &amp; formats we need, and run our model training or inference on the end product while holding a relatively large portion of the source data unused.</p><p>If we need the highest resolution &amp; full coverage of all images to train our models, this works. But what if we want to evaluate models on small select subareas of new images or analyze very large areas at faster speed &amp; less detail than what the original file provides?</p><p>Could we access just the relevant areas at only the resolutions we need? Until recently, we had little choice but to download the entirety of every image. Now, thanks to Cloud Optimized GeoTIFFs (COG), we have a better way to run deep learning models on geospatial data more efficiently at any size &amp; scale.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*o3JNXYJ56i_eqXdL-ilsgg.png" /></figure><h3>A Brief Intro to COG</h3><p>In the geospatial world, many exciting developments are moving us towards more interoperable, cloud-native architectures for data processing &amp; analysis:</p><ul><li><a href="http://www.cogeo.org/">Cloud-Optimized GeoTIFF (COG)</a></li><li><a href="https://medium.com/radiant-earth-insights/announcing-the-spatiotemporal-asset-catalog-stac-specification-1db58820b9cf">Spatial Temporal Asset Catalog (STAC)</a></li><li><a href="https://medium.com/planet-stories/analysis-ready-data-defined-5694f6f48815">Analysis Ready Data (ARD)</a></li></ul><p>The whys, whats, and hows of these related initiatives have been well explained by <a href="https://medium.com/u/e27f6e2373d3">Chris Holmes</a> in his 3-part “<a href="https://medium.com/planet-stories/cloud-native-geoprocessing-part-1-the-basics-9670280772c8">Cloud</a>-<a href="https://medium.com/planet-stories/analysis-ready-data-defined-5694f6f48815">Native</a> <a href="https://medium.com/planet-stories/towards-on-demand-analysis-ready-data-f94d6eb226fc">Geoprocessing</a>” series so I won’t get into that here. I will touch on these as they pertain to geospatial deep learning in subsequent posts.</p><p>Today, though, is all about the COG. 
As summed up in an <a href="https://www.slideshare.net/EugeneCheipesh/cloud-optimized-geottiffs-enabling-efficient-cloud-workflows?ref=https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.slideshare.net%2Fslideshow%2Fembed_code%2Fkey%2Fm1GPsvClzbiAuU&amp;url=https%3A%2F%2Fwww.slideshare.net%2FEugeneCheipesh%2Fcloud-optimized-geottiffs-enabling-efficient-cloud-workflows&amp;image=https%3A%2F%2Fcdn.slidesharecdn.com%2Fss_thumbnails%2Ffoss4gna2018-cloudoptimizedgeotiffs-180518155909-thumbnail-4.jpg%3Fcb%3D1526659294&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=slideshare">excellent presentation</a> on efficient cloud workflows by <a href="https://twitter.com/echeipesh?lang=en">@echeipesh</a> from <a href="https://geotrellis.io/">GeoTrellis</a>, what we want is simple:</p><blockquote>“Hey, let’s not download the whole file every time.”</blockquote><p>COGs deliver: accessing huge geospatial files becomes a speedy &amp; selective data-streaming experience using the same <a href="https://en.wikipedia.org/wiki/Byte_serving">web tech</a> that enables videos to start playing before the whole file is downloaded. Here are some <a href="https://medium.com/radiant-earth-insights/cloud-optimized-geotiff-advances-6b01750eb5ac">recent advances &amp; implementations</a> showing what’s newly possible:</p><figure><a href="http://www.cogeo.org/map/"><img alt="" src="https://cdn-images-1.medium.com/proxy/1*5ows-T6hA_SStP1X_0N1Cw.gif" /></a><figcaption><a href="http://www.cogeo.org/map/">COG Map</a> viewer [<a href="https://medium.com/radiant-earth-insights/cog-map-and-tiles-rdnt-io-ad0745388a14">via Chris Holmes</a>]</figcaption></figure><h3>COG for Deep Learning</h3><p>The functionality we’ll focus on for deep learning is how COGs use <a href="http://www.cogeo.org/in-depth.html">tiling &amp; overviews</a>:</p><blockquote><strong>Tiling</strong> creates a number of internal ‘tiles’ inside the actual image, instead of using simple ‘stripes’ of data. With a stripe of data then the whole file needs to be read to get the key piece. With tiles much quicker access to a certain area is possible, so that just the portion of the file that needs to be read is accessed.</blockquote><blockquote><strong>Overviews</strong> create downsampled versions of the same image. This means it’s ‘zoomed out’ from the original image — it has much less detail (1 pixel where the original might have 100 or 1000 pixels), but is also much smaller. Often a single GeoTIFF will have many overviews, to match different zoom levels. These add size to the overall file, but are able to be served much faster, since the renderer just has to return the values in the overview instead of figuring out how to represent 1000 different pixels as one.</blockquote><p>The organization of these tiles &amp; overviews delivered by a COG tile server (like <a href="https://github.com/radiantearth/tiles.rdnt.io">tiles.rdnt.io</a> thanks to <a href="https://medium.com/u/c1c2084fd8a">Radiant.Earth</a> &amp; <a href="https://medium.com/u/b31826d62b99">Seth Fitzsimmons</a>) generally follows the <a href="https://wiki.openstreetmap.org/wiki/Slippy_map_tilenames">slippy map tile naming convention</a>:</p><ul><li>Tiles are 256 × 256 px or 512 × 512 px PNG or JPG files</li><li>Filename (URL) format is /{zoom}/{x}/{y}.png</li><li>Each zoom level is a directory {zoom}, each column is a subdirectory {x}, and each tile in that column is a file {y}</li></ul>
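<p>To make that concrete, here’s a small sketch that builds a tile URL from {zoom}/{x}/{y}, fetches the PNG, and computes the tile’s geographic bounds with the standard slippy map math. The URL template is a placeholder (a real COG tile server such as tiles.rdnt.io takes the COG’s location as a parameter), so treat it as illustrative:</p><pre>import io
import math
import urllib.request

import numpy as np
from PIL import Image

# Placeholder template: swap in your tile server's actual URL pattern.
TILE_URL = 'https://your-tile-server.example/{z}/{x}/{y}.png'

def fetch_tile(z, x, y):
    """Download one tile and return it as a numpy array ready for a model."""
    with urllib.request.urlopen(TILE_URL.format(z=z, x=x, y=y)) as resp:
        return np.array(Image.open(io.BytesIO(resp.read())))

def tile_bounds(z, x, y):
    """West, south, east, north bounds (lon/lat in degrees) of a slippy map tile."""
    n = 2 ** z
    def lon(col):
        return col / n * 360.0 - 180.0
    def lat(row):
        return math.degrees(math.atan(math.sinh(math.pi * (1 - 2 * row / n))))
    return lon(x), lat(y + 1), lon(x + 1), lat(y)</pre><p>Every tile that comes back is a small, fixed-size image, and tile_bounds gives the georeferencing we’ll need to write model outputs back out as a map.</p>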
<p>Hmm, 512 px &amp; 256 px square images. Starting to sound familiar?</p><p>COG tiles &amp; overviews hand us geospatial data on a platter: consistently georeferenced &amp; internally organized, optimized for fast access &amp; visibility at every zoom level, and formatted in a familiar way for deep learning models.</p><p>To test drive the potential, I created an <strong>Input COG → Model Inference → Output COG</strong> workflow that:</p><ol><li>gets overview tiles from any COG at any zoom level (or multiple levels)</li><li>runs inference on each tile and reassembles the results</li><li>saves the output as a properly georeferenced and validated COG file</li></ol>
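<p>Before stepping through the example, here’s roughly what the heart of that loop could look like in Python. The fetch_tile and tile_bounds helpers are the ones sketched above, while tile_index (the x/y tiles covering the image at zoom z) and model.predict are stand-ins for whatever your own pipeline provides; the output is written as a plain tiled GeoTIFF with overviews rather than a fully validated COG, so treat this as an outline, not the exact implementation:</p><pre>import numpy as np
import rasterio
from rasterio.enums import Resampling
from rasterio.transform import from_bounds

def run_zoom_level(z, tile_index, model, tile_size=256):
    """Fetch every (x, y) tile at zoom z, predict, mosaic, and save a GeoTIFF."""
    xs = sorted({x for x, y in tile_index})
    ys = sorted({y for x, y in tile_index})
    # Assumes tile_index covers a full rectangle of tiles and that
    # model.predict returns a (tile_size, tile_size) uint8 array.
    mosaic = np.zeros((len(ys) * tile_size, len(xs) * tile_size), dtype=np.uint8)
    for x, y in tile_index:
        pred = model.predict(fetch_tile(z, x, y))
        r, c = ys.index(y) * tile_size, xs.index(x) * tile_size
        mosaic[r:r + tile_size, c:c + tile_size] = pred

    # Geographic bounds of the mosaic from its corner tiles.
    west, _, _, north = tile_bounds(z, xs[0], ys[0])
    _, south, east, _ = tile_bounds(z, xs[-1], ys[-1])
    transform = from_bounds(west, south, east, north,
                            mosaic.shape[1], mosaic.shape[0])
    out_path = 'output_z{}.tif'.format(z)
    profile = dict(driver='GTiff', height=mosaic.shape[0], width=mosaic.shape[1],
                   count=1, dtype='uint8', crs='EPSG:4326', transform=transform,
                   tiled=True, blockxsize=256, blockysize=256, compress='deflate')
    with rasterio.open(out_path, 'w', **profile) as dst:
        dst.write(mosaic, 1)
    # Add downsampled overviews so the output can be streamed like any other COG.
    with rasterio.open(out_path, 'r+') as dst:
        dst.build_overviews([2, 4, 8, 16], Resampling.average)
    return out_path</pre>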
<h3>An Example, Step-by-Step</h3><p>Here’s that workflow in action. We’ll use this 7-cm resolution <a href="https://map.openaerialmap.org/#/92.16233253479002,21.19499540852437,16/square/132200013002/5a260dfcbac48e5b1c528bb3?_k=30mn2q">drone image</a> taken by the UN’s International Org. for Migration of the <a href="https://data.humdata.org/dataset/outline-of-camps-sites-of-rohingya-refugees-in-cox-s-bazar-bangladesh">Rohingya refugee camps near Cox’s Bazar, Bangladesh</a> hosted as a COG on <a href="http://openaerialmap.org">OpenAerialMap</a>:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/408/0*wY6wZqrFy5Ye0Kjm.png" /></figure><h4>1. Get tiles at zoom levels 17, 18, &amp; 19 served by <a href="https://github.com/radiantearth/tiles.rdnt.io">tiles.rdnt.io</a></h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/856/1*yvKTeEKurL2CWTHiNg_ixA.png" /><figcaption>Single example tiles (from top-left of original image) at 3 zoom levels</figcaption></figure><h4>2. Run inference per tile with a model trained offline to find built-up areas* (binary semantic segmentation)</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/865/1*jHNBmf9tPCdZGm1XtV9YlQ.png" /><figcaption>Model inference results on single example tiles at 3 zoom levels</figcaption></figure><p>* model used to demonstrate the workflow; it was not fitted to this data, so results may appear suboptimal. Model training &amp; inference will be covered in a later post.</p><h4><strong>3. Reassemble tiled results into full output map at each zoom level</strong></h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/865/1*fFgCtEJcFDJKe0zQFtKzdg.png" /><figcaption>Tiles reassembled to full model outputs at each of 3 zoom levels</figcaption></figure><h4>4. Ensemble into final output map (with new color range)</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1000/1*um4eLrTi4wsfZQUsItKQWA.png" /><figcaption>Original image next to final output map (average of 3 zoom levels) showing built-up areas in red</figcaption></figure><h4>5. Calculate geo bounds &amp; save as georeferenced COG</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*NMmgdRq5zcLU3ShpBe8cRw.png" /><figcaption>Output COG properly georeferenced &amp; displayed on basemap [<a href="http://tiles.rdnt.io/preview?url=https://www.dropbox.com/s/m58qdtbx0yn5q1q/coxbazar2_oam_z19_cog.tif?dl=1#18/21.195935363182905/92.1591567993164">preview COG</a>]</figcaption></figure><h3>How Fast Is It?</h3><p>Running the example workflow on a remote GPU instance (<a href="http://www.paperspace.com">Paperspace</a>’s P5000 machine) took ~30 seconds:</p><pre>starting inference for zoom level: 17</pre><pre>100%|██████████| 15/15 [00:01&lt;00:00,  9.36it/s]</pre><pre>starting inference for zoom level: 18</pre><pre>100%|██████████| 50/50 [00:10&lt;00:00,  4.78it/s]</pre><pre>starting inference for zoom level: 19</pre><pre>100%|██████████| 152/152 [00:13&lt;00:00, 11.04it/s]</pre><pre>CPU times: user 14.2 s, sys: 968 ms, total: 15.1 s<br><strong>Wall time: 25.9 s</strong></pre><p>I was also impressed by the reasonable speed using CPU only: ~12 minutes on my run-of-the-mill 2015 Macbook Pro, even though the code is not optimized for performance:</p><pre>CPU times: user 14min 51s, sys: 1min 26s, total: 16min 17s<br><strong>Wall time: 11min 43s</strong></pre><p>It’s faster because reading COG overview tiles at just the resolutions (zoom levels) we need gets us the relevant information directly, avoiding the unnecessary &amp; heavy data management typical of preparing geospatial data for deep learning.</p><p>In the first example, the original COG is only 30MB, so skipping the full download wouldn’t make a very perceptible difference. Using a much larger example, <a href="https://map.openaerialmap.org/#/92.16232180595398,21.19499540852437,16/square/132200013002/5a89a2915a9ef7cb5d6de668?_k=qz1zfb">this 900MB source image</a> of a bigger area was processed in 90 seconds on GPU. 
Working traditionally, I would still be downloading the file (at <a href="http://www.speedtest.net/global-index">typical broadband speeds</a>):</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ZBMDm7uQUVRtLh6DH4VZsw.png" /><figcaption>Mapping a larger area of Rohingya refugee camps in Bangladesh from drone imagery [<a href="http://tiles.rdnt.io/preview?url=https://www.dropbox.com/s/pgaj0kq7dib7rq4/coxbazar_oam1_z17_2_cog.tif?dl=1#15/21.1908/92.1544">preview COG</a>]</figcaption></figure><p>A zoomed-in &amp; overlaid view of the same output as above shows that actionable details (where built-up areas are) are preserved up close:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*PkbRaAm-MvmsTCIaCHCGlw.jpeg" /><figcaption>Zoomed-in, overlaid model view of Rohingya refugee camp built-up areas (red) vs not (blue)</figcaption></figure><h3>Oh, the Possibilities</h3><p>COGs enable us to run our deep learning models more rapidly, lightly, &amp; simply on geospatial data at any size or scale.</p><p>This data includes satellite &amp; aerial imagery from <a href="https://registry.opendata.aws/landsat-8/">Landsat on AWS</a>, <a href="https://medium.com/planet-stories/cng-part-4-open-aerial-maps-cloud-native-geospatial-architecture-a7f784cf7c2f">OpenAerialMap</a>, &amp; <a href="https://medium.com/planet-stories/a-handy-introduction-to-cloud-optimized-geotiffs-1f2c9e716ec3">Planet</a>, with more coming as providers increasingly adopt COG:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*z0sGDFHLe-OYZilj-xb2YA.png" /><figcaption>COG-based inference on Planet SkySat satellite image of Freeport, Texas after Hurricane Harvey [<a href="http://tiles.rdnt.io/preview?url=https://www.dropbox.com/s/xczl074g7cqsj4d/harvey_planet_z14_2_cog.tif?dl=1#13/29.0039/-95.3799">preview COG</a>]</figcaption></figure><p>The advance of COG for deep learning means that we could:</p><ul><li>work selectively with any-sized subareas and as many zoom levels of source imagery as we need to get useful results.</li><li>select single spectral bands or mix-and-match any band combination with one change in the tile server parameter (e.g. rgb=1,1,1).</li><li>create new models by serving COG tiles directly into our training pipelines with labels generated on the fly, perhaps via geospatial machine learning data prep tools such as <a href="https://github.com/mapbox/robosat">Robosat</a> or <a href="https://github.com/developmentseed/label-maker">Label Maker</a>.</li><li>test many models on imagery of one area, or one model on many areas and a wide gamut of visual conditions to evaluate their generalizability to real-world data.</li><li>deploy models for any COG data provider with less cost &amp; infrastructure (cloud-based, CPU only), making it more feasible to AI-enhance many localized humanitarian, environmental, &amp; community-based geospatial projects like those being carried out by <a href="https://werobotics.org/">WeRobotics</a>’ <a href="https://werobotics.org/flying-labs/">Flying Labs</a>.</li></ul><p>In upcoming posts, we’ll cover this workflow in technical detail (with code examples), experiment with these possibilities, and encourage more new ideas for cloud-native geospatial deep learning.</p><p>I look forward to seeing &amp; sharing what you come up with!</p><p><em>Like what you’re reading? Want to protect our health &amp; prepare our communities for climate change? 
If you’re looking to do your best work in geospatial analysis &amp; deep learning to tackle our hardest systems challenges in environmental health &amp; justice, </em><a href="https://www.anthropo.co/"><em>Anthropocene Labs</em></a><em> is looking for you! Get in touch with dave(at)anthropo dot co</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=9bd2f6d843a7" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[281,893 Acres]]></title>
            <link>https://medium.com/delta-anthropoco/281-893-acres-9eab00fefc9?source=rss-1ab5b7b60071------2</link>
            <guid isPermaLink="false">https://medium.com/p/9eab00fefc9</guid>
            <category><![CDATA[earth]]></category>
            <category><![CDATA[gis]]></category>
            <category><![CDATA[fire]]></category>
            <category><![CDATA[california]]></category>
            <category><![CDATA[satellite-imagery]]></category>
            <dc:creator><![CDATA[Dave Luo]]></dc:creator>
            <pubDate>Tue, 13 Feb 2018 20:15:25 GMT</pubDate>
            <atom:updated>2018-02-13T23:06:16.491Z</atom:updated>
            <cc:license>https://creativecommons.org/licenses/by-nc-sa/4.0/</cc:license>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/800/1*K4X8dz1PLnpKtxMwhaBR3A.gif" /><figcaption>California’s largest wildfire on record (the Thomas Fire in southern CA), 10/2017 to 1/2018. Satellite imagery in false-color composite (SWIR2-NIR-Blue) to visualize active fires and burn scar (red) on vegetation (green) and buildings (grey). Landsat-8 data courtesy of US Geological Survey.</figcaption></figure><blockquote><em>The fire burned for more than a month, though its spread was contained several weeks ago. Heavy rains earlier this week, which caused land burned by the fire to create mudflows that buried neighborhoods, helped fully extinguish the blaze. In the end, the fire burned 281,893 acres.</em></blockquote><blockquote><em>The fire eclipsed the 2003 Cedar fire in San Diego County, which burned 273,246 acres.</em></blockquote><blockquote><em>The milestone reaffirmed 2017 as the most destructive fire season in the state. In October, a series of fires in wine country burned more than 10,000 homes and killed more than 40 people.</em></blockquote><blockquote><em>Those blazes, along with the Thomas fire, were fueled by dry conditions and intense winds.</em></blockquote><p><a href="http://www.latimes.com/local/lanow/la-me-thomas-fire-contained-20180112-story.html">http://www.latimes.com/local/lanow/la-me-thomas-fire-contained-20180112-story.html</a></p><iframe src="https://cdn.embedly.com/widgets/media.html?url=https%3A%2F%2Fcdn.knightlab.com%2Flibs%2Fjuxtapose%2Flatest%2Fembed%2Findex.html%3Fuid%3Dd95940d6-10f7-11e8-b263-0edaf8f81e27&amp;src=https%3A%2F%2Fcdn.knightlab.com%2Flibs%2Fjuxtapose%2Flatest%2Fembed%2Findex.html%3Fuid%3Dd95940d6-10f7-11e8-b263-0edaf8f81e27&amp;type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=knightlab" width="700" height="500" frameborder="0" scrolling="no"><a href="https://medium.com/media/5f9ebab774c9a35268c6961af2d0d9de/href">https://medium.com/media/5f9ebab774c9a35268c6961af2d0d9de/href</a></iframe><figure><a href="https://en.wikipedia.org/wiki/Thomas_Fire#/media/File:2017_12_11-08.57.46.111-CST.jpg"><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*xZ0-YJGXgAinLVSvL4qzcg.jpeg" /></a><figcaption>Thomas Fire progression map as of Dec 25 2017 produced by <a href="http://calfire.ca.gov/">California Dept. 
of Forestry and Fire Protection</a> [via <a href="https://en.wikipedia.org/wiki/Thomas_Fire#/media/File:2017_12_11-08.57.46.111-CST.jpg">Wikipedia</a>]</figcaption></figure><h4>License, Sources, and Technical Notes:</h4><ul><li>Animated GIF and Juxtapose images licensed under a <a href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a></li><li>Landsat-8 bands combination: 7 (shortwave infrared 2), 5 (near infrared), 2 (blue)</li><li>Source GeoTiff files from <a href="https://cloud.google.com/storage/docs/public-datasets/landsat">Google Cloud Public Datasets</a>: <br> ‘LC08_L1TP_042036_20171022_20171107_01_T1’,<br> ‘LC08_L1TP_042036_20171209_20171223_01_T1’,<br> ‘LC08_L1TP_042036_20171225_20180103_01_T1’,<br> ‘LC08_L1TP_042036_20180126_20180207_01_T1’</li><li>Made in Python with <a href="http://jupyter.org/">Jupyter notebook</a>s, <a href="https://github.com/sat-utils">sat-utils</a>, <a href="http://geojson.io/">geojson.io</a>, <a href="https://github.com/mapbox/rasterio">rasterio</a>, <a href="https://ezgif.com/">ezgif.com</a>, <a href="https://juxtapose.knightlab.com/">juxtaposeJS</a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=9eab00fefc9" width="1" height="1" alt=""><hr><p><a href="https://medium.com/delta-anthropoco/281-893-acres-9eab00fefc9">281,893 Acres</a> was originally published in <a href="https://medium.com/delta-anthropoco">Delta Anthropoco</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Then we had water, all the time.]]></title>
            <link>https://medium.com/delta-anthropoco/then-we-had-water-all-the-time-a094a3445484?source=rss-1ab5b7b60071------2</link>
            <guid isPermaLink="false">https://medium.com/p/a094a3445484</guid>
            <category><![CDATA[satellite-imagery]]></category>
            <category><![CDATA[environment]]></category>
            <category><![CDATA[climate-change]]></category>
            <category><![CDATA[louisiana]]></category>
            <category><![CDATA[refugees]]></category>
            <dc:creator><![CDATA[Dave Luo]]></dc:creator>
            <pubDate>Fri, 09 Feb 2018 19:13:45 GMT</pubDate>
            <atom:updated>2018-02-09T19:15:51.224Z</atom:updated>
            <cc:license>https://creativecommons.org/licenses/by-nc-sa/4.0/</cc:license>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*hOw-M_AiN2CeSN4YvNEDyg.gif" /><figcaption><a href="https://www.google.com/maps/place/Isle+de+Jean+Charles/@29.2499625,-90.5992216,10.86z/data=!4m5!3m4!1s0x86205634bfecfa91:0xe5d74485a48d3e49!8m2!3d29.3982338!4d-90.4891055">Isle de Jean Charles</a> &amp; its waters, time-lapse 2013 to 2018. Rotated clockwise (north points to the right) &amp; in false color (near infrared, blue, coastal blue) to better contrast vegetation and land (red, gray/white) from water (green, blue, black). Landsat-8 data courtesy of US Geological Survey.</figcaption></figure><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fplayer.vimeo.com%2Fvideo%2F107174876&amp;dntp=1&amp;url=https%3A%2F%2Fplayer.vimeo.com%2Fvideo%2F107174876&amp;image=http%3A%2F%2Fi.vimeocdn.com%2Fvideo%2F490506765_1280.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=vimeo" width="1920" height="1080" frameborder="0" scrolling="no"><a href="https://medium.com/media/827ab1c631bd4bf64e547d89e72d5c61/href">https://medium.com/media/827ab1c631bd4bf64e547d89e72d5c61/href</a></iframe><blockquote><em>Levees stopped the natural flow of fresh water and sediment that reinforced the fragile marshes. Oil and gas companies dredged through the mud to lay pipelines and build canals, carving paths for saltwater to intrude and kill the freshwater vegetation that held the land together. The unstoppable, glacial momentum of sea-level rise has only made things worse. Today, almost nothing remains of what was very recently a vast expanse of bountiful marshes and swampland.</em></blockquote><blockquote><em>Isle de Jean Charles, home to the Biloxi-Chitimacha-Choctaw band of Native Americans, has lost 98 percent of its land since 1955. Its 99 remaining residents have been dubbed </em><a href="http://www.nytimes.com/2016/05/03/us/resettling-the-first-american-climate-refugees.html"><em>“America’s first climate refugees.</em></a><em>”</em></blockquote><p>…</p><blockquote><em>The residents of Isle de Jean Charles won’t be alone in their exodus. There will be up to</em><a href="https://www.elementascience.org/articles/10.1525/elementa.234/"><em> 13 million climate refugees</em></a><em> in the United States by the end of this century. Even if humanity were to stop all carbon emissions today, at least 414 towns, villages, and cities across the country would face relocation, according to </em><a href="http://www.pnas.org/content/112/44/13508.full.pdf"><em>a study</em></a><em> published in the </em>Proceedings of the National Academy of Sciences<em>. If the West Antarctic Ice Sheet collapses, researchers predict that the number will exceed 1,000.</em></blockquote><blockquote><em>And this isn’t a distant threat. At least 17 communities, most of which are Native American or Native Alaskan, are already in the process of climate-related relocations. Yet despite its inevitability, there is no official framework to handle this displacement. There is no U.S. 
government agency, process, or funding dedicated to confronting this impending humanitarian crisis.</em></blockquote><blockquote><em>Only one climate-related relocation is currently funded and administered by the government: the Isle de Jean Charles Resettlement Project.</em></blockquote><p>- “How to Save a Town From Rising Waters” [<a href="https://twitter.com/misaacstein">Michael Isaac Stein</a> at <a href="https://www.citylab.com/environment/2018/01/how-to-save-a-town-from-rising-waters/547646/">CityLab</a>]</p><p>Learn more at:</p><ul><li><a href="https://vimeo.com/ondemand/cantstopthewater">https://vimeo.com/ondemand/cantstopthewater</a></li><li><a href="http://www.coastalresettlement.org/">http://www.coastalresettlement.org</a>/</li><li><a href="https://www.motherjones.com/environment/2017/10/climate-refugees-trump-hud/">https://www.motherjones.com/environment/2017/10/climate-refugees-trump-hud/</a></li></ul><h4>License, Sources, and Technical Notes:</h4><ul><li>Animated GIF at top licensed under a <a href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a></li><li>Landsat-8 bands combination: 5 (near infrared), 2 (blue), 1 (coastal blue)</li><li>Source GeoTiff files from <a href="https://cloud.google.com/storage/docs/public-datasets/landsat">Google Cloud Public Datasets</a>: <br> ‘LC08_L1TP_022040_20131218_20170307_01_T1’,<br> ‘LC08_L1TP_022040_20140409_20170307_01_T1’,<br> ‘LC08_L1TP_022040_20141119_20170302_01_T1’,<br> ‘LC08_L1TP_022040_20150207_20170301_01_T1’,<br> ‘LC08_L1TP_022040_20150919_20170225_01_T1’,<br> ‘LC08_L1TP_022040_20160210_20170224_01_T1’,<br> ‘LC08_L1TP_022040_20160226_20170224_01_T1’,<br> ‘LC08_L1TP_022040_20160313_20170224_01_T1’,<br> ‘LC08_L1TP_022040_20161210_20170219_01_T1’,<br> ‘LC08_L1TP_022040_20171026_20171107_01_T1’,<br> ‘LC08_L1TP_022040_20171127_20171206_01_T1’,<br> ‘LC08_L1TP_022040_20180114_20180120_01_T1’,</li><li>Made in Python with <a href="http://jupyter.org/">Jupyter notebook</a>s, <a href="https://github.com/sat-utils">sat-utils</a>, <a href="http://geojson.io/">geojson.io</a>, <a href="https://github.com/mapbox/rasterio">rasterio</a>, <a href="https://ezgif.com/">ezgif.com</a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=a094a3445484" width="1" height="1" alt=""><hr><p><a href="https://medium.com/delta-anthropoco/then-we-had-water-all-the-time-a094a3445484">Then we had water, all the time.</a> was originally published in <a href="https://medium.com/delta-anthropoco">Delta Anthropoco</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[California Breathing, 2017 in Two Counties]]></title>
            <link>https://medium.com/delta-anthropoco/california-breathing-2017-in-two-counties-7c3cffb69566?source=rss-1ab5b7b60071------2</link>
            <guid isPermaLink="false">https://medium.com/p/7c3cffb69566</guid>
            <category><![CDATA[environment]]></category>
            <category><![CDATA[public-health]]></category>
            <category><![CDATA[agriculture]]></category>
            <category><![CDATA[california]]></category>
            <category><![CDATA[satellite-imagery]]></category>
            <dc:creator><![CDATA[Dave Luo]]></dc:creator>
            <pubDate>Wed, 07 Feb 2018 22:35:04 GMT</pubDate>
            <atom:updated>2018-02-07T23:26:59.561Z</atom:updated>
            <cc:license>https://creativecommons.org/licenses/by-nc-sa/4.0/</cc:license>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/600/1*a8HEHzhc-oSGDCxhOaHIiA.gif" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/600/1*2n6p0KtNGFU8C9LFzBiB1g.gif" /><figcaption>Imperial County (left) &amp; Sutter County (right), California over 2017. Natural color time-lapses created from Landsat-8 data courtesy of US Geological Survey.</figcaption></figure><p>Time-lapse satellite imagery of two agricultural Californian counties over 2017 in natural color.</p><p>Left:<strong> </strong><a href="https://en.wikipedia.org/wiki/Imperial_County,_California">Imperial County</a> in southeastern California next to the Salton Sea.</p><p>Right: <a href="https://en.wikipedia.org/wiki/Sutter_County,_California">Sutter County</a> in northern California’s Sacramento Valley.</p><p>A contrast of landscape, geography, and childhood asthma rates:</p><figure><a href="https://letsgethealthy.ca.gov/goals/healthy-beginnings/reducing-childhood-asthma/"><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*_1EpaXHOtQDkMTfQ83es5w.gif" /></a><figcaption>ED Visits due to Asthma per 10,000 Children and Adolescents by county, 2016 (most recent data available). Map &amp; data courtesy of <a href="https://letsgethealthy.ca.gov/goals/healthy-beginnings/reducing-childhood-asthma/">Let’s Get Healthy California</a></figcaption></figure><h4>License, Sources, and Technical Notes:</h4><ul><li>First two animated GIFs licensed under a <a href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a></li><li>Landsat-8 bands combination: 4 (red), 3 (green), 2 (blue)</li><li>Source GeoTiff files from <a href="https://cloud.google.com/storage/docs/public-datasets/landsat">Google Cloud Public Datasets</a>: <br> Imperial:<br> ‘LC08_L1TP_039037_20170118_20170218_01_T1’,<br> ‘LC08_L1TP_039037_20170307_20170317_01_T1’,<br> ‘LC08_L1TP_039037_20170526_20170615_01_T1’,<br> ‘LC08_L1TP_039037_20170729_20170811_01_T1’,<br> ‘LC08_L1TP_039037_20170915_20170928_01_T1’,<br> ‘LC08_L1TP_039037_20171118_20171205_01_T1’,<br> Sutter:<br> ‘LC08_L1TP_044033_20170121_20170218_01_T1’,<br> ‘LC08_L1TP_044033_20170427_20170515_01_T1’,<br> ‘LC08_L1TP_044033_20170614_20170628_01_T1’,<br> ‘LC08_L1TP_044033_20170817_20170826_01_T1’,<br> ‘LC08_L1TP_044033_20171004_20171014_01_T1’,<br> ‘LC08_L1TP_044033_20171207_20171223_01_T1’,</li><li>Made in Python with <a href="http://jupyter.org/">Jupyter notebook</a>s, <a href="http://geojson.io/">geojson.io</a>, <a href="https://github.com/mapbox/rasterio">rasterio</a>, <a href="https://ezgif.com/">ezgif.com</a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=7c3cffb69566" width="1" height="1" alt=""><hr><p><a href="https://medium.com/delta-anthropoco/california-breathing-2017-in-two-counties-7c3cffb69566">California Breathing, 2017 in Two Counties</a> was originally published in <a href="https://medium.com/delta-anthropoco">Delta Anthropoco</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Once Every 1,000 Years]]></title>
            <link>https://medium.com/delta-anthropoco/cape-town-once-every-1000-years-2c53266898a0?source=rss-1ab5b7b60071------2</link>
            <guid isPermaLink="false">https://medium.com/p/2c53266898a0</guid>
            <category><![CDATA[water]]></category>
            <category><![CDATA[climate-change]]></category>
            <category><![CDATA[earth]]></category>
            <category><![CDATA[satellite-imagery]]></category>
            <category><![CDATA[space]]></category>
            <dc:creator><![CDATA[Dave Luo]]></dc:creator>
            <pubDate>Mon, 05 Feb 2018 04:49:09 GMT</pubDate>
            <atom:updated>2018-02-05T18:50:19.449Z</atom:updated>
            <cc:license>https://creativecommons.org/licenses/by-nc-sa/4.0/</cc:license>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/930/1*vskEF-M4q54duo4W0yQX8w.gif" /><figcaption>Cape Town &amp; its largest reservoir, Theewaterskloof, from Dec 2013 to Jan 2018. Landsat-8 images displayed in false color (Near Infrared-Green-Blue) courtesy of U.S. Geological Survey</figcaption></figure><h4><strong>Annotated version:</strong></h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/930/1*jd3pu2g9EySCjDVEhnE46g.gif" /><figcaption>Landsat-8 images displayed in false color (Near Infrared-Green-Blue) courtesy of U.S. Geological Survey</figcaption></figure><p>A time-lapse of satellite images showing Cape Town’s largest fresh water reservoir (<a href="https://www.google.com/maps/place/Theewaterskloof+Dam,+South+Africa/@-34.0664559,19.1053589,11.16z/data=!4m5!3m4!1s0x1dcdec735f9bc571:0x303d47b057bd0b08!8m2!3d-34.078056!4d19.289167">Theewaterskloof Dam</a>) depleting to historical lows after 3 successive dry years.</p><p>Images are generated as a false color combination of Near Infrared, Green, and Coastal/Aerosol (Landsat-8 bands 5, 3, 1) to highlight vegetation (red, orange) and coastal/inland waters (shades of blue).</p><p>As of Feb 2, 2018, the reservoir is at <a href="http://www.capetown.gov.za/Family%20and%20home/residential-utility-services/residential-water-and-sanitation-services/this-weeks-dam-levels">12.2% capacity</a>. The city of Cape Town currently forecasts Day Zero — “<a href="http://coct.co/water-dashboard/">the day the taps will be turned off</a>” — to arrive Apr 16 2018 . If and when that day comes, the city plans to provide 200 water distribution stations to allocate 25 liters per person per day. The average American uses over 300 liters of water per day.</p><p>Learn more below:</p><h3>“Cape Town’s Reservoirs Are Getting Terrifyingly Low”</h3><p>[<a href="https://earther.com/cape-towns-reservoirs-are-getting-terrifyingly-low-1822674207">Earther.com</a>]</p><blockquote>“As things stand, the challenge exceeds anything a major city has had to face anywhere in the world since the Second World War or 9/11,” Helen Zille, the Premier of the Western Cape, <a href="https://www.dailymaverick.co.za/opinionista/2018-01-22-from-the-inside-the-countdown-to-day-zero/#.WnS83pM-csm">wrote in an op-ed</a> late last month. “I personally doubt whether it is possible for a city the size of Cape Town to distribute sufficient water to its residents, using its own resources, once the underground waterpipe network has been shut down.”</blockquote><blockquote>The city got to this point because rains have failed the past three years. Last year was Cape Town’s driest year on record, taking the record from 2016. Oh, and 2015 was the fourth-driest year ever recorded at the city’s main weather station at the airport.</blockquote><h3>“How severe is this drought, really?”</h3><p>[<a href="http://www.csag.uct.ac.za/2017/08/28/how-severe-is-this-drought-really/">Piotr Wolski at Climate System Analysis Group</a>]</p><figure><a href="http://www.csag.uct.ac.za/current-seasons-rainfall-in-cape-town/"><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*j1CiSU6NOKo1rDm6qCswGA.png" /></a><figcaption>Cape Town Accumulated Daily Rainfall (2014–2017), generated Feb 4 2018</figcaption></figure><blockquote>Some time ago, I’ve heard a story told by an old Kenyan herdsman. The Madala said that every now and then, his village experienced a cattle-killing drought. 
Every decade, or so, they experienced a goat-killing drought (goats being more sturdy than cows, would die only if conditions were more extreme). And once in a man’s (woman’s of course too) lifetime, they would experience a man-killing drought.</blockquote><blockquote>So is Cape’s drought cattle-, goat- or man-killing one? This question can, of course, be rephrased in scientific terms — what is the return interval of the drought we are experiencing? Which basically means: how often, on average, can we expect a drought of a magnitude of the one that we have now, or more severe, to occur?</blockquote><h3>“Cape Town’s Water is Running Out“</h3><p>[<a href="https://earthobservatory.nasa.gov/IOTD/view.php?id=91649">NASA Earth Observatory Image of the Day</a>]</p><figure><a href="https://earthobservatory.nasa.gov/IOTD/view.php?id=91649"><img alt="" src="https://cdn-images-1.medium.com/max/720/1*rEqh3BMotSko-nPUUsUkMQ.gif" /></a><figcaption>Credit: NASA Earth Observatory Image of the Day Jan 30, 2018</figcaption></figure><blockquote><a href="http://www.csag.uct.ac.za/author/pwolski/">Piotr Wolski</a>, a hydrologist at the Climate Systems Analysis Group at the University of Cape Town, <a href="http://www.csag.uct.ac.za/2017/08/28/how-severe-is-this-drought-really/">has analyzed rainfall records</a> dating back to 1923 to get a sense of the severity of the current drought compared to historical norms. His conclusion is that back-to-back years of such weak rainfall (like 2016–17) typically happens about once just every 1,000 years.</blockquote><blockquote>Population growth and a lack of new infrastructure has exacerbated the current water shortage. Between 1995 and 2018, the Cape Town’s population swelled by roughly 80 percent. During the same period, dam storage increased by just 15 percent.</blockquote><blockquote>The city did recently <a href="https://www.news24.com/SouthAfrica/News/water-department-to-fast-track-western-cape-schemes-to-beat-day-zero-20171004">accelerate development of a plan</a> to increase capacity at Voëlvlei Dam by diverting winter rainfall from the Berg River. The project had been scheduled for completion in 2024, but planners are now targeting 2019. 
The city is also working to <a href="http://ewn.co.za/2018/01/10/ct-hopes-to-have-3-desalination-plants-running-by-march">build a series of desalination plants</a> and to drill new groundwater wells that could produce additional water.</blockquote><h4><strong>License, Sources, and Technical Notes:</strong></h4><ul><li>First two animated GIFs licensed under a <a href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a></li><li>Landsat-8 bands combination: 5 (near infrared), 3 (green), 1 (coastal/aerosol)</li><li>Source GeoTiff files from <a href="https://cloud.google.com/storage/docs/public-datasets/landsat">Google Cloud Public Datasets</a>: <br>‘LC08_L1TP_175084_20131218_20170427_01_T1’,<br>‘LC08_L1TP_175084_20141018_20170418_01_T1’,<br>‘LC08_L1TP_175084_20150223_20170412_01_T1’,<br>‘LC08_L1TP_175084_20160109_20170405_01_T1’,<br>‘LC08_L1TP_175084_20161226_20170315_01_T1’,<br>‘LC08_L1TP_175084_20171010_20171024_01_T1’,<br>‘LC08_L1TP_175084_20171127_20171206_01_T1’,<br>‘LC08_L1TP_175084_20180114_20180120_01_T1’</li><li>Made in Python with <a href="http://jupyter.org/">Jupyter notebook</a>s, <a href="http://geojson.io/">geojson.io</a>, <a href="https://github.com/mapbox/rasterio">rasterio</a>, <a href="https://ezgif.com/">ezgif.com</a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=2c53266898a0" width="1" height="1" alt=""><hr><p><a href="https://medium.com/delta-anthropoco/cape-town-once-every-1000-years-2c53266898a0">Once Every 1,000 Years</a> was originally published in <a href="https://medium.com/delta-anthropoco">Delta Anthropoco</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Hello Anthropocene]]></title>
            <link>https://medium.com/@anthropoco/hello-anthropocene-ce8e149e8e7d?source=rss-1ab5b7b60071------2</link>
            <guid isPermaLink="false">https://medium.com/p/ce8e149e8e7d</guid>
            <category><![CDATA[humanity]]></category>
            <category><![CDATA[awareness]]></category>
            <category><![CDATA[anthropocene]]></category>
            <category><![CDATA[exploration]]></category>
            <category><![CDATA[science]]></category>
            <dc:creator><![CDATA[Dave Luo]]></dc:creator>
            <pubDate>Fri, 12 May 2017 10:24:14 GMT</pubDate>
            <atom:updated>2018-02-05T23:58:00.539Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/800/1*O8ZDTIsrlDuE0YwSgGu94g.jpeg" /><figcaption>Photo credit: <a href="http://www.smithsonianmag.com/science-nature/have-humans-really-created-new-geologic-age-180952865/">Frans Lanting/Corbis via Smithsonian Mag</a></figcaption></figure><blockquote>“Two billion years ago, cyanobacteria oxygenated the atmosphere and powerfully disrupted life on Earth. But they didn’t know it.</blockquote><blockquote>We’re the first species that’s become a planet-scale influence and is aware of that reality. That’s what distinguishes us.”</blockquote><blockquote>- <a href="https://twitter.com/revkin">Andrew Revkin</a> (<a href="https://medium.com/@revkin">@revkin</a>) via <a href="http://www.smithsonianmag.com/science-nature/what-is-the-anthropocene-and-are-we-in-it-164801414/#Y26vrvlfuqJgjFID.99">Smithsonian</a></blockquote><p>Exploring <em>anthropos’ </em>influence and raising our fledgling self-awareness, starting with this one human. Hi[at]anthropo[dot]co.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=ce8e149e8e7d" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>