<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Vertexwahn on Medium]]></title>
        <description><![CDATA[Stories by Vertexwahn on Medium]]></description>
        <link>https://medium.com/@Vertexwahn?source=rss-6a0c100abfe0------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*dmbNkD5D-u45r44go_cf0g.png</url>
            <title>Stories by Vertexwahn on Medium</title>
            <link>https://medium.com/@Vertexwahn?source=rss-6a0c100abfe0------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Wed, 15 Apr 2026 01:52:38 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@Vertexwahn/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Blender to Okapi Export — Part 1]]></title>
            <link>https://medium.com/@Vertexwahn/blender-to-okapi-export-part-1-2bab7c9ff4d5?source=rss-6a0c100abfe0------2</link>
            <guid isPermaLink="false">https://medium.com/p/2bab7c9ff4d5</guid>
            <category><![CDATA[python]]></category>
            <category><![CDATA[blender]]></category>
            <category><![CDATA[ray-tracing]]></category>
            <dc:creator><![CDATA[Vertexwahn]]></dc:creator>
            <pubDate>Sun, 04 May 2025 17:36:47 GMT</pubDate>
            <atom:updated>2025-12-18T19:37:15.124Z</atom:updated>
            <content:encoded><![CDATA[<h3>Blender to Okapi Export — Part 1</h3><h4>How to develop a Blender export add-on</h4><p><a href="https://vertexwahn.de/page/okapi/">Okapi Renderer</a> is a toy renderer I developed. It is closed-source, but there is also a stripped-down <a href="https://github.com/Vertexwahn/OkapiRT">open-source version of it</a>.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FXPEok9Gad1U%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DXPEok9Gad1U&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FXPEok9Gad1U%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/214cd7b2f5793e1ac8b163ff5d3259b3/href">https://medium.com/media/214cd7b2f5793e1ac8b163ff5d3259b3/href</a></iframe><p>It can render simple scenes using ambient occlusion or naive path tracing, with support for diffuse and mirror materials. The feature set is currently very minimal. Nevertheless, to make Okapi a bit more useful, I implemented a Blender to Okapi export add-on.</p><p>I want to summarize my learnings in this post series. Since Okapi is only a toy renderer, the insights from this series can also be used to implement similar exporters (e.g. for your own renderer). 
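</p><p>Before diving into the Blender-specific parts, here is a minimal, self-contained sketch of the serialization core such an exporter needs. Everything in it is illustrative: the scene format and the function name are invented for this post, and Okapi's actual file format differs. It is plain Python, so it runs outside Blender; in a real add-on, the vertex and triangle data would come from Blender's bpy mesh API.</p>

```python
# Hypothetical serialization core of a Blender export add-on.
# The XML-like scene format below is invented for illustration;
# a real exporter would emit the target renderer's actual format
# and read the mesh data from bpy (e.g. mesh.vertices,
# mesh.loop_triangles) instead of taking plain lists.

def serialize_mesh(name, vertices, triangles):
    """Serialize a triangle mesh into a simple XML-like snippet."""
    lines = [f'<shape type="triangle_mesh" name="{name}">']
    for x, y, z in vertices:
        lines.append(f'    <vertex x="{x}" y="{y}" z="{z}"/>')
    for a, b, c in triangles:
        lines.append(f'    <triangle i0="{a}" i1="{b}" i2="{c}"/>')
    lines.append("</shape>")
    return "\n".join(lines)


if __name__ == "__main__":
    # A single triangle as smoke-test data.
    print(serialize_mesh(
        "demo",
        vertices=[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
        triangles=[(0, 1, 2)],
    ))
```

<p>Keeping the serialization free of bpy imports like this also makes it unit-testable outside Blender, which ties in with the test-driven setup shown later in this post.</p><p>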
I also identified a few similar projects:</p><ul><li><a href="https://github.com/stig-atle/io_scene_pbrt">GitHub - stig-atle/io_scene_pbrt: Exporter for blender that exports the scene into pbrt&#39;s ascii file format.</a></li><li><a href="https://github.com/NicNel/bpbrt4">GitHub - NicNel/bpbrt4: pbrt-v4 render engine/exporter for Blender</a></li><li><a href="https://github.com/wjakob/nori/tree/master/ext/plugin">nori/ext/plugin at master · wjakob/nori</a></li><li><a href="https://github.com/mitsuba-renderer/mitsuba-blender">GitHub - mitsuba-renderer/mitsuba-blender: Mitsuba integration add-on for Blender</a></li><li><a href="https://github.com/joeyskeys/btop">GitHub - joeyskeys/btop: btop is a blender addon for PBRT aiming for better user experience</a></li><li><a href="https://github.com/giuliojiang/pbrt-v3-blender-exporter">GitHub - giuliojiang/pbrt-v3-blender-exporter: A Blender exporter for PBRTv3 on Linux</a></li></ul><p>I used Blender version 4.4.3 for this. The Blender Python API seems to be rather stable across different versions of Blender. Nevertheless, code that worked in Blender 2.x does not necessarily work in Blender 4.x.</p><h3>First struggle: Getting started with the Blender Python API</h3><ol><li>Start Blender</li><li>Go to the “Scripting” tab</li><li>Select “Text” -&gt; “New”</li><li>Paste print(&quot;Hello World!&quot;) into the editor</li><li>Run the script via “Run script”</li></ol><p>You will see no output if you run the above Hello World script. To see the output &quot;Hello World!&quot;, you need to start Blender via a terminal. All Python print statements are then redirected to your terminal.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*sXBkOI9kBoKJyvb2izg6ww.png" /><figcaption>To see the output &quot;Hello World!&quot;, you need to start Blender via a terminal. 
All Python print statements are then redirected to your terminal.</figcaption></figure><p>You can change the above program to</p><pre>import sys<br>print(&quot;Blender&#39;s Python version:&quot;, sys.version)</pre><p>to see the current Python version used by Blender.</p><h3>Test-driven development with Blender Python API</h3><p>Assume you have a file named test.py with the following content:</p><pre>&quot;&quot;&quot;<br>    SPDX-FileCopyrightText: Copyright 2025 Julian Amann &lt;dev@vertexwahn.de&gt;<br>    SPDX-License-Identifier: Apache-2.0<br>&quot;&quot;&quot;<br><br>import sys<br>import unittest<br><br><br>class LookAtMatrix(unittest.TestCase):<br>    def test_look_at(self):<br>        self.assertTrue(True)<br><br><br>if __name__ == &quot;__main__&quot;:<br>    print(&quot;Blender&#39;s Python version:&quot;, sys.version)<br><br>    sys.argv = [__file__] + (<br>        sys.argv[sys.argv.index(&quot;--&quot;) + 1 :] if &quot;--&quot; in sys.argv else []<br>    )<br>    unittest.main()</pre><p>You can run blender --background --python test.py -- --verbose to run the contained test. On my system, I get the output:</p><pre>Blender 4.4.3 (hash 802179c51ccc built 2025-04-29 15:12:13)<br>Blender&#39;s Python version: 3.11.11 (main, Feb 6 2025, 17:26:54) [GCC 11.2.1 20220127 (Red Hat 11.2.1-9)]<br>test_look_at (__main__.LookAtMatrix.test_look_at) ... ok<br>----------------------------------------------------------------------<br>Ran 1 test in 0.000s<br>OK</pre><h3>To be continued</h3><p>See you in the next part (which is WIP).</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=2bab7c9ff4d5" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Path Tracing Lecture Videos]]></title>
            <link>https://medium.com/@Vertexwahn/path-tracing-lecture-videos-0898d8d27936?source=rss-6a0c100abfe0------2</link>
            <guid isPermaLink="false">https://medium.com/p/0898d8d27936</guid>
            <category><![CDATA[ray-tracing]]></category>
            <category><![CDATA[computer-graphics]]></category>
            <category><![CDATA[light-transport]]></category>
            <category><![CDATA[ray-tracer]]></category>
            <dc:creator><![CDATA[Vertexwahn]]></dc:creator>
            <pubDate>Mon, 30 Dec 2024 18:05:31 GMT</pubDate>
            <atom:updated>2025-07-14T09:12:04.419Z</atom:updated>
            <content:encoded><![CDATA[<h3>Lectures</h3><h4>Rendering (186.101, 2021S) Computer Graphics at TU Wien</h4><p>Rendering Lecture 00 — Introduction</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2F5sY_hoh_IDc%3Fstart%3D1%26feature%3Doembed%26start%3D1&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3D5sY_hoh_IDc&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2F5sY_hoh_IDc%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/575495ddd8408989cce084a24613367a/href">https://medium.com/media/575495ddd8408989cce084a24613367a/href</a></iframe><h3>Color Theory</h3><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2F_FmOeZ5QoPk%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3D_FmOeZ5QoPk&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2F_FmOeZ5QoPk%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/e18f1b98b4ea97a941bcf7edee1371a0/href">https://medium.com/media/e18f1b98b4ea97a941bcf7edee1371a0/href</a></iframe><h3>BRDFs</h3><h4>Normal maps</h4><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FLTz5jxZA7_I%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DLTz5jxZA7_I&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FLTz5jxZA7_I%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/cd19bef41b799547dfb12bbb07072368/href">https://medium.com/media/cd19bef41b799547dfb12bbb07072368/href</a></iframe><h3>Render Systems</h3><h4>pbrt-v4 code walkthrough</h4><iframe 
src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FAXuk7bmhZ2g%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DAXuk7bmhZ2g&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FAXuk7bmhZ2g%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="640" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/472bdc2d7fa3346a2a329fb3de3ff40f/href">https://medium.com/media/472bdc2d7fa3346a2a329fb3de3ff40f/href</a></iframe><h4>Mitsuba 3</h4><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2F9Ja9buZx0Cs%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3D9Ja9buZx0Cs&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2F9Ja9buZx0Cs%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/cae9aa20dda22c21b260198d4dd81103/href">https://medium.com/media/cae9aa20dda22c21b260198d4dd81103/href</a></iframe><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FLCsjK6Cbv6Q%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DLCsjK6Cbv6Q&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FLCsjK6Cbv6Q%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/343627cd8dc972affdca69d4db20e7e3/href">https://medium.com/media/343627cd8dc972affdca69d4db20e7e3/href</a></iframe><h3>Conferences</h3><h4>AWE XR</h4><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FKup0d4Te3n0%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DKup0d4Te3n0&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FKup0d4Te3n0%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" 
height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/a41a93746ed9c6d660226e4e9a4d00e7/href">https://medium.com/media/a41a93746ed9c6d660226e4e9a4d00e7/href</a></iframe><h3>Motivation</h3><h4>Digital Lego City: The Alley | Episode 1</h4><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2F_nQHss5NxN0%3Fstart%3D89%26feature%3Doembed%26start%3D89&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3D_nQHss5NxN0&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2F_nQHss5NxN0%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/d76b352d66ec7edd744f8af971d9ab0c/href">https://medium.com/media/d76b352d66ec7edd744f8af971d9ab0c/href</a></iframe><h4>Ep.1: The pioneers of computer graphics 1960–1970</h4><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FWeJX1DV0hq0%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DWeJX1DV0hq0&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FWeJX1DV0hq0%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/86035d6f3bcd0bb18babaf7ac24ed3c3/href">https://medium.com/media/86035d6f3bcd0bb18babaf7ac24ed3c3/href</a></iframe><h4>Ep.2: The pioneers of computer graphics — 1980s</h4><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FhET7R8gJm_c%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DhET7R8gJm_c&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FhET7R8gJm_c%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a 
href="https://medium.com/media/6b72828f515cf112ff06f973e7d48ade/href">https://medium.com/media/6b72828f515cf112ff06f973e7d48ade/href</a></iframe><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=0898d8d27936" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[What blocks C++ developers from using Bazel (in 2024)?]]></title>
            <link>https://medium.com/@Vertexwahn/what-blocks-c-developers-from-using-bazel-in-2024-4774fbc4d356?source=rss-6a0c100abfe0------2</link>
            <guid isPermaLink="false">https://medium.com/p/4774fbc4d356</guid>
            <category><![CDATA[build-system]]></category>
            <category><![CDATA[cpp]]></category>
            <category><![CDATA[bazel]]></category>
            <dc:creator><![CDATA[Vertexwahn]]></dc:creator>
            <pubDate>Mon, 30 Dec 2024 12:28:48 GMT</pubDate>
            <atom:updated>2025-05-27T13:24:29.902Z</atom:updated>
            <content:encoded><![CDATA[<p>Here is a list of things I can think of that block C++ developers from using Bazel:</p><ul><li>Missing IDE integration — e.g. Visual Studio (“If it does not work in my IDE/OS/Environment I will not use it”)</li><li>Missing support for C++20 modules</li><li>Missing out-of-the-box support/easy integration for needed libraries (e.g. <a href="https://opencv.org/">OpenCV</a>)</li><li>rules_cc has some quirks</li><li><a href="https://killedbygoogle.com/">Fear that Google stops supporting Bazel in the future</a></li><li>Time-consuming to learn a new build system/Learning curve/Invest in new technology — most people do not have fun caring about a build system and just want to code C++</li><li>Java Runtime is not supported on the platform you want to develop on</li><li>Bazel is not written in C++/Rust</li></ul><p>I tried to sort the list from the biggest blocker down to the least important one. I have no statistics on this. It is just a gut feeling I developed after talking with a handful of C++ developers and Bazel users. Maybe some important blockers are missing from this list entirely, and I am pretty sure that some people would prioritize the list differently.</p><h3>Missing out-of-the-box support/easy integration for needed libraries</h3><p>Let’s talk about the “Missing out-of-the-box support/easy integration for needed libraries” blocker. At the time of this writing, the <a href="https://registry.bazel.build/">Bazel Central Registry</a> contains at least 21 C++ libraries. I think there is a kind of pattern when a new C++ package manager is born. First, there are a lot of compression libraries, such as <a href="https://www.zlib.net/">zlib</a>, <a href="https://github.com/ebiggers/libdeflate">libdeflate</a>, <a href="https://tukaani.org/xz/">xz</a>, etc., since many other things, such as file storage or file transmission, depend on compression. There is also a wave of network libraries. 
After that, a wave of image libraries follows: <a href="http://www.libpng.org/pub/png/libpng.html">libpng</a>, <a href="https://giflib.sourceforge.net/">libgif</a>, <a href="https://libtiff.gitlab.io/libtiff/">libtiff</a>, <a href="https://github.com/webmproject/libwebp">libwebp</a>, etc., you name it. The reason here is that images are important in many applications. Maybe one day we will also see an AI wave. Nevertheless, bazelizing libraries and adding them to the <a href="https://registry.bazel.build/">Bazel Central Registry</a> remains a manual process that requires real effort.</p><p>To avoid this, there is also <a href="https://github.com/bazel-contrib/rules_foreign_cc">rules_foreign_cc</a>, which supports <a href="https://cmake.org/">CMake</a>, configure_make, make, <a href="https://mesonbuild.com/">Meson</a>, and ninja (at the time of writing). I do not use those rules, since my gut feeling is that this is only a temporary solution. I have no experience with how well rules_foreign_cc works in a real setting where you want to support Windows, Linux, and macOS builds at the same time. I would assume that these rules cannot be better than using the underlying build tools directly. When it comes to C++ libraries, I always go for a 100% pure Bazel approach that does not depend on an additional build system.</p><p>It would be nice if we could find an automated way to take over the build from other third-party package managers such as <a href="https://docs.conan.io/2/index.html">Conan 2</a>. Maybe <a href="https://strace.io/">strace</a> could be used to track what is built and linked under the hood, and BUILD files could be derived automatically from this.</p><h3>Missing IDE integration</h3><p>If you work on Windows with Visual Studio and do not want to leave this environment, then there is currently no good integration. I left Visual Studio and Windows and moved over to CLion and Ubuntu. CLion comes with a great Bazel integration. 
I only return to Windows to build my project with Bazel when shipping Windows binaries.</p><p>I wrote a very short article about how you can use Visual Studio to build your Bazel projects. It is doable, but there is no out-of-the-box solution that works without any friction:</p><p><a href="https://medium.com/@Vertexwahn/using-visual-studio-2022-to-build-bazel-projects-12f41ece8d10">Using Visual Studio 2022 to build Bazel projects</a></p><p>CLion has the advantage that I do not need an additional add-in for better syntax highlighting and refactoring. I also tried CLion on Windows in the past, but its Bazel support on Windows was not good at the time. On macOS, I have also had good experiences with CLion + Bazel.</p><p>Besides this, there are cool Bazel integrations/helpers to get a comfortable Bazel experience in some environments:</p><p><a href="https://github.com/MobileNativeFoundation/rules_xcodeproj">GitHub - MobileNativeFoundation/rules_xcodeproj: Bazel rules for generating Xcode projects.</a></p><h3>Missing support for C++20 modules</h3><p>There is currently (at the time of writing) no C++20 module integration. Nevertheless, this seems to be actively worked on, and I assume it will land in a Bazel release at some point in the future.</p><p>There are also some rules available that provide module support right now:</p><p><a href="https://github.com/eomii/rules_ll">GitHub - eomii/rules_ll: An Upstream Clang/LLVM-based toolchain for contemporary C++ and heterogeneous programming</a></p><h3>Conclusion</h3><p>There are some blockers — but I think they are not show-stoppers.</p><p>Feel free to leave a comment here or to contact me and let me know your blockers.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=4774fbc4d356" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Using Visual Studio 2022 to build Bazel projects]]></title>
            <link>https://medium.com/@Vertexwahn/using-visual-studio-2022-to-build-bazel-projects-12f41ece8d10?source=rss-6a0c100abfe0------2</link>
            <guid isPermaLink="false">https://medium.com/p/12f41ece8d10</guid>
            <category><![CDATA[build-system]]></category>
            <category><![CDATA[cpp]]></category>
            <category><![CDATA[bazel]]></category>
            <category><![CDATA[visual-studio]]></category>
            <dc:creator><![CDATA[Vertexwahn]]></dc:creator>
            <pubDate>Mon, 30 Dec 2024 12:10:13 GMT</pubDate>
            <atom:updated>2025-05-19T19:50:43.626Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Hb07keXMhpw8VRG4pvKkpA.png" /><figcaption>Get a coffee and get started with Visual Studio 2022 and Bazel</figcaption></figure><p>In this post, I will describe one easy, not to say hacky, way to use <a href="https://visualstudio.microsoft.com/">Visual Studio 2022</a> with <a href="https://bazel.build/">Bazel</a>.</p><p>First, create a basic C++ example project. Create a folder hello_world. Add a .bazelversion file to this folder with the content 8.0.0. Add a main.cpp file with the following content:</p><pre>#include &lt;iostream&gt;<br><br>int main() {<br>    std::cout &lt;&lt; &quot;Hello World!&quot; &lt;&lt; std::endl;<br>    return 0;<br>}</pre><p>Add an empty MODULE.bazel file. Add a BUILD.bazel file with the following content:</p><pre>cc_binary(<br>    name = &quot;hello_world&quot;,<br>    srcs = [&quot;main.cpp&quot;],<br>)</pre><p>The expected file structure should look like this:</p><pre>.<br>├── .bazelversion<br>├── BUILD.bazel<br>├── MODULE.bazel<br>└── main.cpp</pre><p>Start Visual Studio 2022 and select “<em>Create a new project</em>”:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1014/1*qY0eyCoXsktjgurLjcW4zw.png" /><figcaption>Create a new project</figcaption></figure><p>Select “<em>Makefile Project</em>”:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1014/1*bCigsf9mEnXpSEaJJOxGZg.png" /><figcaption>Select “Makefile Project”</figcaption></figure><p>Set “<em>Project name</em>” and “<em>Solution name</em>” to HelloWorld, set “<em>Location</em>” to the Hello World example, enable the checkmark “<em>Place solution and project in the same directory</em>”, and hit the “<em>Create</em>” button:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1014/1*aGh4ThBMDxNNFWIyJWlKeA.png" /><figcaption>Configure project</figcaption></figure><p>Set “<em>Build command line</em>” to bazel run //:hello_world. 
Set “<em>Clean command line</em>” to bazel clean --expunge. Set “<em>Output</em>” to hello_world.exe. Hit the “<em>Next</em>” button. Apply the same settings for Release. Click the “<em>Finish</em>” button:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/800/1*ApjwoNjcn8uhdt-x3V4pYA.png" /><figcaption>Project configuration settings</figcaption></figure><p>This process has to be repeated for every build/test target you need.</p><p>A working example that you can download and test can be found <a href="https://github.com/Vertexwahn/BazelDemos/tree/7a8812daf131bba4bb2a842c9b3408c6d671d979/hello_world/cpp">here</a>.</p><p>In the past, there was <a href="https://github.com/tmandry/lavender">Lavender</a>, an open-source tool to automate this process. It worked well for me, but unfortunately it is currently unmaintained. Nevertheless, I think this is one approach for a lightweight integration of Bazel into Visual Studio.</p><p><em>UPDATE</em>: Since <a href="https://blog.jetbrains.com/clion/2025/05/clion-is-now-free-for-non-commercial-use/">CLion is now free for non-commercial use</a> and CLion has made <a href="https://blog.jetbrains.com/clion/2025/04/new-features-in-bazel-plugin/">progress in Windows and Bazel support</a>, it is also a valid option to use CLion on Windows. Under the hood, the VS2022 compiler is still used to compile your code, and you will get the VS2022 error and warning messages, but on top of that you get really good Bazel support; e.g., debugging works out of the box:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*VP_44hPsFoWUlyQNLirQ4g.png" /><figcaption>Debugging with Bazel on CLion using the VS2022 C++ compiler works out of the box</figcaption></figure><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=12f41ece8d10" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Ray Tracing 101: Tiled-based vs. progressive rendering]]></title>
            <link>https://medium.com/@Vertexwahn/ray-tracing-101-tiled-based-vs-progressive-rendering-467c2efc71e6?source=rss-6a0c100abfe0------2</link>
            <guid isPermaLink="false">https://medium.com/p/467c2efc71e6</guid>
            <category><![CDATA[parallel-computing]]></category>
            <category><![CDATA[path-tracing]]></category>
            <category><![CDATA[ray-tracing]]></category>
            <category><![CDATA[rendering]]></category>
            <dc:creator><![CDATA[Vertexwahn]]></dc:creator>
            <pubDate>Fri, 27 Dec 2024 23:13:43 GMT</pubDate>
            <atom:updated>2024-12-27T23:13:43.660Z</atom:updated>
            <content:encoded><![CDATA[<p>I have prepared two videos that show tile-based vs. progressive rendering for ray tracing.</p><p>The first video shows tile-based rendering:</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2Fi5h33mRHOf0%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3Di5h33mRHOf0&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2Fi5h33mRHOf0%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/f5749e8b98f06043d4e460e21e69a8fc/href">https://medium.com/media/f5749e8b98f06043d4e460e21e69a8fc/href</a></iframe><p>As you can see in the video, different tiles of the final image are rendered in parallel. In contrast, with progressive rendering the whole image appears to be rendered simultaneously:</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FhuHeImPCTtA%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DhuHeImPCTtA&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FhuHeImPCTtA%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/3a03b19564f68f4be266efe9b5a89c5e/href">https://medium.com/media/3a03b19564f68f4be266efe9b5a89c5e/href</a></iframe><p>Here is some pseudocode that shows how tile-based rendering can be implemented:</p><pre>tbb::parallel_for(0, tg.tile_count(), [&amp;](int index) {<br>    FilmTileDescription ftd = tg.tile_description(index);<br><br>    FilmTile film_tile{ftd.offset, ftd.size, channel_count, filter.get(), film-&gt;tile_bounds()}; // where to get the correct channel count?<br><br>    auto sampler(scene-&gt;sampler()-&gt;clone());<br>    int spp = sampler-&gt;sample_count();<br>    auto tile_size = ftd.size;<br><br>    for(int y = 0; y &lt; tile_size.y(); ++y) {<br>        for(int x = 0; x &lt; tile_size.x(); ++x) {<br>            for(int sample_index = 0; sample_index &lt; spp; ++sample_index) {<br>                Point2f sample_position = Point2f(ftd.offset.x() + x, ftd.offset.y() + y) + sampler-&gt;next_2d();<br>                Ray3f ray = sensor-&gt;generate_ray(sample_position);<br>                auto color = integrator-&gt;trace(scene, sampler.get(), ray, 0);<br>                film_tile.add_sample(sample_position, color.data());<br>            }<br>        }<br>    }<br><br>    mutex.lock();<br>    film-&gt;add_tile(film_tile);<br>    mutex.unlock();<br>});</pre><p>Every tile is visited in a parallel for loop. In this example, <a href="https://uxlfoundation.github.io/oneTBB/">oneTBB</a> is utilized to achieve this. For every tile, every pixel of the tile is visited and sampled multiple times, according to the maximum number of samples per pixel (spp). Once the whole tile is rendered, it is added to the final image (film).</p><p>To change this to progressive rendering, the above code only has to be modified slightly: the samples-per-pixel loop becomes the outermost loop, and a tile gets added to the “big picture” each time it has been rendered with one sample per pixel. 
Instead of visiting every tile only once, we visit every tile for every sample:</p><pre>int spp = scene-&gt;sampler()-&gt;sample_count();<br><br>for(int sample_index = 0; sample_index &lt; spp; ++sample_index) {<br>    LOG_INFO(&quot;Progressive rendering: {}/{} SPP&quot;, sample_index + 1, spp);<br><br>    tbb::parallel_for(0, tg.tile_count(), [&amp;](int index) {<br>        FilmTileDescription ftd = tg.tile_description(index);<br><br>        FilmTile film_tile{ftd.offset, ftd.size, channel_count, filter.get(), film-&gt;tile_bounds()}; // where to get the correct channel count?<br><br>        auto sampler(scene-&gt;sampler()-&gt;clone());<br>        auto tile_size = ftd.size;<br><br>        for(int y = 0; y &lt; tile_size.y(); ++y) {<br>            for(int x = 0; x &lt; tile_size.x(); ++x) {<br>                Point2f sample_position = Point2f(ftd.offset.x() + x, ftd.offset.y() + y) + sampler-&gt;next_2d();<br>                Ray3f ray = sensor-&gt;generate_ray(sample_position);<br>                auto color = integrator-&gt;trace(scene, sampler.get(), ray, 0);<br>                film_tile.add_sample(sample_position, color.data());<br>            }<br>        }<br><br>        mutex.lock();<br>        film-&gt;add_tile(film_tile);<br>        mutex.unlock();<br>    });<br>}</pre><p>Both the progressive and the tile-based variants render individual tiles. The difference is when a tile gets added to the final image. For the progressive variant, this happens after every pixel of the tile has been sampled once; for the tile-based approach, it happens after all pixels have been sampled the predefined maximum number of samples per pixel.</p><p>The reason why the progressive variant is still tile-based is that the above approach assumes that samples are splatted across different pixels using some filter kernel. 
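</p><p>To make the splatting concrete, here is a small illustrative Python sketch (the function name and the box-filter support are assumptions for this example, not the renderer's actual API): a sample contributes to every pixel whose center lies within the filter radius, so a sample near a tile edge also touches pixels owned by the neighboring tile.</p>

```python
import math

# Illustrative only: which pixels does one film sample contribute to,
# given a reconstruction filter with the given radius? Pixel centers
# are assumed to sit at (px + 0.5, py + 0.5).
def pixels_touched_by_sample(sample_x, sample_y, radius):
    pixels = []
    for py in range(math.floor(sample_y - radius), math.ceil(sample_y + radius) + 1):
        for px in range(math.floor(sample_x - radius), math.ceil(sample_x + radius) + 1):
            if abs(px + 0.5 - sample_x) <= radius and abs(py + 0.5 - sample_y) <= radius:
                pixels.append((px, py))
    return pixels


if __name__ == "__main__":
    # A sample near the right edge of a 16-pixel-wide tile (x in [0, 16))
    # also touches pixel column 16, which belongs to the next tile.
    touched = pixels_touched_by_sample(15.9, 8.0, 1.0)
    print(any(px >= 16 for px, _ in touched))  # True
```

<p>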
This leads to the problem that tiles overlap with their neighbors near the tile edges and need to be accumulated in a properly synchronized way.</p><h3>Tile to final image synchronization</h3><p>To get rid of the tile-to-final-image synchronization, we could think about a pattern or order in which tiles can be rendered in parallel without any synchronization. Let us consider a 4x4 grid:</p><pre>+---+---+---+---+<br>| A | B | C | D |<br>+---+---+---+---+<br>| E | F | G | H |<br>+---+---+---+---+<br>| I | J | K | L |<br>+---+---+---+---+<br>| M | N | O | P |<br>+---+---+---+---+</pre><p>Assuming that the border in which a tile affects other pixels does not reach beyond one neighboring tile, it is clear that we could easily render tile A and tile P in parallel without caring about synchronization issues. Also, the tiles A, M, D, and P should work in parallel without synchronization. If we assume that the border of influence of a tile is not wider than one-half of a tile, we could render A, C, I, K in parallel. We could probably come up with a formula like:</p><pre>// Assumes integer division<br>int next_lock_free_tile_x = (index * 2) % tiles_per_width;</pre><p>Once A, C, I, K are rendered, we can offset the whole pattern by one tile in the x-direction, then in the y-direction, and then in both the x- and y-direction. Assuming that our final image can be split into many tiles, we can find enough work for all parallel processing units.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=467c2efc71e6" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[A manifest for Boost libraries in the Bazel Central Registry]]></title>
            <link>https://medium.com/@Vertexwahn/best-practices-for-boost-libraries-in-the-bazel-central-registry-bcr-cc289c9ad12e?source=rss-6a0c100abfe0------2</link>
            <guid isPermaLink="false">https://medium.com/p/cc289c9ad12e</guid>
            <category><![CDATA[boost]]></category>
            <category><![CDATA[bazel]]></category>
            <dc:creator><![CDATA[Vertexwahn]]></dc:creator>
            <pubDate>Sat, 09 Nov 2024 09:43:15 GMT</pubDate>
            <atom:updated>2025-07-16T18:41:23.233Z</atom:updated>
<content:encoded><![CDATA[<p>NOTE: I am currently reworking this — this is WIP</p><h3>Prerequisites</h3><p>I assume you have basic knowledge about the following things:</p><ul><li><a href="https://www.boost.org/">Boost C++ Libraries</a></li><li><a href="https://bazel.build/">Bazel</a></li><li><a href="https://bazel.build/external/registry">Bazel registries</a></li><li><a href="https://registry.bazel.build/">Bazel Central Registry</a> (BCR)</li></ul><h3>Motivation</h3><p>Make the Boost C++ libraries easy to use with Bazel.</p><h3>Design Decisions</h3><h4><strong>Division into Bazel modules</strong></h4><p>Instead of offering one “big” Boost dependency (e.g. bazel_dep(name = &quot;boost&quot;, version = &quot;1.87.0&quot;)), it was decided to create a Bazel module for each individual Boost library, e.g. bazel_dep(name = &quot;boost.array&quot;, version = &quot;1.87.0&quot;), bazel_dep(name = &quot;boost.assert&quot;, version = &quot;1.87.0&quot;), bazel_dep(name = &quot;boost.bind&quot;, version = &quot;1.87.0&quot;), etc. Nevertheless, it is still intended to offer bazel_dep(name = &quot;boost&quot;, version = &quot;1.87.0&quot;), which will refer to all individual modules. The advantage of this setup is that if you only depend on a single Boost library, you do not need to fetch all of Boost (more than 150 MB in the case of Boost 1.88.0). Moreover, Boost itself also manages each individual library in its own GitHub repository, e.g. <a href="https://github.com/boostorg/pfr">https://github.com/boostorg/pfr</a>. This paves the way to getting native Bazel support into Boost in the future.</p><p><strong>How to avoid mixing different Boost versions?</strong></p><p>To avoid mixing different Boost versions (e.g. boost.algorithm@1.87.0 using boost.array@1.86.0), a module named boost.pin_version was introduced. 
boost.pin_version@1.88.0.bcr.1 looks like this:</p><pre># boost.pin_version is not a real Boost module.<br># Its whole purpose is to ensure that Boost modules of one specific Boost version do not get mixed with those of another one (e.g. 1.88.0 with 1.83.0).<br><br>module(<br>    name = &quot;boost.pin_version&quot;,<br>    version = &quot;1.88.0.bcr.1&quot;,<br>    bazel_compatibility = [&quot;&gt;=7.6.0&quot;],<br>    compatibility_level = 108800,  # Can remain constant as nodeps prevent version skew<br>)<br><br># List of Boost modules is based on https://pdimov.github.io/boostdep-report/boost-1.88.0/module-overview.html.<br># Dependency reports for other versions can be found at https://pdimov.github.io/boostdep-report/.<br>[bazel_dep(name = boost_module, version = &quot;1.88.0.bcr.1&quot;, repo_name = None) for boost_module in [<br>    &quot;boost.accumulators&quot;,<br>    &quot;boost.algorithm&quot;,<br>    &quot;boost.align&quot;,<br>    &quot;boost.any&quot;,<br>    &quot;boost.array&quot;,<br>    &quot;boost.asio&quot;,<br>    [[ ... all other Boost libraries of this specific version ... ]]<br>    &quot;boost.wave&quot;,<br>    &quot;boost.winapi&quot;,<br>    &quot;boost.xpressive&quot;,<br>    &quot;boost.yap&quot;,<br>]]</pre><p>boost.pin_version@1.88.0.bcr.1 points to all other Boost modules that can be combined with each other. A version 1.88.0.bcr.1 of boost.pin_version only references Boost libraries of version 1.88.0.bcr.1. The good thing about boost.pin_version is that it can refer to modules that do not exist yet. We strictly follow the policy that boost.pin_version refers only to Boost libraries of the same version as boost.pin_version itself, i.e. boost.pin_version@1.88.0.bcr.1 only refers to boost.accumulators@1.88.0.bcr.1, boost.algorithm@1.88.0.bcr.1, etc. 
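For illustration, a consumer that needs only one Boost library adds a single dependency; boost.pin_version arrives transitively, since every Boost library module depends on it (the version string here follows the scheme above and is illustrative):

```starlark
# Hypothetical consumer MODULE.bazel: only one Boost library is fetched.
# boost.pin_version is pulled in transitively and keeps all transitive
# Boost modules on the same release.
bazel_dep(name = "boost.algorithm", version = "1.88.0.bcr.1")
```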
boost.pin_version@1.88.0.bcr.1 cannot refer to a Boost library whose version differs from 1.88.0.bcr.1. At the same time, every Boost library module references boost.pin_version as a dependency. This creates a mutual dependency between boost.pin_version and the individual library modules, which has the benefit that versions do not get mixed.</p><p>OLD:</p><h3><strong>Naming conventions for branches and pull requests</strong></h3><p>Use the pattern library@1.2.3 as a naming convention for your branches and pull requests, e.g. boost.algorithm@1.83.0. If you want to fix an issue in an already published module, use .bcr.1, .bcr.2, etc. as a postfix to the version number, e.g. boost.algorithm@1.83.0.bcr.1.</p><h3>Comments on pull requests</h3><p>There is a CI check in the BCR that makes sure that no unstable URLs are used to reference the source code of a module. Unfortunately, Boost libraries published on GitHub currently use unstable URLs. To ignore the warning from the BCR CI, add the comment @bazel-io skip_check unstable_url to your GitHub pull request.</p><h3>MODULE.bazel: Use the right compatibility level</h3><p>The compatibility_level should correspond to the Boost version. For example, Bazel modules for Boost 1.83.0 libraries should have the compatibility level 108300.</p><pre>module(<br>    name = &quot;boost.algorithm&quot;,<br>    version = &quot;1.83.0&quot;,<br>    bazel_compatibility = [&quot;&gt;=7.2.1&quot;],<br>    compatibility_level = 108300,<br>)</pre><h3>presubmit.yml: Test on many platforms</h3><p>Test on at least these platforms:</p><pre>  platform:<br>    - debian10<br>    - debian11<br>    - macos<br>    - macos_arm64<br>    - ubuntu2004<br>    - ubuntu2204<br>    - ubuntu2404<br>    - windows</pre><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=cc289c9ad12e" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Image file formats: How to store high dynamic range image data?]]></title>
            <link>https://medium.com/@Vertexwahn/image-file-formats-52bbc8a523b4?source=rss-6a0c100abfe0------2</link>
            <guid isPermaLink="false">https://medium.com/p/52bbc8a523b4</guid>
            <category><![CDATA[hdr]]></category>
            <category><![CDATA[exr]]></category>
            <category><![CDATA[file-format]]></category>
            <category><![CDATA[ldr]]></category>
            <category><![CDATA[computer-graphics]]></category>
            <dc:creator><![CDATA[Vertexwahn]]></dc:creator>
            <pubDate>Tue, 11 Jun 2024 22:19:12 GMT</pubDate>
            <atom:updated>2024-06-21T07:32:33.991Z</atom:updated>
<content:encoded><![CDATA[<h3>LDR and HDR</h3><p>One difference between the various existing image formats is their color depth: they are designed either for low dynamic range (LDR) or for high dynamic range (HDR) data.</p><p><a href="http://www.libpng.org/pub/png/libpng.html">PNG</a> (Portable Network Graphics), for example, is an LDR image format. 8 bits, i.e. 256 different values, can be stored per color channel. PNGs have a color channel for red, green, and blue, and optionally an alpha channel. This gives a total of 3 * 8 bits = 24 bits or 4 * 8 bits = 32 bits per color value. PNGs also offer support for storing grayscale images. If you want to find out more about the feature set of the PNG file format, consider <a href="http://www.libpng.org/pub/png/libpng.html">libpng</a>, which offers a reference implementation of the PNG format.</p><p>In contrast, there are also HDR formats. <a href="https://openexr.com/en/latest/#openexr">EXR</a> is a representative of such a format. It can use up to 32 bits per color channel. This means that for an image with a red, green, blue, and alpha channel, 4 * 32 bits = 128 bits are available for each color value.</p><p>HDR formats allow a much greater contrast range (difference between the brightest and darkest colors) than LDR formats. This makes them a prominent choice when it comes to storing the output of the rendering process.</p><h3>PFM</h3><p>One of the simplest HDR formats is the <a href="https://www.pauldebevec.com/Research/HDR/PFM/">Portable Float Map</a> (PFM). The file header is plain text, i.e. you can view and read it with a simple text editor. To indicate that the file is a PFM file, the first line starts with PF or Pf. PF indicates that the image has three color channels (red, green, blue), whereas Pf indicates that the image is monochrome, i.e. it only has a single color channel. Then a Unix-style line break follows (a line feed, hex code 0x0a). 
The next line defines the width and height of the image, followed again by a Unix-style line break. The third line defines the byte order: either little-endian (indicated by -1.0) or big-endian (indicated by 1.0). Again, a Unix-style line break follows. After this, a series of 4-byte IEEE 754 single-precision floating point values follows. The values are sorted from left to right and from bottom to top.</p><p>Here is an example header:</p><pre>PF<br>200 200<br>-1.0</pre><p>You can easily extract the first three lines from a PFM file using the head command:</p><pre>head -3 test.pfm</pre><p>You can also have a look at the hex representation of a PFM file via:</p><pre>head -n 3 /home/vertexwahn/Desktop/test.pfm | hd</pre><p>Here is an example output of the command:</p><pre>00000000  50 46 0a 32 30 30 20 32  30 30 0a 2d 31 2e 30 0a  |PF.200 200.-1.0.|<br>00000010</pre><p>From the hex dump, you can see the line feed bytes (0x0a).</p><h3>Example</h3><p>Here is an example C++ program that writes a PFM image with a size of 200×200 pixels.</p><pre>#include &lt;iostream&gt;<br>#include &lt;fstream&gt;<br>#include &lt;memory&gt;<br>#include &lt;string_view&gt;<br><br>namespace shrew {<br>    struct Color3f {<br>        explicit Color3f() : values{0.f, 0.f, 0.f} {}<br>        explicit Color3f(float red, float green, float blue) :<br>            values{red, green, blue} {}<br>        float values[3];<br>    };<br><br>    class Image3f {<br>    public:<br>        Image3f(int width, int height) : width_{width}, height_{height},<br>                                         data_{new Color3f[width*height]} {}<br>        [[nodiscard]] int width() const { return width_; }<br>        [[nodiscard]] int height() const { return height_; }<br>        void set_pixel(int x, int y, const Color3f&amp; color) {<br>            data_[x+y*width_] = color;<br>        }<br>        [[nodiscard]] Color3f get_pixel(int x, int y) const {<br>            return 
data_[x+y*width_];<br>        }<br>        [[nodiscard]] const Color3f* data() const { return data_.get(); }<br>        [[nodiscard]] size_t byte_size() const {<br>            return width_ * height_ * sizeof(float) * 3;<br>        }<br>    private:<br>        int width_, height_; // size dimensions of the image<br>        std::unique_ptr&lt;Color3f[]&gt; data_; // pixel data is sorted left to right, top to bottom<br>    };<br><br>    Image3f flip_horizontally(const Image3f&amp; image) {<br>        Image3f flipped{image.width(), image.height()};<br>        for (int y = 0; y &lt; image.height(); ++y) {<br>            for (int x = 0; x &lt; image.width(); ++x) {<br>                auto color = image.get_pixel(x,image.height()-y-1);<br>                flipped.set_pixel(x, y, color);<br>            }<br>        }<br>        return flipped;<br>    }<br>    <br>    void store_pfm(const Image3f&amp; image, std::string_view filename) {<br>        std::ofstream file(filename.data(), std::ios::binary);<br>        file &lt;&lt; &quot;PF&quot; &lt;&lt; &quot;\n&quot; &lt;&lt; image.width() &lt;&lt; &quot; &quot; &lt;&lt; image.height() &lt;&lt; &quot;\n&quot;<br>             &lt;&lt; &quot;-1.0&quot; &lt;&lt; &quot;\n&quot;;<br><br>        Image3f flipped = flip_horizontally(image);<br><br>        file.write(reinterpret_cast&lt;const char*&gt;(flipped.data()),<br>                   static_cast&lt;std::streamsize&gt;(flipped.byte_size()));<br>    }<br>}<br><br>using namespace shrew;<br><br>int main() {<br>    Image3f image{200, 200};<br>    for(int y = 0; y &lt; image.height(); ++y) {<br>        for(int x = 0; x &lt; image.width(); ++x) {<br>            image.set_pixel(x, y, Color3f{1.f, 1.f, 0.f});<br>        }<br>    }<br><br>    for(int y = 0; y &lt; image.height()/2; ++y) {<br>        for(int x = 0; x &lt; image.width()/2; ++x) {<br>            image.set_pixel(x, y, Color3f{1.f, 0.f, 0.f});<br>        }<br>    }<br><br>    for(int y = 0; y &lt; image.height()/2; ++y) {<br>  
      for(int x = image.width()/2; x &lt; image.width(); ++x) {<br>            image.set_pixel(x, y, Color3f{0.f, 1.f, 0.f});<br>        }<br>    }<br><br>    store_pfm(image, &quot;test.pfm&quot;);<br>}</pre><p>Note that in the above example Image3f stores pixels from left to right and top to bottom. This means that the top left pixel is addressed by image.get_pixel(0,0) and the pixel at the bottom right via image.get_pixel(image.width()-1,image.height()-1). PFM images are stored differently: they have a bottom-to-top row order. Therefore, the row order is reversed in the store_pfm method (via flip_horizontally, which, despite its name, flips the image vertically).</p><p>The output of the generated PFM image looks like this:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*jZDFEhq-itgYWFFWtITV4A.png" /></figure><p>Note that <a href="https://github.com/Tom94/tev">tev</a> is used here as an image viewer.</p><p>A C++ function to store PFM float data that is independent of the Image3f and Color3f classes could also look like this:</p><pre>void store_pfm(int width, int height, const float* data, <br>               std::string_view filename) {<br>    std::ofstream file(filename.data(), std::ios::binary);<br>    file &lt;&lt; &quot;PF&quot; &lt;&lt; &quot;\n&quot; &lt;&lt; width &lt;&lt; &quot; &quot; &lt;&lt; height &lt;&lt; &quot;\n&quot; &lt;&lt; &quot;-1.0&quot; &lt;&lt; &quot;\n&quot;;<br>    std::streamsize byte_size = width * height * sizeof(float) * 3;<br>    file.write(reinterpret_cast&lt;const char*&gt;(data),<br>               static_cast&lt;std::streamsize&gt;(byte_size));<br>}</pre><p>Even if C++ is not your favorite programming language, it should not be hard to port this code to a different language. <a href="https://chat.openai.com/">ChatGPT</a> may also be able to support you in this endeavor.</p><p>The above implementation has some pitfalls. For instance, there are no out-of-bounds checks. What happens if a user of the Image3f class wants to access a pixel at a negative pixel position? 
The implementation still needs some tweaks, but the main goal here was to give you an understanding of how a basic HDR file format works. Another assumption is that the program always runs on a little-endian machine. If you run the program on a big-endian machine, you will be in trouble.</p><h3>EXR</h3><p>EXR is a more advanced file format for storing HDR images. One library that implements reading and writing this format is <a href="https://openexr.com/en/latest/">OpenEXR</a>.</p><h4>OpenEXR usage example</h4><p>Here is an example of how OpenEXR can be used to store an .exr file:</p><pre>#include &quot;OpenEXR/ImfChannelList.h&quot;<br>#include &quot;OpenEXR/ImfOutputFile.h&quot;<br>#include &quot;OpenEXR/ImfRgbaFile.h&quot;<br>#include &quot;OpenEXR/ImfStringAttribute.h&quot;<br><br>#include &lt;memory&gt;<br>#include &lt;string_view&gt;<br><br>using namespace Imf;<br>using namespace Imath;<br><br>namespace shrew {<br>    struct Color3f {<br>        explicit Color3f() : values{0.f, 0.f, 0.f} {}<br>        explicit Color3f(float red, float green, float blue) :<br>            values{red, green, blue} {}<br>        float values[3];<br>    };<br><br>    class Image3f {<br>    public:<br>        Image3f(int width, int height) : width_{width}, height_{height},<br>                                         data_{new Color3f[width*height]} {}<br>        [[nodiscard]] int width() const { return width_; }<br>        [[nodiscard]] int height() const { return height_; }<br>        void set_pixel(int x, int y, const Color3f&amp; color) {<br>            data_[x+y*width_] = color;<br>        }<br>        [[nodiscard]] Color3f get_pixel(int x, int y) const {<br>            return data_[x+y*width_];<br>        }<br>        [[nodiscard]] Color3f* data() { return data_.get(); }<br>        [[nodiscard]] const Color3f* data() const { return data_.get(); }<br>        [[nodiscard]] size_t byte_size() const {<br>            return width_ * height_ * sizeof(float) * 
3;<br>        }<br>    private:<br>        int width_, height_; // size dimensions of the image<br>        std::unique_ptr&lt;Color3f[]&gt; data_; // pixel data is sorted left to right, top to bottom<br>    };<br><br>    void store_exr(Image3f &amp;image, std::string_view filename) {<br>        Header header(image.width(), image.height());<br>        header.insert(&quot;comments&quot;, Imf::StringAttribute(&quot;Generated by my awesome App&quot;));<br><br>        ChannelList &amp;channels = header.channels();<br>        channels.insert(&quot;R&quot;, Imf::Channel(Imf::FLOAT));<br>        channels.insert(&quot;G&quot;, Imf::Channel(Imf::FLOAT));<br>        channels.insert(&quot;B&quot;, Imf::Channel(Imf::FLOAT));<br><br>        FrameBuffer frame_buffer;<br>        size_t comp_stride = sizeof(float);<br>        size_t pixel_stride = 3 * comp_stride;<br>        size_t row_stride = pixel_stride * image.width();<br><br>        char *data = reinterpret_cast&lt;char *&gt;(image.data());<br>        frame_buffer.insert(&quot;R&quot;, Imf::Slice(Imf::FLOAT, data, pixel_stride, row_stride));<br>        data += comp_stride;<br>        frame_buffer.insert(&quot;G&quot;, Imf::Slice(Imf::FLOAT, data, pixel_stride, row_stride));<br>        data += comp_stride;<br>        frame_buffer.insert(&quot;B&quot;, Imf::Slice(Imf::FLOAT, data, pixel_stride, row_stride));<br><br>        OutputFile file(filename.data(), header);<br>        file.setFrameBuffer(frame_buffer);<br>        file.writePixels(image.height());<br>    }<br>}<br><br>using namespace shrew;<br><br>int main() {<br>    Image3f image{200, 200};<br>    <br>    for(int y = 0; y &lt; image.height(); ++y) {<br>        for(int x = 0; x &lt; image.width(); ++x) {<br>            image.set_pixel(x, y, Color3f{1.f, 1.f, 0.f});<br>        }<br>    }<br><br>    for(int y = 0; y &lt; image.height()/2; ++y) {<br>        for(int x = 0; x &lt; image.width()/2; ++x) {<br>            image.set_pixel(x, y, Color3f{1.f, 0.f, 0.f});<br>        
}<br>    }<br><br>    for(int y = 0; y &lt; image.height()/2; ++y) {<br>        for(int x = image.width()/2; x &lt; image.width(); ++x) {<br>            image.set_pixel(x, y, Color3f{0.f, 1.f, 0.f});<br>        }<br>    }<br><br>    store_exr(image, &quot;test.exr&quot;);<br>}</pre><p>To be able to compile this program, you need the OpenEXR library and have to link it to your application. The <a href="https://bazel.build/">Bazel build system</a> can fetch OpenEXR, build it, and link it to your application. Assuming that the above source code is stored in a file named main.cpp, you can build and run this program via Bazel by creating the following files:</p><pre>mkdir openexr_example<br>cd openexr_example<br>echo &#39;7.2.0&#39; &gt; .bazelversion<br>echo &#39;build --enable_platform_specific_config<br>build:macos --cxxopt=-std=c++2b<br>build:linux --cxxopt=-std=c++20<br>build:windows --cxxopt=/std:c++20<br>&#39; &gt; .bazelrc<br>echo &#39;cc_binary(<br>    name = &quot;Demo&quot;,<br>    srcs = [&quot;main.cpp&quot;],<br>    deps = [&quot;@openexr//:OpenEXR&quot;],<br>)&#39; &gt; BUILD.bazel<br>echo &#39;bazel_dep(name = &quot;openexr&quot;, version = &quot;3.2.4&quot;)&#39; &gt; MODULE.bazel</pre><p>The above bash script creates the following files:</p><p>.bazelversion:</p><pre>7.2.0</pre><p>.bazelrc:</p><pre>build --enable_platform_specific_config<br>build:macos --cxxopt=-std=c++2b<br>build:linux --cxxopt=-std=c++20<br>build:windows --cxxopt=/std:c++20</pre><p>BUILD.bazel:</p><pre>cc_binary(<br>    name = &quot;Demo&quot;,<br>    srcs = [&quot;main.cpp&quot;],<br>    deps = [&quot;@openexr//:OpenEXR&quot;],<br>)</pre><p>MODULE.bazel:</p><pre>bazel_dep(<br>    name = &quot;openexr&quot;, <br>    version = &quot;3.2.4&quot;<br>)</pre><p>Now you can run the application via:</p><pre>bazel run //:Demo</pre><h3>Modern low dynamic range formats</h3><p>A prominent option for low dynamic range (LDR) images is the PNG format. There are many different options for storing LDR images. 
One of the newer options is <a href="https://developers.google.com/speed/webp">WebP</a>. Similar to PNG, WebP provides lossless image compression. Here is an example of a PNG image:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*J4rEz0jM4T3ZeJGzimzXqQ.png" /></figure><p>For the above example image, the WebP variant is about 0.5 MB smaller:</p><p>Format | Size<br>PNG | 1.7 MB<br>WebP | 1.2 MB</p><p>If you serve many images per day via a web service and have to pay for network traffic and CPU usage, image compression can become very important cost-wise. When it comes to compression, there is often a trade-off between compression speed, decompression speed, and file size.</p><p>Here is an example of how image data can be stored as a WebP file using the webp library:</p><pre>bool store_webp(const char *filename, const Image4b &amp;image) {<br>    int stride = image.width() * static_cast&lt;int&gt;(sizeof(Color4b));<br>    uint8_t* out = nullptr;<br>    size_t encoded_size = WebPEncodeLosslessRGBA(image.data(), image.width(), image.height(), stride, &amp;out);<br>    if (encoded_size == 0) {<br>        return false; // encoding failed<br>    }<br>    FILE* file = fopen(filename, &quot;wb&quot;);<br>    if (file == nullptr) {<br>        WebPFree(out);<br>        return false;<br>    }<br>    fwrite(out, 1, encoded_size, file);<br>    fclose(file);<br>    WebPFree(out); // free the buffer allocated by the encoder<br><br>    return true;<br>}</pre><p>The full source code for this can be found <a href="https://github.com/Vertexwahn/FlatlandRT/blob/dca59477359c3554c2356052a138345458644490/devertexwahn/imaging/io/io_webp.cpp#L15C1-L26C2">here</a>. A good library that supports many image formats is <a href="https://github.com/AcademySoftwareFoundation/OpenImageIO">OpenImageIO</a>.</p><h3>Spectral images</h3><p>Spectral render engines produce spectral images. Instead of storing this data in an RGB image format, one approach is to store it in a spectral image format. The paper <a href="https://jcgt.org/published/0010/03/01/paper.pdf">An OpenEXR Layout for Spectral Images</a> gives an overview of the current state of affairs. 
A spectral viewer can be found <a href="https://mrf-devteam.gitlab.io/spectral-viewer/about/">here</a>.</p><h3>To filter or not to filter? That is the question.</h3><p>Usually, some filtering process takes place during rendering to avoid effects such as aliasing. The problem with this procedure is that once the rendering result is stored as an image, you cannot change the filtering anymore. For instance, if you used a tent filter with a radius of 2 pixels as a reconstruction filter and now want to change the radius to 3 pixels, you have to re-render the whole scene, even though the filtering takes only a small fraction of the whole rendering time. If we had stored all samples with their corresponding values, we could reuse them when we merely change the reconstruction filter. I have always wondered if it makes sense to store individual samples in an image format. Of course, this can get expensive if we render, for instance, with 8192 samples per pixel: an image that is 1 MB in its filtered form would blow up to roughly 8 GB. Maybe if the rendering time is really long (&gt; 12h) and we spend a lot of time tweaking the reconstruction filter, such a format would make sense.</p><h3>References</h3><ul><li><a href="https://www.pauldebevec.com/Research/HDR/PFM/">PFM Portable FloatMap Image Format</a></li><li><a href="https://paulbourke.net/dataformats/pbmhdr/">Unofficial PBM format for HDR images, PFM (Portable Float Map)</a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=52bbc8a523b4" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Manage Terraform state in AWS]]></title>
            <link>https://medium.com/@Vertexwahn/manage-terraform-state-in-a-aws-dce66788ed1?source=rss-6a0c100abfe0------2</link>
            <guid isPermaLink="false">https://medium.com/p/dce66788ed1</guid>
            <category><![CDATA[terraform]]></category>
            <category><![CDATA[aws]]></category>
            <category><![CDATA[infrastructure-as-code]]></category>
            <dc:creator><![CDATA[Vertexwahn]]></dc:creator>
            <pubDate>Mon, 02 Oct 2023 09:00:34 GMT</pubDate>
            <atom:updated>2023-10-08T16:58:21.957Z</atom:updated>
<content:encoded><![CDATA[<h3>Drawbacks of storing Terraform state locally or in a version control system</h3><p>Terraform manages its state in a terraform.tfstate file. This works well as long as only a single developer works on a single machine. If you want to execute Terraform from a different machine, Terraform is not aware of which resources are already in place and what state they are in, since that machine does not have the state file. One option would be to check the terraform.tfstate file into a version control system such as Git. But this can lead to other problems: two different users might concurrently try to modify the state, and since there is no locking mechanism, this can cause conflicts. Besides this, there are also other issues, such as the terraform.tfstate file leaking secrets like passwords. Instead of storing the Terraform state on your local machine or in a Git repository, another option is to store it on AWS.</p><h3>Migrate from local state to AWS-managed state</h3><h4>Local management of state</h4><p>Assume our setup manages a DynamoDB table on AWS. 
Our setup could look like this:</p><pre>terraform {<br>  required_providers {<br>    aws = {<br>      source  = &quot;hashicorp/aws&quot;<br>      version = &quot;4.7.0&quot;<br>    }<br>  }<br>}<br><br>provider &quot;aws&quot; {<br>  region = &quot;eu-west-1&quot;<br>}<br><br>resource &quot;aws_dynamodb_table&quot; &quot;favorite_songs_dynamodb_table&quot; {<br>  name           = &quot;FavoriteSongs&quot;<br>  read_capacity  = 10<br>  write_capacity = 10<br>  hash_key       = &quot;Title&quot;<br><br>  attribute {<br>    name = &quot;Title&quot;<br>    type = &quot;S&quot;<br>  }<br><br>  attribute {<br>    name = &quot;Artist&quot;<br>    type = &quot;S&quot;<br>  }<br><br>  attribute {<br>    name = &quot;PlayTimeInSeconds&quot;<br>    type = &quot;N&quot;<br>  }<br><br>  global_secondary_index {<br>    name               = &quot;ArtistPlayTimeInSecondsIndex&quot;<br>    hash_key           = &quot;Artist&quot;<br>    range_key          = &quot;PlayTimeInSeconds&quot;<br>    write_capacity     = 10<br>    read_capacity      = 10<br>    projection_type    = &quot;INCLUDE&quot;<br>    non_key_attributes = [&quot;Title&quot;]<br>  }<br>}</pre><p>Please note that the choice of hash_key and global_secondary_index here is arbitrary and owes to my lack of deeper knowledge of DynamoDB. Consider it some kind of “dummy” for a resource that we want to manage. If you are interested in more details on DynamoDB, <a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html">consider the official documentation</a>. Now let&#39;s focus on Terraform state management again.</p><p>Let&#39;s create a local state. Execute:</p><pre>terraform init<br>terraform plan -out=plan</pre><p>There should now be a terraform.tfstate file. 
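For orientation: once the configuration has been applied, terraform.tfstate is plain JSON. A heavily trimmed, illustrative excerpt for the table above might look roughly like this (field values are hypothetical):

```json
{
  "version": 4,
  "resources": [
    {
      "mode": "managed",
      "type": "aws_dynamodb_table",
      "name": "favorite_songs_dynamodb_table",
      "instances": [
        { "attributes": { "name": "FavoriteSongs", "hash_key": "Title" } }
      ]
    }
  ]
}
```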
By executing terraform apply &quot;plan&quot; you should find a new table named FavoriteSongs under tables in the DynamoDB section of the AWS Management Console.</p><h4>Creating an S3 bucket for storing Terraform state</h4><p>For storing the Terraform state we will use an S3 bucket. It can be created like this:</p><pre>resource &quot;aws_s3_bucket&quot; &quot;terraform_state&quot; {<br>  bucket = &quot;terraform-favorite-songs-state&quot;<br><br>  lifecycle {<br>    prevent_destroy = true # prevent deleting this S3 bucket by accident via terraform destroy<br>  }<br>}</pre><p>Please note that by setting the prevent_destroy attribute we prevent the S3 bucket from being deleted without manual intervention.</p><p>Furthermore, we enable versioning for the S3 bucket:</p><pre>resource &quot;aws_s3_bucket_versioning&quot; &quot;versioning&quot; {<br>  bucket = aws_s3_bucket.terraform_state.id<br>  versioning_configuration {<br>    status = &quot;Enabled&quot;<br>  }<br>}</pre><p>And encryption:</p><pre>resource &quot;aws_s3_bucket_server_side_encryption_configuration&quot; &quot;encryption&quot; {<br>  bucket = aws_s3_bucket.terraform_state.id<br><br>  rule {<br>    apply_server_side_encryption_by_default {<br>      sse_algorithm = &quot;AES256&quot;<br>    }<br>  }<br>}</pre><p>To prevent users from modifying the Terraform state in parallel, we need support for locking. 
We will use a DynamoDB table for this:</p><pre>resource &quot;aws_dynamodb_table&quot; &quot;terraform_locks&quot; {<br>  name         = &quot;terraform_favorite_songs_terraform_state_locks&quot;<br>  billing_mode = &quot;PAY_PER_REQUEST&quot;<br>  hash_key     = &quot;LockID&quot;<br><br>  attribute {<br>    name = &quot;LockID&quot;<br>    type = &quot;S&quot;<br>  }<br>}</pre><p>As a last step, we need to tell Terraform to use the S3 bucket for storing the Terraform state file:</p><pre>terraform {<br>  backend &quot;s3&quot; {<br>    bucket         = &quot;terraform-favorite-songs-state&quot;<br>    key            = &quot;global/s3/terraform.tfstate&quot;<br>    region         = &quot;eu-west-1&quot;<br>    dynamodb_table = &quot;terraform_favorite_songs_terraform_state_locks&quot;<br>    encrypt        = true<br>  }<br>}</pre><p>When we now run terraform apply, we get a notification that we first have to run terraform init. The reason for this is that we are now going to move our local state to an AWS-managed state. After terraform init, a terraform apply can be performed. One more problem that can occur is that Terraform complains about the missing S3 bucket or DynamoDB table referenced in the S3 backend configuration. To circumvent this, first comment out the backend section, run Terraform again, and then comment it back in.</p><p>After this, the terraform.tfstate file on the local disk should be empty, and you should find a Terraform state file in the AWS S3 bucket.</p><h3>Making it more complicated</h3><p>If we have two AWS accounts, one for testing and another for production, and try to apply the same Terraform configuration to both accounts, we will run into some trouble. The reason for this is the S3 bucket: S3 bucket names are globally unique, so you cannot use the same bucket name in two different accounts.</p><p>Therefore we introduce a file named testing.tfbackend. 
The file contains the backend configuration values that are specific to the test environment:</p><pre>bucket         = &quot;terraform-favorite-songs-state&quot;<br>key            = &quot;global/s3/terraform.tfstate&quot;<br>region         = &quot;eu-west-1&quot;<br>dynamodb_table = &quot;terraform_favorite_songs_terraform_state_locks&quot;</pre><p>We change the backend block this way:</p><pre>terraform {<br>  backend &quot;s3&quot; {<br>    encrypt        = true<br>  }<br>}</pre><p>Now we run terraform init -backend-config=testing.tfbackend . Via -backend-config we can provide different attribute values using a file. This way we can have one file for the test environment and another one for the production environment to avoid name clashes between S3 buckets.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=dce66788ed1" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How I would change Medium]]></title>
            <link>https://medium.com/@Vertexwahn/what-i-would-change-about-medium-1d1b94790608?source=rss-6a0c100abfe0------2</link>
            <guid isPermaLink="false">https://medium.com/p/1d1b94790608</guid>
            <category><![CDATA[medium]]></category>
            <category><![CDATA[mathematics]]></category>
            <category><![CDATA[user-experience]]></category>
            <category><![CDATA[feature-request]]></category>
            <dc:creator><![CDATA[Vertexwahn]]></dc:creator>
            <pubDate>Thu, 29 Jun 2023 22:56:11 GMT</pubDate>
            <atom:updated>2025-07-10T19:24:54.550Z</atom:updated>
            <content:encoded><![CDATA[<h3>Add support for Mathematical Formulas</h3><p>In my <a href="https://vertexwahn.de/post/">personal blog</a>, which is based on <a href="https://gohugo.io/">Hugo</a>, I did some minor modifications to support <a href="https://www.latex-project.org/">TeX</a>. This enables me to easily embed math formulas in my blog posts. Here is an example:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/831/1*SQikmc6uPYh93w1nJoQMKg.png" /><figcaption>Example of embedding TeX in my blog post. Math formulas can be embedded directly in flowing text or stand on their own. It is even possible to number equations (not shown in this image).</figcaption></figure><p>The syntax for embedding such formulas looks like this:</p><pre>The ${\chi}^2$ test can be used to determine if an observed sample distribution matches an expected sample distribution. <br><br>For instance, given a sample generator that should generate uniformly distributed 2D samples ($s_n$) within the domain $[0,1) \times [0,1)$, the ${\chi}^2$ test can be used to check if the generated samples are really uniformly distributed.<br>As a null hypothesis, we can formulate that the sample generator generates uniformly distributed samples.<br><br>&lt;div&gt;$$ H_o: \forall s_n: p(s_n) = \text{const}<br>$$&lt;/div&gt;<br><br>The sample generator can be tested by computing the difference between the expected sample frequency ($s_e$) and the observed sample frequency ($s_o$) according to (assuming $s_e \ge 5$ ):</pre><p>In my blog, <a href="https://www.mathjax.org/">MathJax</a> is used to render TeX formulas. It would be very nice to have support for TeX in Medium. Integrating MathJax into a personal web page takes less than 3 lines of code. When searching for solutions on how to integrate math equations in Medium, you find many unsatisfying, half-baked solutions. One approach is to convert your equation into a raster image and embed it as an image on Medium. 
This way you lose the ability to do quick changes/iterations and have to repeat this cycle once you find an error in your equation. Hopefully, you have the original equation stored as a backup — otherwise, you have to retype it. Besides this, you need another tool to type it. I assume that storing a TeX equation takes less memory and traffic than transmitting images of equations. Furthermore, it would attract more people to Medium. What do you fear, Medium people? Are you afraid of mathematical formulas? Better user experience, less traffic, and less data to store. Do it!</p><p><a href="https://upmath.me/">Upmath</a> shows a very nice way to integrate TeX support into Markdown. Meanwhile, <a href="https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/writing-mathematical-expressions">GitHub also supports a similar style for adding TeX formulas</a> to Markdown files. <a href="https://dev.to/p/editor_guide#katex-embed">DEV</a> uses KaTeX.</p><h4>Embeds are not a good solution for mathematical formulas</h4><p>Embeds are Medium’s way of embedding content from other parties. There are more than 300 providers.</p><p><a href="https://help.medium.com/hc/en-us/articles/214981378-Using-embeds">Using embeds</a></p><p>For instance, there is a provider for Twitter that allows you to embed Tweets in your Medium post, such as this one:</p><h3>Vertexwahn on Twitter: &quot;FlatlandRT is a 2D ray tracer visualization tool. Just released version 1.1.0: https://t.co/9T4xH6lTDD pic.twitter.com/2TsRk4XX7Z / Twitter&quot;</h3><p>All you have to do is copy the tweet URL, paste it into your Medium editor, and it gets automatically converted into an embedded tweet. On the list of supported providers, you can also find <a href="https://texblocks.com/">https://texblocks.com/</a>, which gives you TeX support for Medium. 
Unfortunately, it does not work for complex formulas, e.g.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/365/1*torRgina5TA_dZ-pxXK92A.png" /></figure><p>becomes:</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Ftexblocks.com%2Fembed%2F%5Cint_a%5Eb+f%28x%29+dx+%5Capprox+%5Cfrac%7B1%7D%7BN%7D+%5Csum_%7Bi%253D0%7D%5E%7BN-1%7D%7B%5Cfrac%7Bf%28x_i%29%7D%7Bp%28x_i%29%7D%7D&amp;display_name=texblocks&amp;url=https%3A%2F%2Ftexblocks.com%2Fembed%2F%255Cint_a%255Eb%2520f%28x%29%2520dx%2520%255Capprox%2520%255Cfrac%257B1%257D%257BN%257D%2520%255Csum_%257Bi%253D0%257D%255E%257BN-1%257D%257B%255Cfrac%257Bf%28x_i%29%257D%257Bp%28x_i%29%257D%257D&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=texblocks" width="500" height="500" frameborder="0" scrolling="no"><a href="https://medium.com/media/cf437ed72d7329a315b9446664596ffa/href">https://medium.com/media/cf437ed72d7329a315b9446664596ffa/href</a></iframe><p>Besides the broken TeX rendering, there is also some annoying empty space around the formula (or, in some browsers, you do not see anything at all).</p><p>Other people have struggled with this issue, too:</p><p><a href="https://medium.com/notonlymaths/using-latex-on-medium-6200fc8d0783">Using LaTeX on Medium</a></p><h3>Add support for tables</h3><p>There are no tables in Medium. WTF? Again, you have to use workarounds:</p><ul><li><a href="https://medium.com/@mesirii/5-tips-for-embedding-tables-in-your-medium-posts-8722f3fc5bf5">5 Tips for Embedding Tables in Your Medium Posts</a></li><li><a href="https://levelup.gitconnected.com/3-tips-to-sharing-beautiful-tables-on-medium-post-25dab18670e">3 Tips to Sharing Beautiful Tables on Medium Post</a></li></ul><p>The most promising one seems to be this one:</p><p><a href="https://blog.sheetsu.com/show-table-on-medium-ba0be0c16c59">Show table on Medium</a></p><p>The same arguments as for “Mathematical Formulas” can be used here. 
Embedding tables via a syntax, such as</p><pre>| Month    | Savings  |<br>| -------- | -------- |<br>| January  | $2250    |<br>| February | $180     |<br>| March    | $6420    |</pre><p>or providing a table editor, would lead to faster iterations in writing (since no external hacks or tools for tables have to be used). It could also save memory and traffic, since some users will simply screenshot a table and embed it as an image.</p><h3>Tear down the paywall</h3><p>Assume you have on average 100 readers per month for your written articles. Furthermore, let us assume that about 1/10 of those readers have a Medium membership. Putting your article behind the paywall would mean that you immediately lose 90% of your readers. Unfortunately, the only way to earn money with your own writings on Medium seems to be putting them behind the paywall. Why is it not possible to earn money with an article that is not behind the paywall? It is clear that you cannot get paid for all 100 readers, but at least for the 10% that have a paid subscription. Let the content creator decide how content should be shared with the community. The current model leads to the effect that 90% of the readers get frustrated when facing the paywall. It is likely that in the future non-members will simply ignore “Medium” search results and go for other blog services. If a non-member accesses Medium very frequently, maybe advertisements also make sense to compensate for tearing down the paywall. Maybe a model where non-members always get advertisements, logged-in non-paying members get 3 advertisement-free articles, etc. would make sense here. A non-paying site visitor is better than no visitor at all.</p><h3>Free membership for writers with many views/reads</h3><p>Writers who have more than a certain number of views/reads on their articles should be offered a free membership.</p><h3>Last words</h3><p>If I had to describe the design of Medium: it&#39;s black and white with a focus on text and simplicity. 
Adding the previously described features may change the DNA of Medium. Users would get a more complicated user interface for editing stories, and the look &amp; feel would change since you could play around with tables, formulas, etc. Maybe this is the chance for another platform that is more focused on technical writing. There are several competitors, such as <a href="https://dev.to/">DEV</a>.</p><p>For Hugo, there is a <a href="https://jamstackthemes.dev/demo/theme/hugo-mediumish-theme/">Medium Theme</a>. Maybe I will come up with my own solution in the future. Hugo also supports <a href="https://gohugo.io/content-management/diagrams/">diagrams</a>. Unfortunately, this takes additional effort on my side, and I would lose the great Medium editor and the read statistics this way. Please let me know what you think about my proposed additions and if you have found any alternatives.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=1d1b94790608" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Is putting all project dependencies inside the project’s repository a good practice?]]></title>
            <link>https://medium.com/@Vertexwahn/is-putting-all-project-dependencies-inside-projects-repository-good-practice-2b275f4fc3ce?source=rss-6a0c100abfe0------2</link>
            <guid isPermaLink="false">https://medium.com/p/2b275f4fc3ce</guid>
            <category><![CDATA[coding]]></category>
            <category><![CDATA[dependency-management]]></category>
            <category><![CDATA[dependencies]]></category>
            <dc:creator><![CDATA[Vertexwahn]]></dc:creator>
            <pubDate>Sun, 16 Apr 2023 01:03:12 GMT</pubDate>
            <atom:updated>2025-01-18T14:14:55.152Z</atom:updated>
            <content:encoded><![CDATA[<h3>Storing all external dependencies in third_party folder</h3><p>On StackOverflow, I stumbled over the question “<a href="https://stackoverflow.com/questions/32154192/is-putting-all-project-dependencies-inside-projects-repository-good-practice">Is putting all project dependencies inside project’s repository good practice?</a>”.</p><p>Actually, I have one public Git repository named <a href="https://github.com/Vertexwahn/FlatlandRT">FlatlandRT</a> that tries to follow this idea. The file structure of this repository looks like this (generated via tree -L 2):</p><pre>.<br>├── azure-pipelines.yml<br>├── devertexwahn<br>│   ├── ci<br>│   ├── core<br>│   ├── coverage.sh<br>│   ├── flatland<br>│   ├── imaging<br>│   ├── math<br>│   ├── okapi<br>│   └── WORKSPACE.bazel<br>├── docs<br>│   └── images<br>├── LICENSE<br>├── README.md<br>└── third_party<br>    ├── abseil-cpp<br>    ├── bazel-skylib<br>    ├── bazel-toolchain-0.8<br>    ├── Catch2<br>    ├── eigen-3.4.0<br>    ├── fmt<br>    ├── gflags<br>    ├── glog<br>    ├── googletest<br>    ├── hypothesis<br>    ├── Imath-3.1.7<br>    ├── libjpeg-turbo-2.1.4<br>    ├── libpng-1.6.39<br>    ├── nasm-2.14.02<br>    ├── openexr<br>    ├── pcg-cpp<br>    ├── pugixml-1.13<br>    ├── rules_boost<br>    ├── rules_pkg-0.9.0<br>    ├── software-bill-of-materials.md<br>    ├── xtensor<br>    ├── xtl<br>    └── zlib-1.2.13</pre><p>The idea of the folder third_party is that it contains all external dependencies. Every third-party dependency (e.g. library, tool, etc.) should be added in source code form to the third_party folder.</p><p>There are some good reasons for this:</p><ul><li><strong>Legal issues</strong>: Most open-source projects require you to reproduce the original license and copyright notes. 
By simply copying everything you downloaded and distributing it again, you are most likely already fulfilling all legal constraints for the redistribution of an external dependency in source code form. Another scenario: it can happen that for legal reasons the license notices of certain files have to be checked — this is not possible without the original source. It can also happen that a third-party dependency changes its license model — in this case, it is good to have a backup/proof that the source was distributed under a certain license in the past.</li><li><strong>Reproducibility</strong>: It can happen that an external dependency is deleted and cannot be restored anymore, especially when using smaller open-source projects that only have a single maintainer.</li><li><strong>Offline build</strong>: Allows building the software without an internet connection. If you have all dependencies in your repo, you do not need to retrieve them in other ways, e.g. by fetching them with a package manager, downloading them, etc.</li><li><strong>Easy modification</strong>: Allows easy modification of the source code. If the code of all third-party libs is in the third_party folder and all third-party libs are built from source, it is very easy to make quick changes and fixes and test them with your codebase. Refactorings are not blocked by a complicated release and versioning process.</li></ul><h3>Managing third-party libraries</h3><h4>Naming conventions</h4><p>I came up with some naming conventions for folder names in the third_party folder:</p><p>If a specific release version of a library is used, this version becomes part of the directory name. For instance, for zlib version 1.2.13 the directory name becomes zlib-1.2.13. This also allows having different release versions of a library in the third_party folder at the same time.</p><p>In case a specific commit hash was used (e.g. 
from a Git repository), the corresponding commit hash is added as a suffix to the folder name. For instance, when using the commit 84c8fe9fabbd149dff42f854c07fabbe286f93a8 of the repository <a href="https://github.com/bazelbuild/rules_license">rules_license</a>, the corresponding folder name becomes rules_license-84c8fe9fabbd149dff42f854c07fabbe286f93a8. This way it becomes clear how the repository was materialized.</p><p>Unfortunately, adding the commit as a suffix does not work very well with Git and GitHub. For instance, when there is an update of <a href="https://github.com/google/googletest">GoogleTest</a>, all files that have not changed are shown as moved. That pollutes the diff view on Git and GitHub and makes it difficult to find the “real” changes. Therefore, there is a third convention: use only the library name, e.g. googletest, and track the current commit hash in the file software-bill-of-materials.md. This file tracks the corresponding commit hash for every folder:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Oqz-oEAooC0nNao420QCdQ.png" /><figcaption>Markdown that tracks commit hashes</figcaption></figure><h4>Update of libraries</h4><p>I wrote a shell script that auto-updates my libraries:</p><pre>#!/usr/bin/env bash<br><br>#<br>#   SPDX-FileCopyrightText: 2022-2023 Julian Amann &lt;dev@vertexwahn.de&gt;<br>#   SPDX-License-Identifier: Apache-2.0<br>#<br><br>set -euxo pipefail<br><br>git_repos_with_bom_update=(<br>    &quot;https://github.com/abseil/abseil-cpp&quot;<br>    &quot;https://github.com/AcademySoftwareFoundation/openexr&quot;<br>    &quot;https://github.com/bazelbuild/bazel-skylib&quot;<br>    &quot;https://github.com/bazelbuild/bazelisk&quot;<br>    &quot;https://github.com/catchorg/Catch2&quot;<br>    &quot;https://github.com/gflags/gflags&quot;<br>    &quot;https://github.com/google/benchmark&quot;<br>    &quot;https://github.com/google/glog&quot;<br>    &quot;https://github.com/google/googletest&quot;<br>    
&quot;https://github.com/imneme/pcg-cpp&quot;<br>    &quot;https://github.com/jbeder/yaml-cpp&quot;<br>    &quot;https://github.com/martis42/depend_on_what_you_use&quot;<br>    &quot;https://github.com/nelhage/rules_boost&quot;<br>    &quot;https://github.com/nlohmann/json&quot;<br>    &quot;https://github.com/oneapi-src/oneTBB&quot;<br>    &quot;https://github.com/pytorch/cpuinfo&quot;<br>    #&quot;https://github.com/grailbio/bazel-toolchain&quot; # Issues on macOS<br>)<br><br>for repo in &quot;${git_repos_with_bom_update[@]}&quot;<br>do<br>    bazel run //modernizer:update_git_archive_v2 -- &quot;${THIRD_PARTY_DIR}&quot; &quot;${repo}&quot;<br>done</pre><p>The script that fetches a library and updates the commit hash looks like this:</p><pre>#!/usr/bin/env bash<br><br>#<br>#   SPDX-FileCopyrightText: 2022-2023 Julian Amann &lt;dev@vertexwahn.de&gt;<br>#   SPDX-License-Identifier: Apache-2.0<br>#<br><br>set -euxo pipefail<br><br># Provide third_party folder and GitHub repo url as command line arguments<br>if [ &quot;$#&quot; -ne 2 ]; then<br>    echo &quot;Wrong number of parameters detected&quot;<br>    echo &quot;Usage: $0 &lt;third_party_dir&gt; &lt;git_hub_url&gt;&quot;<br>    exit 1<br>fi<br><br># First command line argument<br>third_party_dir=$1 # First command line argument e.g. ~/dev/Piper/third_party<br>git_hub_url=$2 # E.g. 
&quot;https://github.com/imneme/pcg-cpp&quot;<br><br># Get github repo name from git_hub_url<br>git_hub_repo_name=$(echo &quot;$git_hub_url&quot; | sed -e &#39;s/.*\///g&#39;)<br><br># Create a temporary directory<br>tmpdir=$(mktemp -d)<br><br># Get rid of temporary files when script exits<br>trap &quot;rm -rf $tmpdir&quot; EXIT<br><br># Clone repo to tmp folder<br>cd &quot;$tmpdir&quot;<br>git clone &quot;$git_hub_url&quot;<br><br># Determine hash<br>cd &quot;$git_hub_repo_name&quot;<br>commit_hash=$(git rev-parse HEAD)<br><br># Determine hash without cloning<br>#git ls-remote &quot;$git_hub_url&quot; | \<br>#   grep refs/heads/master | cut -f 1<br><br># Delete last version of the third party dependency<br>cd &quot;$third_party_dir&quot;<br>old_dir=$(ls | grep &quot;$git_hub_repo_name&quot;)<br>old_hash=$(ls | grep &quot;$git_hub_repo_name&quot; | sed &quot;s/$git_hub_repo_name-//&quot;)<br>echo &quot;Old hash is&quot; &quot;$old_hash&quot;<br>rm -rf &quot;$old_dir&quot;<br><br># Download and extract latest third party dependency to third_party folder<br>cd &quot;$tmpdir&quot;<br>curl -L &quot;$git_hub_url/archive/$commit_hash.zip&quot; --output &quot;$git_hub_repo_name-$commit_hash.zip&quot;<br>sha256=$(sha256sum &quot;$git_hub_repo_name-$commit_hash.zip&quot; | cut -d &#39; &#39; -f 1)<br>echo &quot;$sha256&quot;<br>unzip -o &quot;$git_hub_repo_name-$commit_hash.zip&quot; -d &quot;$third_party_dir&quot;<br><br># Rename folder to exclude hash<br>mv &quot;$third_party_dir&quot;/&quot;$git_hub_repo_name&quot;-&quot;$commit_hash&quot; &quot;$third_party_dir&quot;/&quot;$git_hub_repo_name&quot;<br><br># Change commit hash<br>cd &quot;$third_party_dir&quot; || exit -1<br>grep -n &quot;$git_hub_repo_name&quot; &quot;software-bill-of-materials.md&quot;<br><br>lineNum=&quot;$(grep -n &quot;$git_hub_repo_name&quot; software-bill-of-materials.md | head -n 1 | cut -d: -f1)&quot;<br>if [[ &quot;$OSTYPE&quot; == &quot;linux-gnu&quot;* 
]]; then<br>    sed -i $lineNum&#39;s/&#39;$git_hub_repo_name&#39;.*/&#39;$git_hub_repo_name&#39;-&#39;$commit_hash&#39;/&#39; software-bill-of-materials.md<br>else<br>    sed -i &#39;&#39; $lineNum&#39;s/&#39;$git_hub_repo_name&#39;.*/&#39;$git_hub_repo_name&#39;-&#39;$commit_hash&#39;/&#39; software-bill-of-materials.md<br>fi</pre><p>There is also a nightly CI pipeline that runs this script:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*AKh4snFU08JAsHyX9jOv2w.png" /><figcaption>Nightly CI pipeline to run the script and to auto-update libraries</figcaption></figure><p>The GitHub workflow looks something like this:</p><pre>#<br>#   SPDX-FileCopyrightText: Copyright 2022-2023 Julian Amann &lt;dev@vertexwahn.de&gt;<br>#   SPDX-License-Identifier: Apache-2.0<br>#<br><br>name: Update third-party dependencies<br>on:<br>  schedule:<br>    - cron: &quot;0 0 * * *&quot;<br>  workflow_dispatch:<br>  #push: {}<br>    <br>jobs:<br>  build:<br>    runs-on: ubuntu-22.04<br><br>    steps:<br>      - name: Show OS version<br>        run: |<br>          lsb_release -a<br>          <br>      - name: Show Bazel version<br>        run: |<br>          bazelisk version<br><br>      - name: Show GCC version<br>        run: |<br>          gcc --version<br><br>     <br>      - uses: actions/checkout@v3<br><br>             <br>      - name: Run third-party dependencies update script<br>        run: |<br>          ./update_third_party_dependencies.sh &quot;/home/runner/work/Piper/Piper&quot;<br><br>      - name: Add Linux specific Bazel ignore file<br>        run: |<br>          cd devertexwahn<br>          cp &quot;.bazelignore.linux&quot; &quot;.bazelignore&quot;<br><br>     <br>      - name: Build and test using GCC11 on Ubuntu 22.04<br>        run: |<br>          cd devertexwahn<br>          bazelisk build --config=gcc11 --config=buildbuddy_remote_cache -- //...<br>          bazelisk test --config=gcc11 --config=buildbuddy_remote_cache -- //...<br>          
bazelisk build --config=gcc11 --compilation_mode=dbg --config=buildbuddy_remote_cache -- //...<br>          bazelisk test --config=gcc11 --compilation_mode=dbg --config=buildbuddy_remote_cache -- //...<br>          bazelisk build --config=gcc11 --compilation_mode=opt --config=buildbuddy_remote_cache -- //...<br>          bazelisk test --config=gcc11 --compilation_mode=opt --config=buildbuddy_remote_cache -- //...<br><br>      - name: Build and test using Clang14 on Ubuntu 22.04<br>        run: |<br>          cd devertexwahn<br>          # Compile using Clang14 (fastbuild, debug and optimized)<br>          bazelisk build --config=clang14  --config=buildbuddy_remote_cache -- //...<br>          bazelisk build --config=clang14  --config=buildbuddy_remote_cache --compilation_mode=dbg -- //... <br>          bazelisk test --config=clang14  --config=buildbuddy_remote_cache --compilation_mode=dbg -- //...<br>          bazelisk build --config=clang14  --config=buildbuddy_remote_cache --compilation_mode=opt -- //...<br>          bazelisk test --config=clang14  --config=buildbuddy_remote_cache --compilation_mode=opt -- //...<br>        <br>      - name: Remove Linux specific Bazel ignore file<br>        run: |<br>          cd devertexwahn<br>          rm &quot;.bazelignore&quot;<br><br>      - uses: stefanzweifel/git-auto-commit-action@v4<br>        with:<br>          commit_message: Auto update of dependencies</pre><p>There are also other tools such as <a href="https://github.com/google/copybara">Copybara</a> that could be used to move the changes from other repositories including the whole change history into my FlatlandRT mono repository. I decided against having all changes including all commit messages from all external libraries since this would pollute too much my usage of Git and GitHub. In detail: Most commits would be from changes in external libraries and not my project. 
To be clear: if I had the proper tooling to handle this better, I would like to have every single commit. The practical and manageable solution for me at this point (given the current tools Git and GitHub) is to have, so to say, auto-update snapshots every evening that summarize all changes.</p><h4>Handling of large artifacts</h4><p>In some cases, copying a third-party dependency is not practical, for instance, when considering <a href="https://www.qt.io/download-open-source">Qt</a> or <a href="https://www.boost.org/">Boost</a>. Boost&#39;s source code is already over 100 MB compressed. Since GitHub is used to store the source code, it is a bit unhandy to add more than 100 MB to the repository for a single dependency alone. For Qt, the situation is even worse. If I could, I would also add Boost and Qt to the third_party folder, but size limits of Git and GitHub prevent me from doing this.</p><p>In the end, handling large artifacts is a limitation of Git and GitHub. I really see here a chance for a new version control system and hosting platform that solves this issue and enables this use case. A similar problem arises when you have big data assets.</p><p>“Plain” Git is not a good choice when a repository contains many big data assets (such as textures, 3D models, virtual environments, trained neural nets, audio files, etc.) for different reasons, such as:</p><ul><li>Fetch/clone speed goes down</li><li>Users can probably not check out the repo because of limited disk space (e.g. 
when the repo is &gt; 1 TB)</li></ul><p>Nevertheless, it is desirable to be able to manage data and source code together in one repository.</p><p>There is Git LFS, but it has some drawbacks, e.g.:</p><ul><li>Referenced files can get deleted by accident since they are not part of the “real” repo</li><li>Miscellaneous Git LFS issues when switching between branches</li><li>Users sometimes forget to also check in Git LFS files</li><li>No good migration path if referenced files later move to another location (old commits will then point to invalid file locations)</li></ul><p>Recently, <a href="https://git-scm.com/docs/scalar">scalar</a> was added to Git. Maybe this is a first approach to address some of these problems.</p><p>Anyway, I was also thinking about a lightweight solution to this problem, such as adding a simple script that downloads the “big” stuff, e.g.:</p><pre>echo &quot;Fetch boost...&quot;<br>curl &quot;https://boostorg.jfrog.io/artifactory/main/release/1.82.0/source/boost_1_82_0.tar.gz&quot; --output boost_1_82_0.tar.gz<br>tar -xf boost_1_82_0.tar.gz</pre><p>But this again brings problems, such as files outside of the repo getting changed by accident so that they no longer match the corresponding commit hash. In the end, atomic commits would be desirable to pin everything together. In my current setting, Boost is fetched from the internet as a tar.gz file and hash-checked via <a href="https://bazel.build/">Bazel</a>.</p><h3>Summary</h3><p>I really like having a mono repository with all external dependencies included. Since there are some tooling limitations and I do not have time to come up with my own version control system, one has to live with some compromises here. Nevertheless, I think a version control system that gives atomic commits and can easily handle an insane amount of data would be beneficial in many areas and for many projects (e.g. machine learning). 
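As a side note to the download-script idea above: the accidental-modification problem can at least be detected by pinning a checksum next to the script, which is essentially what the Bazel hash check does for me today. The following is only a minimal sketch of that idea, not part of my actual setup; the file names and payloads are made up for illustration:

```shell
#!/usr/bin/env bash
# Sketch: pin a downloaded archive to a known-good SHA-256 checksum so that
# accidental local modifications are detected before extraction.
# File names and payloads here are made up for illustration.
set -euo pipefail

# At pin time, the known-good hash is recorded once (in a real setup it
# would be committed to the repository next to the download script).
printf 'archive payload v1' > reference.tar.gz
pinned=$(sha256sum reference.tar.gz | cut -d ' ' -f 1)

# Later, a freshly fetched copy must match the pin before it is extracted.
printf 'archive payload v1' > download.tar.gz
actual=$(sha256sum download.tar.gz | cut -d ' ' -f 1)

if [ "$actual" = "$pinned" ]; then
    echo "checksum ok"
else
    echo "checksum mismatch"
    exit 1
fi
```

Any accidental edit to the downloaded file would change its hash and make the check fail instead of silently extracting content that no longer matches the commit.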
Maybe the solution would be something like GitHub, but with the difference that there is only ONE BIG mono repo for all users with ONE CI for everyone.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=2b275f4fc3ce" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>