<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Mat Ryer on Medium]]></title>
        <description><![CDATA[Stories by Mat Ryer on Medium]]></description>
        <link>https://medium.com/@matryer?source=rss-f25c357b8e4c------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*7ATMip_zpawlPOtuxVNskw.jpeg</url>
            <title>Stories by Mat Ryer on Medium</title>
            <link>https://medium.com/@matryer?source=rss-f25c357b8e4c------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sun, 12 Apr 2026 23:03:18 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@matryer/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[PODCAST: Hiring and job interviews]]></title>
            <link>https://medium.com/@matryer/podcast-hiring-and-job-interviews-b2498965477d?source=rss-f25c357b8e4c------2</link>
            <guid isPermaLink="false">https://medium.com/p/b2498965477d</guid>
            <category><![CDATA[golang]]></category>
            <category><![CDATA[interviews-and-insights]]></category>
            <category><![CDATA[interview]]></category>
            <category><![CDATA[podcast]]></category>
            <dc:creator><![CDATA[Mat Ryer]]></dc:creator>
            <pubDate>Tue, 14 May 2019 21:38:44 GMT</pubDate>
            <atom:updated>2019-05-14T21:38:44.413Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*zBK73reb_dWdxhBo_2fu1g.png" /></figure><p>I chat with Ashley McNamara, Johnny Boursiquot, and Carmen Andoh about the process of getting hired, hiring, and job interviews. If people are the most important part of a team, how do we pick who we work with? What’s the process like? How can it be better?</p><p>Put this in your ears…</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fchangelog.com%2Fgotime%2F82%2Fembed&amp;url=https%3A%2F%2Fchangelog.com%2Fgotime%2F82&amp;image=https%3A%2F%2Fcdn.changelog.com%2Fuploads%2Fcovers%2Fgo-time-original.png%3Fv%3D63688781289&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=changelog" width="500" height="220" frameborder="0" scrolling="no"><a href="https://medium.com/media/94124863187fa119e8697236a54d167d/href">https://medium.com/media/94124863187fa119e8697236a54d167d/href</a></iframe><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=b2498965477d" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Building an engine with the Veritone Engine Toolkit]]></title>
            <link>https://medium.com/machine-box/building-an-engine-with-the-veritone-engine-toolkit-1c00f964f2fe?source=rss-f25c357b8e4c------2</link>
            <guid isPermaLink="false">https://medium.com/p/1c00f964f2fe</guid>
            <category><![CDATA[veritone]]></category>
            <dc:creator><![CDATA[Mat Ryer]]></dc:creator>
            <pubDate>Mon, 08 Apr 2019 12:57:29 GMT</pubDate>
            <atom:updated>2019-04-08T12:57:29.765Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*TU_vJxKR1jKhr9sCPlxpcA.png" /><figcaption>Learn more about <a href="https://www.veritone.com/aiware/">aiWARE on the Veritone website</a></figcaption></figure><p><a href="https://www.veritone.com/aiware/">Veritone’s aiWARE platform</a> lets customers run AI solutions at very large scale. It does this by spinning up as many instances of the required cognitive engines as a given use case needs to meet demand and handle the load. When they’re done, they shut down to free resources.</p><p>Engine developers don’t need to worry about this process (the platform handles all of that); they just need to make sure the engine can receive work, process the files, and send the output.</p><p>To put an engine onto the platform, you must assemble a Docker container that integrates with the Veritone aiWARE platform; we recommend using the Veritone Engine Toolkit to do so.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*LuXgFXFhwcAdkbGEcg-aCA.png" /><figcaption>Project homepage: <a href="https://machinebox.io/veritone/engine-toolkit">https://machinebox.io/veritone/engine-toolkit</a></figcaption></figure><h3>The Veritone Engine Toolkit</h3><p>The <a href="https://machinebox.io/veritone/engine-toolkit">Veritone Engine Toolkit</a> does the heavy lifting of integrating with the Veritone aiWARE platform, allowing you to focus on what makes your engine valuable.</p><p>The Engine Toolkit works by interfacing with the aiWARE platform on your behalf (connecting to the messaging queue to receive work and send results, pulling data via GraphQL, and more) and making simple HTTP requests to your engine to process the files.</p><p>You just need to listen on an HTTP endpoint (by writing some kind of simple web server), and provide a webhook which the Engine Toolkit will call when the time is right.</p><figure><img alt="" 
src="https://cdn-images-1.medium.com/max/1024/1*qZTkq1LLLEGwd1IORciDwA.png" /><figcaption>The <a href="https://machinebox.io/veritone/engine-toolkit#process-webhook">Process webhook</a> is where your engine does its work</figcaption></figure><p>The <a href="https://machinebox.io/veritone/engine-toolkit#process-webhook">Process webhook</a> is called for each chunk (a chunk might be an image from a frame, an audio clip, or some other kind of file — depending on what your engine can do), and the response your webhook returns will be delivered by the toolkit to the platform.</p><p>Using whichever framework or mechanism makes sense for your language of choice, you can extract the chunk file from the HTTP request, along with a series of other data which is <a href="https://machinebox.io/veritone/engine-toolkit#process-webhook">outlined in the documentation</a> and previewed on the left.</p><p>The webhook will return a blob of data that describes the output for the work that was done during the processing.</p><p>For example, if our engine is finding faces, we might output some JSON that looks like this:</p><pre>{<br>	&quot;series&quot;: [{<br>		&quot;startTimeMs&quot;: 1000,<br>		&quot;stopTimeMs&quot;: 2000,<br>		&quot;object&quot;: {<br>			&quot;type&quot;: &quot;face&quot;,<br>			&quot;confidence&quot;: 0.95,<br>			&quot;boundingPoly&quot;: [<br>				{&quot;x&quot;:0.3,&quot;y&quot;:0.1}, <br>                                {&quot;x&quot;:0.5,&quot;y&quot;:0.1},<br>				{&quot;x&quot;:0.5,&quot;y&quot;:0.9},<br>                                {&quot;x&quot;:0.3,&quot;y&quot;:0.9}<br>			]<br>		}<br>	}, {<br>		&quot;startTimeMs&quot;: 5000,<br>		&quot;stopTimeMs&quot;: 6000,<br>		&quot;object&quot;: {<br>			&quot;type&quot;: &quot;face&quot;,<br>			&quot;confidence&quot;: 0.95,<br>			&quot;boundingPoly&quot;: [<br>				{&quot;x&quot;:0,&quot;y&quot;:0}, {&quot;x&quot;:1,&quot;y&quot;:0},<br>				{&quot;x&quot;:1,&quot;y&quot;:1}, {&quot;x&quot;:0,&quot;y&quot;:1}<br>			]<br>	
	}<br>	}]<br>}</pre><p>The Veritone Engine Toolkit will post this data via the message queue to the platform on your behalf, and moments later it will be available via the APIs, as well as the applications that run in the platform.</p><p>Here is a complete implementation of a sample <a href="https://machinebox.io/veritone/engine-toolkit#process-webhook">Process webhook</a> handler written in Go:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/967fa8e355c738d6862835f31bc0a492/href">https://medium.com/media/967fa8e355c738d6862835f31bc0a492/href</a></iframe><ul><li>Using just the standard library, notice how the HTTP request provides everything we need (FormValue, FormFile, etc)</li><li>The handler outputs data into the Vendor object, but real engines should return data conforming to the <a href="https://docs.veritone.com/#/engines/engine_standards/">Veritone standard output in the documentation</a></li></ul><h3>Testing your engine</h3><p>Before you deploy your engine to production, you’ll want to test it to ensure that it’s behaving as expected.</p><p>It is recommended that you write code that makes HTTP requests to the endpoints you have implemented. 
This approach allows you to run the code before deploying your engine to production, or even as part of a continuous integration environment.</p><blockquote>You don’t need to include every field that the real requests might contain; just include the data that you are using in your engine.</blockquote><p>Sometimes it’s handy to get your hands on the ins and outs of an API, and you can do this using the Engine Toolkit Test Console, which is embedded in the engine binary.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/501/1*eBnBa92GqOllrW4-GEnUdg.png" /><figcaption>The Engine Toolkit Test Console allows you to make real requests to your engine’s webhooks — a handy way to make sure they’re doing what you want them to</figcaption></figure><p>To activate the Engine Toolkit Test Console, we need to run our engine with some special parameters:</p><pre>docker build -f Dockerfile -t your-engine .<br>docker run -e &quot;VERITONE_TESTMODE=true&quot; -p 9090:9090 -p 8080:8080 --name your-engine -t your-engine</pre><p>The VERITONE_TESTMODE environment variable activates the Console, and since it is served on port 9090, we have to expose that with the -p 9090:9090 flag.</p><p>Once this is running, open your browser at <a href="http://localhost:9090/">http://localhost:9090/</a> to access the Console.</p><p>The Engine Toolkit Test Console automatically checks the engine’s webhooks and other required items (like the Manifest file) so you can be sure your engine is going to work when pushed to production.</p><h4>What about more advanced cases?</h4><p>If your engine needs to perform some more advanced interactions with the platform, you can still use the GraphQL API from within your Veritone Engine Toolkit-powered engine.</p><p>The Veritone aiWARE platform documentation often mentions the Payload JSON, which contains details about the task your engine is being asked to do. 
It also contains a token, which is used to authenticate with the platform; it is this token that allows you to access the GraphQL services.</p><p>The payload comes into the Process webhook via the payload form variable. It is a JSON string, so you must unmarshal it before you can get at the data inside it.</p><p>In Go, the code to do this might look like this:</p><pre>var payload struct {<br>    Token string<br>    // list the other fields you care about<br>    // from the payload here<br>}<br>err := json.Unmarshal([]byte(f.FormValue(&quot;payload&quot;)), &amp;payload)<br>if err != nil {<br>    return errors.Wrap(err, &quot;unmarshal payload&quot;)<br>}<br>// TODO: use payload.Token</pre><p>Whichever language or tech stack you’re using, there will certainly be a library or package available that will parse HTTP requests for you, so consuming webhook requests and providing responses shouldn’t be hard.</p><blockquote>If you need a little help or inspiration, you can check out the <a href="https://machinebox.io/veritone/engine-toolkit#sample-engines">Sample engines</a> from the Veritone Engine Toolkit GitHub repo.</blockquote><p>Once you’ve extracted what you need from the payload, you can write your own code to interact with the APIs to fulfil the purpose of the engine.</p><h3>More information?</h3><p>If you’d like to learn more about these technologies, please check out the following links:</p><ul><li><a href="https://machinebox.io/">Veritone Machine Box Suite</a> — Machine learning capabilities inside Docker containers</li><li><a href="https://www.veritone.com/">Veritone’s aiWARE platform</a> — Enterprise-grade AI at scale</li><li><a href="https://machinebox.io/veritone/engine-toolkit">Veritone Engine Toolkit</a> — an open source project for integrating cognitive engines with the aiWARE platform</li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=1c00f964f2fe" width="1" height="1" alt=""><hr><p><a 
href="https://medium.com/machine-box/building-an-engine-with-the-veritone-engine-toolkit-1c00f964f2fe">Building an engine with the Veritone Engine Toolkit</a> was originally published in <a href="https://medium.com/machine-box">Machine Box</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Broken windows theory: why code quality and simplistic design are non-negotiable]]></title>
            <link>https://medium.com/@matryer/broken-windows-theory-why-code-quality-and-simplistic-design-are-non-negotiable-e37f8ce23dab?source=rss-f25c357b8e4c------2</link>
            <guid isPermaLink="false">https://medium.com/p/e37f8ce23dab</guid>
            <category><![CDATA[golang]]></category>
            <category><![CDATA[software-design]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[architecture]]></category>
            <dc:creator><![CDATA[Mat Ryer]]></dc:creator>
            <pubDate>Wed, 27 Mar 2019 17:01:03 GMT</pubDate>
            <atom:updated>2019-03-29T07:05:47.593Z</atom:updated>
            <content:encoded><![CDATA[<p>A friend once told me about an experiment where someone left a new car in the street — it remained there untouched for a week.</p><p>They repeated the experiment, but this time they made a deliberate crack in the windscreen of the car, and within a few days, the car was completely burnt out.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*JCTtb5wusTdFt6Q0neINSg.jpeg" /><figcaption>Because of a single crack, the car ended up getting destroyed. This is what will happen to your software if you’re not careful.</figcaption></figure><blockquote>The <strong>broken windows theory</strong> is a criminological theory that visible signs of crime, anti-social behavior, and civil disorder create an urban environment that encourages further crime and disorder, including serious crimes. The theory suggests that policing methods that target minor crimes such as vandalism, public drinking, and fare evasion help to create an atmosphere of order and lawfulness, thereby preventing more serious crimes. — <a href="https://en.wikipedia.org/wiki/Broken_windows_theory">Wikipedia</a></blockquote><p>Each person who damaged the car didn’t really change the state of it very much. The window was already cracked, so somebody just added another crack; no big deal. Compare this to making the first crack in an otherwise pristine car — the difference is significant.</p><p>Later, somebody else comes along and does some other small amount of damage to the already-damaged car, and so on until it is a wreck.</p><blockquote>If we think about the Broken windows theory when it comes to code and software design, it’s clear that the cost of maintaining quality is a worthwhile investment.</blockquote><p>If you work on a project that has flaky tests, then you’re more likely to add more flaky tests. If there is a hacky design, you’re more likely to hack more in. 
If there is a package called utils, you’re more likely to stick more utilities in there. If one of your HTTP handlers uses a global variable for some state, why wouldn’t yours? If anything, we’re taught that our projects should be self-similar and consistent — so maybe we’re even doing the <em>right</em> thing?</p><p>Each person who contributes to this mess isn’t really changing the system much, but they are headed towards a fire.</p><p>Like in the criminology theory, policing the small stuff and taking the time to keep the quality of your projects high ought to be one of the highest priorities of the team. This can include refactoring, developer documentation, good and clear Makefiles, simple package layout, good quality test code, and more. None of these things should be overdone; they should be as simple as they can be (but no simpler, right Einstein?).</p><h4>Isn’t perfection the enemy of progress?</h4><p>No project is going to be perfect, and there will always be things we’re not happy with as a team but that “just aren’t worth fixing.” It’s important to be honest (and positive) about this fact. Complaining might be good therapy for developers, but a team deciding to leave something the way it is for the sake of progress is a respectable position to take, and it should be a positive experience and an excuse for learning.</p><p>Limiting scope is a great way to ensure teams have the time they need to get the important stuff right, and this needs to be understood across the entire company (not just the dev team). Maybe our software doesn’t do everything, but what it does do, it does well.</p><h4>Conclusion</h4><p>Remember the Broken windows theory when you next mark a ticket as ‘done’. Have you just made the first crack in an otherwise pristine window? If so, get some help from the team and take a little more time to refactor it until you’re happy that it meets your standards.</p><p>When reviewing code, police the small stuff. 
Pay attention to the implied decisions and make sure you’re happy with the decisions (overt or otherwise) that are being taken along the way.</p><h4>References</h4><ul><li>Broken windows theory on Wikipedia <a href="https://en.wikipedia.org/wiki/Broken_windows_theory">https://en.wikipedia.org/wiki/Broken_windows_theory</a></li><li><a href="https://blog.codinghorror.com/the-broken-window-theory/">https://blog.codinghorror.com/the-broken-window-theory/</a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=e37f8ce23dab" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Machine Box is growing up: Why we joined Veritone]]></title>
            <link>https://medium.com/machine-box/machine-box-is-growing-up-why-we-joined-veritone-38eedfee7b66?source=rss-f25c357b8e4c------2</link>
            <guid isPermaLink="false">https://medium.com/p/38eedfee7b66</guid>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[startup]]></category>
            <dc:creator><![CDATA[Mat Ryer]]></dc:creator>
            <pubDate>Fri, 21 Sep 2018 19:17:07 GMT</pubDate>
            <atom:updated>2018-10-01T15:50:46.786Z</atom:updated>
            <content:encoded><![CDATA[<blockquote>Machine Box has joined <a href="https://www.veritone.com/?utm_source=machinebox&amp;utm_medium=machinebox&amp;utm_campaign=machinebox&amp;utm_term=machinebox&amp;utm_content=machinebox">Veritone</a>.</blockquote><p>We started Machine Box because we wanted to open the potential of machine learning to developers of all skill levels, and to teams who don’t have exorbitant budgets or armies of people to throw at a problem.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*740jO3V7dckLjWJ26e14tA.png" /><figcaption>Machina is sporting Veritone colours to celebrate the coming together of the two companies.</figcaption></figure><p>Our customers have built some very impressive and innovative things, and occasionally they have come to us asking, “Ok, we’ve integrated it, now how do we go into production?” — We didn’t have a great answer, until now.</p><p><a href="https://www.veritone.com/?utm_source=machinebox&amp;utm_medium=machinebox&amp;utm_campaign=machinebox&amp;utm_term=machinebox&amp;utm_content=machinebox">Veritone’s aiWARE operating system</a> is a production-ready platform where customers can build workflows around their machine learning problems, and most Machine Box technology is already integrated and ready to use today.</p><p>Over the next few months, we’ll be working to more tightly integrate our technologies to offer a seamless experience and provide simple, turnkey solutions to big ML problems at scale.</p><p>At Machine Box we’ve been focussed on solving the ML pieces inside Docker containers, but Veritone has a much wider focus and more ambitious goals. 
Customers can pick and choose the right mix of services, APIs, UIs, and third-party integrations to best suit their needs.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*0WLVIFYCILo75HwaRbK9QQ.jpeg" /><figcaption>Veritone’s CMS tools let you manage content and build sophisticated workflows around your content to solve real world problems with cutting edge technologies.</figcaption></figure><p>To learn more about Veritone, read about their <a href="https://www.veritone.com/ai-solutions/?utm_source=machinebox&amp;utm_medium=machinebox&amp;utm_campaign=machinebox&amp;utm_term=machinebox&amp;utm_content=machinebox">AI Solutions with Actionable Insights</a> or <a href="https://www.veritone.com/demo/?utm_source=machinebox&amp;utm_medium=machinebox&amp;utm_campaign=machinebox&amp;utm_term=machinebox&amp;utm_content=machinebox">book a demo</a> to have it all explained in person.</p><h3>What happens to my subscription?</h3><p><strong>Don’t worry, it’s business as usual.</strong> In fact, we’re working to see if we can take this opportunity to make our Machine Box offerings even better. 
Stay tuned for announcements on that front over the next few weeks.</p><p>Meanwhile, we hope this news gives you the confidence to push ahead with your projects, knowing that Machine Box is now well established and more stable than ever before.</p><h3>Questions?</h3><p>If you have any questions or concerns, please don’t hesitate to <a href="https://machinebox.io/contact">contact us</a>, or <a href="https://www.veritone.com/about/contact-us/">Veritone</a>.</p><h3>What is Machine Box?</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*GPdHUaxzqp2dJYd0l_hwcA.jpeg" /></figure><p><a href="https://machinebox.io/?utm_source=JoiningVeritone&amp;utm_medium=JoiningVeritone&amp;utm_campaign=JoiningVeritone&amp;utm_term=JoiningVeritone&amp;utm_content=JoiningVeritone">Machine Box</a> puts state of the art <strong>machine learning</strong> capabilities into <strong>Docker containers</strong> so developers like you can easily incorporate natural language processing, facial detection, object recognition, etc. into your own apps very quickly.</p><p>The boxes are <strong>built for scale</strong>, so when your app really takes off just add more boxes horizontally, to infinity and beyond. Oh, and it’s <strong>way cheaper</strong> than any of the cloud services (<a href="https://hackernoon.com/which-face-recognition-technology-performs-best-c2c839eb04e7">and they might be better</a>)… and <strong>your data doesn’t leave your infrastructure</strong>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=38eedfee7b66" width="1" height="1" alt=""><hr><p><a href="https://medium.com/machine-box/machine-box-is-growing-up-why-we-joined-veritone-38eedfee7b66">Machine Box is growing up: Why we joined Veritone</a> was originally published in <a href="https://medium.com/machine-box">Machine Box</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Deploy Docker containers in Google Cloud Platform]]></title>
            <link>https://medium.com/machine-box/deploy-docker-containers-in-google-cloud-platform-4b921c77476b?source=rss-f25c357b8e4c------2</link>
            <guid isPermaLink="false">https://medium.com/p/4b921c77476b</guid>
            <category><![CDATA[devops]]></category>
            <category><![CDATA[google-cloud-platform]]></category>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[hosting]]></category>
            <category><![CDATA[docker]]></category>
            <dc:creator><![CDATA[Mat Ryer]]></dc:creator>
            <pubDate>Sat, 26 May 2018 14:33:07 GMT</pubDate>
            <atom:updated>2018-09-08T07:19:21.452Z</atom:updated>
            <content:encoded><![CDATA[<p>TL;DR: Check out a <a href="https://github.com/machinebox/ops/tree/master/google-cloud-platform">working example project</a> of how to deploy Docker containers (boxes) to Google Cloud Platform.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*M2GsMAC_UosMGRnlFjHWZA.png" /></figure><p>Delivering complex software solutions inside a neat container is exactly the promise of Docker, and Machine Box has truly benefited from this.</p><p>This article shows you how to spin up <a href="https://machinebox.io/?utm_source=matblog%20machine%20box%20in%20digital%20ocean&amp;utm_medium=matblog%20machine%20box%20in%20digital%20ocean&amp;utm_campaign=matblog%20machine%20box%20in%20digital%20ocean&amp;utm_term=matblog%20machine%20box%20in%20digital%20ocean&amp;utm_content=matblog%20machine%20box%20in%20digital%20ocean">Machine Box</a> in Google Cloud Platform, but <strong>the technique works for any Docker image you want to deploy</strong>.</p><p>You will need:</p><ul><li><a href="https://cloud.google.com/sdk/">Google Cloud SDK</a> (the gcloud command)</li><li>Your favourite text editor</li><li>A terminal</li></ul><h3>1. Create an account and sign in</h3><p>Head over to the <a href="https://console.cloud.google.com/">Google Cloud Platform Console</a> and sign into it. You’ll need a Google account to agree to some terms and conditions etc.</p><h3>2. Create a project</h3><p>In the project dropdown, select <strong>Create new project</strong>. 
First-time users will likely be guided through this by a wizard.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ngYPPIXol9sxV1XN_XCmUQ.png" /><figcaption>Creating a new project in Google Cloud Platform Console</figcaption></figure><p>Choose a project name and set up a billing account before clicking <strong>Create</strong>.</p><blockquote>TIP: Keep a note of the project ID, which might differ from the project name.</blockquote><p>While your project is being set up, you can get the project files ready for deployment.</p><h3>3. Your project files</h3><p>Create a folder on your development machine, ideally matching the name of the project you just created.</p><blockquote>TIP: These files are expected to go into your source code repository, but follow best practices about storing secrets in there.</blockquote><p>Create two text files called app.yaml and Dockerfile.</p><h4>Dockerfile</h4><p>The Dockerfile describes the container image you want to deploy. In the simplest case, just specify which image to use:</p><pre>FROM machinebox/facebox</pre><p>In the case of Facebox, we also need to include the MB_KEY environment variable, which we can do in this file:</p><pre>FROM machinebox/facebox<br>ENV MB_KEY=YOUR_KEY_HERE</pre><blockquote>TIP: This is just a normal Dockerfile, so you can do whatever you need to here.</blockquote><h4>app.yaml</h4><p>The app.yaml file is where we will configure the deployment.</p><pre>runtime: custom<br>env: flex<br>service: default<br>threadsafe: yes</pre><pre>network:<br>  forwarded_ports:<br>    - 80:8080</pre><pre>automatic_scaling:<br>  min_num_instances: 1<br>  max_num_instances: 10<br>  cool_down_period_sec: 120 # default value<br>  cpu_utilization:<br>    target_utilization: 0.5</pre><pre>resources:<br>  cpu: 1<br>  memory_gb: 2<br>  disk_size_gb: 10</pre><pre># volumes:<br># - name: ramdisk1<br>#   volume_type: tmpfs<br>#   size_gb: 0.5</pre><p>This file describes an autoscaling image (between 1 and 10 
instances at a time) with a single CPU, 2GB of RAM and a 10GB disk, forwarding the Machine Box (container) port 8080 to 80, so it’s accessible on the web.</p><blockquote>TIP: You can dive a bit deeper on what you can do with the app.yaml file by checking out the <a href="https://cloud.google.com/appengine/docs/flexible/custom-runtimes/configuring-your-app-with-app-yaml">official documentation</a>.</blockquote><h3>4. Deploy</h3><p>Deploy your first version with the following command line in the terminal:</p><pre>gcloud app deploy app.yaml -v v1</pre><p>After a while, the image will be deployed and available.</p><h3>5. Access your container</h3><p>Head over to https://your-project-id.appspot.com/ to access the image.</p><p>In our case, we see that the Facebox console is available:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*RffHNbyY82YdID4BnnWmaQ.png" /><figcaption>Facebox running in Google Cloud Platform</figcaption></figure><h3>Introducing Machine Box</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*GPdHUaxzqp2dJYd0l_hwcA.jpeg" /></figure><p><a href="https://machinebox.io/?utm_source=matblog%20machine%20box%20in%20digital%20ocean&amp;utm_medium=matblog%20machine%20box%20in%20digital%20ocean&amp;utm_campaign=matblog%20machine%20box%20in%20digital%20ocean&amp;utm_term=matblog%20machine%20box%20in%20digital%20ocean&amp;utm_content=matblog%20machine%20box%20in%20digital%20ocean">Machine Box</a> puts state of the art <strong>machine learning</strong> capabilities into <strong>Docker containers</strong> so developers like you can easily incorporate natural language processing, facial detection, object recognition, etc. into your own apps very quickly.</p><p>The boxes are <strong>built for scale</strong>, so when your app really takes off just add more boxes horizontally, to infinity and beyond. 
Oh, and it’s <strong>way cheaper</strong> than any of the cloud services (<a href="https://hackernoon.com/which-face-recognition-technology-performs-best-c2c839eb04e7">and they might be better</a>)… and <strong>your data doesn’t leave your infrastructure</strong>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=4b921c77476b" width="1" height="1" alt=""><hr><p><a href="https://medium.com/machine-box/deploy-docker-containers-in-google-cloud-platform-4b921c77476b">Deploy Docker containers in Google Cloud Platform</a> was originally published in <a href="https://medium.com/machine-box">Machine Box</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Deploy Docker containers into Digital Ocean]]></title>
            <link>https://medium.com/machine-box/deploy-machine-box-in-digital-ocean-385265fbeafd?source=rss-f25c357b8e4c------2</link>
            <guid isPermaLink="false">https://medium.com/p/385265fbeafd</guid>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[digitalocean]]></category>
            <category><![CDATA[docker]]></category>
            <category><![CDATA[deployment]]></category>
            <dc:creator><![CDATA[Mat Ryer]]></dc:creator>
            <pubDate>Wed, 23 May 2018 20:15:38 GMT</pubDate>
            <atom:updated>2018-06-15T06:41:54.547Z</atom:updated>
            <content:encoded><![CDATA[<p>Machine Box delivers its services via Docker containers, and Digital Ocean is a great place to host these containers in production.</p><p>This guide will walk you through the steps required to deploy Facebox with Digital Ocean.</p><blockquote>TIP: Before you start, it’s important to remember that hosting on Digital Ocean comes with a small cost — it’s very competitive, but it isn’t free.</blockquote><h4>1. Create a Digital Ocean account</h4><p>Head over to <a href="https://www.digitalocean.com/">https://www.digitalocean.com/</a> and create an account.</p><h4>2. Generate a personal access token</h4><p>To access Digital Ocean’s services via the API, we are going to need a <strong>Personal access token</strong>.</p><ul><li>Click <strong>API</strong> in the navigation</li><li>In the <strong>Personal Access Tokens</strong> section, click <strong>Generate new token</strong></li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*eD4xvTQdlDWGhl_ehP-uwQ.png" /><figcaption>You can do a lot via the Digital Ocean web UI: This is how you create <strong><em>Personal access tokens</em></strong>.</figcaption></figure><ul><li>Give your token a name, and click the <strong>Generate Token</strong> button</li></ul><p>Once it’s created, copy it and store it in an environment variable.</p><h4>3. Create a droplet</h4><p>In Digital Ocean, the machines are called droplets. 
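As a minimal sketch of the environment-variable step above (assuming a POSIX shell; the DO_TOKEN name is illustrative, and the value stays a placeholder):

```shell
# Store the Personal access token so later commands can reference it
# rather than pasting the raw token inline.
export DO_TOKEN="PERSONAL_ACCESS_TOKEN"

# Confirm it is set before using it:
[ -n "$DO_TOKEN" ] && echo "token is set"
```

The docker-machine command in the next step could then take --digitalocean-access-token "$DO_TOKEN" instead of the raw token.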
And we are going to need to create one so that we can run our Machine Box docker images.</p><p>We will use the docker-machine command, which should already be installed on your development machine if you have been working with Docker.</p><blockquote>TIP: Type docker-machine into a terminal, if you get an error you may need to <a href="https://docs.docker.com/machine/install-machine/">install it yourself</a>.</blockquote><p>Type this single line into a terminal remembering to replace the PERSONAL_ACCESS_TOKEN string with the Personal access token you created in Step 2:</p><pre>docker-machine create --digitalocean-size &quot;s-2vcpu-4gb&quot; --driver digitalocean --digitalocean-access-token PERSONAL_ACCESS_TOKEN facebox-prod-1</pre><ul><li>--digitalocean-size &quot;s-2vcpu-4gb&quot; creates an image with 2 CPUs and 4 GB of RAM — you can use any size you feel is appropriate</li><li>facebox-prod-1 is the name of our droplet</li></ul><p>All being well, this will create a droplet. You can verify this by typing:</p><pre>docker-machine ls</pre><p>You should see output something similar to:</p><pre>facebox-prod-1 * digitalocean Running tcp://152.63.254.5:2376 v18.05.0-ce</pre><h4>4. Connect to the new droplet</h4><p>To connect to the droplet, run the following command:</p><pre>eval $(docker-machine env facebox-prod-1)</pre><p>This is actually just setting some environment variables that the docker-machine and docker commands will use.</p><blockquote>TIP: You can run just docker-machine env facebox-prod-1 to see what the values are.</blockquote><h4>5. 
Start a Machine Box service</h4><p>Now that our droplet is running and we are connected to it, it is time to run one of the Machine Box boxes.</p><p>In this example, I am going to run <strong>Facebox</strong> — but you can run <a href="https://machinebox.io/?utm_source=matblog%20machine%20box%20in%20digital%20ocean&amp;utm_medium=matblog%20machine%20box%20in%20digital%20ocean&amp;utm_campaign=matblog%20machine%20box%20in%20digital%20ocean&amp;utm_term=matblog%20machine%20box%20in%20digital%20ocean&amp;utm_content=matblog%20machine%20box%20in%20digital%20ocean#boxes">any of the other boxes</a>.</p><pre>docker run -d -p 80:8080 -e &quot;MB_WORKERS=2&quot; -e &quot;MB_KEY=$MB_KEY&quot; machinebox/facebox</pre><p>This differs slightly from how you might run Facebox in development, for a few reasons:</p><ul><li>We are using the -d flag to tell Docker to run this as a daemon (background task) — you can omit this while you’re setting things up, and your terminal will stay connected as it does during development</li><li>We are mapping the box port 8080 (where the services run) to port 80 — this is what makes it accessible on the web without specifying a port number</li><li>We are using the -e flag to set the MB_WORKERS environment variable to 2 — to match the number of CPUs we have</li></ul><blockquote>WATCH OUT: If you see an error complaining about the MB_KEY, ensure you have assigned the environment variable with the value from the <a href="https://machinebox.io/account?utm_source=matblog%20machine%20box%20in%20digital%20ocean&amp;utm_medium=matblog%20machine%20box%20in%20digital%20ocean&amp;utm_campaign=matblog%20machine%20box%20in%20digital%20ocean&amp;utm_term=matblog%20machine%20box%20in%20digital%20ocean&amp;utm_content=matblog%20machine%20box%20in%20digital%20ocean">Machine Box website</a>.</blockquote><h4>6.
Get the machine’s IP address</h4><p>In the same terminal, run the following command to get the machine’s public IP address:</p><pre>docker-machine ip facebox-prod-1<br>152.65.253.2</pre><p>In a web browser, head over to that IP address to confirm that Facebox is indeed running:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*lLtBDavjD_OFlwYSNbVSXQ.png" /><figcaption>After a few steps, Facebox is running in the cloud.</figcaption></figure><p>Now you can update your client code to use the new endpoint instead of localhost.</p><h3>That’s all folks</h3><p>We have successfully put Facebox into production on Digital Ocean.</p><p>Specifically, we:</p><ol><li>Created a Digital Ocean account and obtained a Personal access token</li><li>Created a droplet to host our box</li><li>Spun up an instance of Facebox</li><li>Made the box publicly accessible</li></ol><h4>Turning the box off</h4><p>The easiest way to turn a box off is to sign into the Digital Ocean console and access the <strong>Droplets</strong> section; from there, you can <strong>Destroy</strong> it using the UI.</p><h4>What next?</h4><p>Once you are up and running, you might want to think about the following:</p><ul><li>For scale and redundancy, consider using a <a href="https://www.digitalocean.com/products/load-balancer/">load balancer</a> to split traffic over <a href="https://blog.machinebox.io/how-to-configure-multiple-instances-of-facebox-890502ab8a43">many instances of Facebox</a></li><li>Secure the box using a <a href="https://www.digitalocean.com/community/tutorials/an-introduction-to-digitalocean-cloud-firewalls">firewall</a></li><li>You’ll need to do a little more work to support SSL/HTTPS — probably with an nginx image that proxies the traffic to the appropriate endpoints</li></ul><h3>More about Machine Box?</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*GPdHUaxzqp2dJYd0l_hwcA.jpeg" /></figure><p><a
href="https://machinebox.io/?utm_source=matblog%20machine%20box%20in%20digital%20ocean&amp;utm_medium=matblog%20machine%20box%20in%20digital%20ocean&amp;utm_campaign=matblog%20machine%20box%20in%20digital%20ocean&amp;utm_term=matblog%20machine%20box%20in%20digital%20ocean&amp;utm_content=matblog%20machine%20box%20in%20digital%20ocean">Machine Box</a> puts state of the art <strong>machine learning</strong> capabilities into <strong>Docker containers</strong> so developers like you can easily incorporate natural language processing, facial detection, object recognition, etc. into your own apps very quickly.</p><p>The boxes are <strong>built for scale</strong>, so when your app really takes off just add more boxes horizontally, to infinity and beyond. Oh, and it’s <strong>way cheaper</strong> than any of the cloud services (<a href="https://hackernoon.com/which-face-recognition-technology-performs-best-c2c839eb04e7">and they might be better</a>)… and <strong>your data doesn’t leave your infrastructure</strong>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=385265fbeafd" width="1" height="1" alt=""><hr><p><a href="https://medium.com/machine-box/deploy-machine-box-in-digital-ocean-385265fbeafd">Deploy Docker containers into Digital Ocean</a> was originally published in <a href="https://medium.com/machine-box">Machine Box</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
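As a small illustration of the final step, a Go client only needs the droplet's IP to reach the box, because the container's port 8080 was mapped to port 80. This is a minimal sketch; the IP address and the /facebox/check path are illustrative assumptions, not taken from any SDK:

```go
package main

import (
	"fmt"
	"strings"
)

// boxURL builds a full endpoint URL for a box deployed on a droplet.
// Because the container's port 8080 was mapped to port 80 in step 5,
// no port number is needed in the URL.
func boxURL(dropletIP, path string) string {
	if !strings.HasPrefix(path, "/") {
		path = "/" + path
	}
	return "http://" + dropletIP + path
}

func main() {
	// The IP as returned by `docker-machine ip facebox-prod-1` in step 6.
	ip := "152.65.253.2"
	fmt.Println(boxURL(ip, "/facebox/check")) // http://152.65.253.2/facebox/check
}
```

Swapping the base URL like this is typically the only change needed to move a client from development (localhost) to the droplet.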
        </item>
        <item>
            <title><![CDATA[How I write Go HTTP services after seven years]]></title>
            <link>https://medium.com/@matryer/how-i-write-go-http-services-after-seven-years-37c208122831?source=rss-f25c357b8e4c------2</link>
            <guid isPermaLink="false">https://medium.com/p/37c208122831</guid>
            <category><![CDATA[web-services]]></category>
            <category><![CDATA[golang]]></category>
            <category><![CDATA[https]]></category>
            <category><![CDATA[tips-and-tricks]]></category>
            <category><![CDATA[services]]></category>
            <dc:creator><![CDATA[Mat Ryer]]></dc:creator>
            <pubDate>Wed, 09 May 2018 15:08:52 GMT</pubDate>
            <atom:updated>2020-02-12T15:09:36.153Z</atom:updated>
            <content:encoded><![CDATA[<p>UPDATE: You can watch a video of me giving this talk at Gophercon 2019:</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FrWBSMsLG8po%3Ffeature%3Doembed&amp;url=http%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DrWBSMsLG8po&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FrWBSMsLG8po%2Fhqdefault.jpg&amp;key=d04bfffea46d4aeda930ec88cc64b87c&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/6f02b4131136eca9d1633fd2ecaa7a72/href">https://medium.com/media/6f02b4131136eca9d1633fd2ecaa7a72/href</a></iframe><p>An updated version of this post is available on the <a href="https://pace.dev/blog/2018/05/09/how-I-write-http-services-after-eight-years">Pace blog</a>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=37c208122831" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Forget A/B testing: Order elements using personalized machine learning recommendations in…]]></title>
            <link>https://medium.com/machine-box/forget-a-b-testing-order-elements-using-personalized-machine-learning-recommendations-in-7703233cc4ef?source=rss-f25c357b8e4c------2</link>
            <guid isPermaLink="false">https://medium.com/p/7703233cc4ef</guid>
            <category><![CDATA[javascript]]></category>
            <category><![CDATA[recommendations]]></category>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[personalization]]></category>
            <category><![CDATA[html]]></category>
            <dc:creator><![CDATA[Mat Ryer]]></dc:creator>
            <pubDate>Wed, 18 Apr 2018 13:29:27 GMT</pubDate>
            <atom:updated>2018-04-18T13:29:27.203Z</atom:updated>
            <content:encoded><![CDATA[<h3>Forget A/B testing: Order elements using personalized machine learning recommendations in JavaScript</h3><p>There are some great tools out there for A/B testing that let you automatically present a selection of different experiences — whether layout, colors, fonts, or content. They measure which is most effective, and you make a final decision on which <em>one</em> to go with.</p><blockquote>But what if we didn’t have to settle for one experience? What if we could use Machine Learning to learn which experience each user is most likely to engage with, and present different experiences to different kinds of people?</blockquote><h3>Personalised experience</h3><p>Let’s say we have a list of choices to present to the user. We want to present the most relevant content first so that our user doesn’t get bored and instead engages with our app or website.</p><p>The choices could be anything — news articles, songs, pictures, products, tweets, or whatever your particular data is.</p><blockquote>The most important thing about applying Machine Learning is to think carefully about your goals — what do you want to achieve?
While it can be fun, applying new tech for its own sake is not awesome.</blockquote><blockquote>Given these choices, we want to change the order based on the persona of the visitor.</blockquote><p>Choices A to F would be displayed differently to these three groups of users:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*smjnEezONNYoW7gkUpervw.png" /><figcaption>Each type of user gets their own experience — the items are ordered based on what each user is most likely to click on.</figcaption></figure><h3>Machine Learning can learn about your users</h3><p>The first thing we need to do is learn about our users.</p><p>We can start by randomly changing the order of the choices and tracking engagement (probably clicks), just like A/B testing does.</p><p>The difference is that we will also capture some measurable data about the user, which can start to form a model of their interests. For example, we might know their age, or location, or previous purchasing history. The goal here is to think about which properties might be important when tailoring the experience.</p><blockquote>Things don’t have to get creepy — use data that the user knows you have and doesn’t mind you using.
And always offer at least an opt-out, if not an explicit opt-in.</blockquote><p>We might also decide to give the model other inputs, such as the time of day, or even the current weather, if they are likely to influence our users’ decisions.</p><h4>The learning applies to other users too</h4><p>The insights we get from our users can be applied to other users, and even to brand new users who have never used our app before.</p><p>For example, given the following simple table of data about some users:</p><pre>Name    City     AgeGroup   Loved<br>--------------------------------------------------------------------<br>Mat     London   30-40      The Matrix + 28 Days Later<br>David   London   30-40      The Matrix + 28 Days Later<br>Piotr   Warsaw   20-30      The Matrix + Jack Strong<br>Paweł   Warsaw   20-30      The Matrix + Jack Strong<br>Bill    London   60-70      Jack Strong + Das Boot</pre><p>What films would you suggest to Piotr and Paweł?</p><p>Given their age group, and the fact that they liked The Matrix, it is probably sensible to recommend 28 Days Later to them.</p><p>If a new user comes along with the following properties, which films would you suggest?</p><pre>Name    City     AgeGroup   Loved<br>--------------------------------------------------------------------<br>Peter   London   60-70      ?</pre><p>Since he lives in London and is in the same age group as Bill, we might decide to recommend Jack Strong and Das Boot.</p><p>Of course, in the real world it’s nowhere near this simple and the patterns are likely to be much more nuanced — never mind once you introduce big-data scale.</p><p>This is where Machine Learning can do a better job than humans.</p><h4>Reward the model</h4><p>Once we can make predictions, we need to track whether they’re successful or not.</p><p>Whenever we get something right (like a user clicks a choice, or watches a movie, or buys a product) we will reward the model to reinforce the learning.</p><p>Our model will
then notice patterns, and make better predictions in the future.</p><h3>Meet Suggestionbox</h3><p>Suggestionbox is a tool from Machine Box that provides a Machine Learning model for this very use case. You use a simple JSON API to interact with the box.</p><p>Typing this single line into a terminal will download and run Suggestionbox for you (assuming you have <a href="https://docs.docker.com/install/#desktop">installed Docker</a>):</p><pre>docker run -p 8080:8080 -e &quot;MB_KEY=$MB_KEY&quot; machinebox/suggestionbox</pre><blockquote>If you don’t have an MB_KEY — which you need to unlock the box — you can grab one from the <a href="https://machinebox.io/account?utm_source=matblog-18Apr2018&amp;utm_medium=matblog-18Apr2018&amp;utm_campaign=matblog-18Apr2018&amp;utm_term=matblog-18Apr2018&amp;utm_content=matblog-18Apr2018">Machine Box website</a>.</blockquote><p>We will use it to build a little demo to show personalization working.</p><h3>Create a model</h3><p>While you can create models via the API, it’s much easier to head to http://localhost:8080 and create the model using the Console.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*M0pVGpImdqqRi3uS0HhpAg.png" /><figcaption>Suggestionbox ships with a built-in UI that lets you create models and run simulations.</figcaption></figure><p>I am going to create a model called <strong>Genres</strong>, with five choices.</p><h3>Make predictions</h3><p>Our model is now ready to start making predictions.</p><p>Of course, they’re not going to be very well informed initially, but very quickly Suggestionbox will notice patterns in the rewards, and we’ll start to see the predictions get better and better.</p><h4>Try the free simulator</h4><p>Suggestionbox ships with a built-in simulator, so you can actually simulate real user activity to see how the model might take shape. 
If you want to try this, you can do so from the Console at http://localhost:8080/console.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ANcuv5guWd0u2hgaBGbeEw.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*3jx-mzlJP3I4lG3YGES-4Q.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Dj4WbSkQm5bsMJ2dIqdLig.png" /><figcaption>The Suggestionbox simulator lets you simulate real traffic to see how your model might perform.</figcaption></figure><h3>Wiring up our web page</h3><p>There are two things our JavaScript needs to do:</p><ol><li>Ask the model for a prediction, and use that prediction to decide on the order of elements on our page,</li><li>When the user successfully interacts with an element, reward the model so it can learn.</li></ol><p>Assuming we have a user object that contains some relevant properties:</p><pre>var user = {<br>    age: 30,<br>    city: &quot;London&quot;,<br>    interests: [&quot;music&quot;, &quot;movies&quot;, &quot;politics&quot;]<br>}</pre><blockquote>CAUTION: The JavaScript on this page is simple and bare-bones — you should use whatever UI technology you’re most familiar with instead.</blockquote><h4>Make a prediction</h4><p>To make a prediction, we might do something like this:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/438ce184a74194cfce6c71746e358c38/href">https://medium.com/media/438ce184a74194cfce6c71746e358c38/href</a></iframe><p>This code turns our user object into a prediction request and makes an AJAX request to the /suggestionbox/models/{model_id}/predict endpoint.</p><blockquote>The makeRequest helper can be replaced with the $.ajax call in jQuery, or whatever remote data API your UI framework provides.</blockquote><p>The results will come back looking something like this:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a
href="https://medium.com/media/6b789cc47bf691bb45d0e26807dac00f/href">https://medium.com/media/6b789cc47bf691bb45d0e26807dac00f/href</a></iframe><p>Each choice is returned with an order and a score. The order is what we care about most, because it is the order in which we should present the choices to the user.</p><blockquote>The score is useful for debugging, but you shouldn’t order by it. Occasionally Suggestionbox will try novel things to see if there is any more learning it can do — in these cases, the first element in the array won’t necessarily have the highest score.</blockquote><p>The reward_id values are used to issue rewards if the user engages with any of these options.</p><h4>Reorder the elements</h4><p>Assuming we have the elements in a container, we can just append them back to it to control the order.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/96c36be1112be7d1d23dcc071676600c/href">https://medium.com/media/96c36be1112be7d1d23dcc071676600c/href</a></iframe><p>If elements are appended to the container they’re currently in, they’ll essentially just move to the end. We can use this to easily set the order just by iterating over the choices:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/cfdc6aa8990f6e462f947ae2d4a6a3eb/href">https://medium.com/media/cfdc6aa8990f6e462f947ae2d4a6a3eb/href</a></iframe><p>At the same time, we are going to capture the reward IDs in an object keyed by the choice ID.
This will make it easier to look up reward IDs later.</p><h4>Reward the model</h4><p>When the user clicks one of the choices, we’ll call this function, which makes an AJAX request to reward the model:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/86bd973be9d8816629011786529a9940/href">https://medium.com/media/86bd973be9d8816629011786529a9940/href</a></iframe><p>This function just creates a reward request object that contains the appropriate ID (we look it up via our rewardIDs object) and a value, which is 1 in most cases.</p><p>We make a POST request to /suggestionbox/models/{model_id}/rewards, before going about our business.</p><blockquote>The model will learn that it did a good thing when it suggested that choice, and this is how it improves over time.</blockquote><h4>Full example</h4><p>See a full example of this by checking out the <a href="https://github.com/machinebox/toys/tree/master/suggestpage">suggestpage toy on GitHub</a>.</p><h3>Need help implementing something like this?</h3><p>Our team hang out all day in the <a href="https://machineboxslack.herokuapp.com/">Machine Box Community Slack, which you can get an invitation to today</a>.</p><h3>What is Machine Box?</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*GPdHUaxzqp2dJYd0l_hwcA.jpeg" /></figure><p><a href="https://machinebox.io/?utm_source=matblog-18Apr2018&amp;utm_medium=matblog-18Apr2018&amp;utm_campaign=matblog-18Apr2018&amp;utm_term=matblog-18Apr2018&amp;utm_content=matblog-18Apr2018">Machine Box</a> puts state of the art <strong>machine learning</strong> capabilities into <strong>Docker containers</strong> so developers like you can easily incorporate natural language processing, facial detection, object recognition, etc. into your own apps very quickly.</p><p>The boxes are <strong>built for scale</strong>, so when your app really takes off just add more boxes horizontally, to infinity and beyond.
Oh, and it’s <strong>way cheaper</strong> than any of the cloud services (<a href="https://hackernoon.com/which-face-recognition-technology-performs-best-c2c839eb04e7">and they might be better</a>)… and <strong>your data doesn’t leave your infrastructure</strong>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=7703233cc4ef" width="1" height="1" alt=""><hr><p><a href="https://medium.com/machine-box/forget-a-b-testing-order-elements-using-personalized-machine-learning-recommendations-in-7703233cc4ef">Forget A/B testing: Order elements using personalized machine learning recommendations in…</a> was originally published in <a href="https://medium.com/machine-box">Machine Box</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
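The client-side flow above can also be sketched outside the browser. Here is a rough Go version that builds a prediction request body from user properties and orders the returned choices by their order field rather than their score. The JSON field names are assumptions based on the description in the article, not the documented Suggestionbox API:

```go
package main

import (
	"encoding/json"
	"fmt"
	"sort"
)

// Feature is one thing we know about the user. The key/type/value shape
// mirrors the inputs format used by other Machine Box boxes; treat the
// exact field names as assumptions, not the documented API.
type Feature struct {
	Key   string `json:"key"`
	Type  string `json:"type"`
	Value string `json:"value"`
}

// Choice is one suggested option in a prediction response: an ID, the
// position it should be shown in, a score, and a reward ID to send back
// if the user engages with it.
type Choice struct {
	ID       string  `json:"id"`
	Order    int     `json:"order"`
	Score    float64 `json:"score"`
	RewardID string  `json:"reward_id"`
}

// orderChoices sorts choices by their order field and returns the IDs in
// display order. It deliberately ignores the score: Suggestionbox
// sometimes tries novel suggestions whose score is not the highest.
func orderChoices(cs []Choice) []string {
	sort.Slice(cs, func(i, j int) bool { return cs[i].Order < cs[j].Order })
	ids := make([]string, len(cs))
	for i, c := range cs {
		ids[i] = c.ID
	}
	return ids
}

func main() {
	// The user object from the article, expressed as prediction inputs.
	features := []Feature{
		{Key: "age", Type: "number", Value: "30"},
		{Key: "city", Type: "keyword", Value: "London"},
	}
	body, _ := json.Marshal(map[string][]Feature{"inputs": features})
	fmt.Println(string(body)) // body to POST to .../models/{model_id}/predict

	// A response shaped like the one described in the article.
	resp := []Choice{
		{ID: "politics", Order: 1, Score: 0.2, RewardID: "r2"},
		{ID: "music", Order: 0, Score: 0.7, RewardID: "r1"},
	}
	fmt.Println(orderChoices(resp)) // [music politics]
}
```

Note that orderChoices sorts by order, not score, for the reason the article gives: novel suggestions will not necessarily carry the highest score.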
        </item>
        <item>
            <title><![CDATA[Detect fake news by building your own classifier]]></title>
            <link>https://medium.com/machine-box/detect-fake-news-by-building-your-own-classifier-31e516418b1d?source=rss-f25c357b8e4c------2</link>
            <guid isPermaLink="false">https://medium.com/p/31e516418b1d</guid>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[fake-news]]></category>
            <category><![CDATA[classification]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[golang]]></category>
            <dc:creator><![CDATA[Mat Ryer]]></dc:creator>
            <pubDate>Fri, 30 Mar 2018 15:05:46 GMT</pubDate>
            <atom:updated>2018-04-17T14:44:02.973Z</atom:updated>
            <content:encoded><![CDATA[<h3>Build your own fake news detector using machine learning</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Tj8WDivxpm5KoToeFNJsNQ.jpeg" /></figure><p>It is a sign of the times that in 2018, the <a href="http://www.bbc.co.uk/news/uk-politics-42791218">UK Government established a new unit to tackle fake news</a>, and every day seems to reveal more about the dirty tricks played by companies like Cambridge Analytica, including <a href="https://www.theguardian.com/uk-news/2018/mar/19/cambridge-analytica-execs-boast-dirty-tricks-honey-traps-elections">deliberately spreading misinformation</a>, to try and influence electorates in favour of whoever happens to be paying them.</p><p>This is a problem because it hands more power to those with money — people and groups who already have plenty — and away from ordinary people, whom democracy is supposed to serve. Whether our political preferences happen to be served by recent fake news or not, everybody should be concerned about this.</p><p>It is also becoming increasingly difficult to doubt that certain foreign states are interfering in the elections and referendums of other countries in a significant way, by spreading fake news to those who are most willing to receive it.</p><p>The recent advances in the availability of artificial intelligence present us with an opportunity to tackle this problem at scale.</p><h3>Machine learning classifiers</h3><p>Classifiers are machine learning models that can take a set of examples and learn which class each example falls into. You can then ask them to classify content they have never seen before, with impressive accuracy.</p><p>For example, let’s say we want to teach a model to classify neutral vs biased article titles.
We would gather as many examples of each class as we could:</p><pre><strong>class    example</strong><br>neutral  China&#39;s space lab set for fiery re-entry<br>biased   10 Reasons Progressive Liberals Should Vote for Trump<br>neutral  Thousands of violent crime suspects released<br>biased   Right wing people are stupid<br>neutral  New £30m fund to help rough sleepers</pre><p>Then we would use a technology like <a href="https://machinebox.io/docs/classificationbox?utm_source=matblog-fakenews&amp;utm_medium=matblog-fakenews&amp;utm_campaign=matblog-fakenews&amp;utm_term=matblog-fakenews&amp;utm_content=matblog-fakenews">Classificationbox</a> to train a model based on these inputs.</p><p>How the model learns is very complicated, and extremely difficult to understand if you try and poke around at the internal data structures it uses, but we know it works because we can test it by sending in some of our examples without providing the class, and asking it to make a prediction.</p><p>The proportion of these test predictions it gets right is how we measure accuracy.</p><h4>Who decides what’s neutral or biased?</h4><p>It is unlikely we will all agree on the classes in the examples above, so how do we decide what’s neutral and what’s biased?</p><p>The model is only as good as the training data, and it will indeed be biased based on who gathered that data. It would of course be possible to train a model that thinks heavily biased statements are neutral, and vice versa.</p><p>In a real world situation, the answer is probably to have the training data open, so that everybody can contribute to and maintain it.
Otherwise you have to trust the people who trained the models.</p><p>Machine Box provides <a href="https://machinebox.io/docs/fakebox?utm_source=matblog-fakenews&amp;utm_medium=matblog-fakenews&amp;utm_campaign=matblog-fakenews&amp;utm_term=matblog-fakenews&amp;utm_content=matblog-fakenews">Fakebox</a>, a fake news classifier trained with significant datasets based on common sense classification of news articles. Any articles that could fairly be considered only slightly biased were not included, so the model does a good job in most people’s eyes.</p><p>In this article, we will see how to build our own Fakebox alternative with Classificationbox, using our own training data.</p><h3>Prepare training data</h3><p>The simplest way to prepare training data is to create a folder — if you’re doing this as a team, then a shared folder might work best (like Dropbox, or Google Drive, or some internal network location) so everybody can contribute.</p><p>Create a subfolder for each class, for example:</p><pre>/training-data<br>    /biased<br>    /neutral<br>    /satire<br>    /junksci</pre><p>Then put each example into a text file inside the appropriate folder, like this:</p><pre>/training-data<br>    /biased<br>        biased-example1.txt<br>        biased-example2.txt<br>        biased-example3.txt</pre><h4>Balance the classes</h4><p>One key principle is that each class should have more or less the same number of examples.
If you have more examples of junk science than anything else, the model will likely be biased towards that class.</p><h4>How much data?</h4><p>The best number of examples varies widely depending on a number of factors, but you should start with at least 100 examples in each class, and go from there.</p><h3>The teaching approach</h3><p>The best way to teach the classifier is to train it with 80% of your example data, keeping the rest back for validation.</p><p>In Classificationbox, this can be done with a simple HTTP POST request:</p><pre>POST /classificationbox/models/1/teach<br>{<br>  &quot;class&quot;: &quot;biased&quot;,<br>  &quot;inputs&quot;: [<br>    {&quot;key&quot;: &quot;content&quot;, &quot;type&quot;: &quot;text&quot;, &quot;value&quot;: &quot;...content...&quot;}<br>  ]<br>}</pre><blockquote>The slightly verbose inputs array and key/type/value objects exist because you can actually teach classifiers with a range of different data types, including numbers and even images.</blockquote><p>Then take the remaining 20% of the data, and ask Classificationbox to try and guess which class it belongs in.
You do this using a predict request:</p><pre>POST /classificationbox/models/1/predict<br>{<br>  &quot;inputs&quot;: [<br>    {&quot;key&quot;: &quot;content&quot;, &quot;type&quot;: &quot;text&quot;, &quot;value&quot;: &quot;...content...&quot;}<br>  ]<br>}</pre><p>Notice that we <strong>do not</strong> provide the class in this request.</p><blockquote>Classificationbox also supports multiple languages, which you can specify by using the language code in the type, for example text_sp for the Spanish language.</blockquote><h4>Meet the textclass tool</h4><p>The <a href="https://github.com/machinebox/toys/tree/master/textclass">textclass</a> tool is a simple Go program that walks a folder structure like the one described above, and performs the teaching of Classificationbox for you.</p><p>You can install it (assuming you have Go installed) by popping this into a terminal:</p><pre>go get github.com/machinebox/toys/textclass</pre><h3>Run Classificationbox</h3><p>You can run Classificationbox for free by going to your terminal and running:</p><pre>$ docker run -p 8080:8080 -e &quot;MB_KEY=$MB_KEY&quot; <br>         machinebox/classificationbox</pre><ul><li>You’ll need to have Docker installed</li><li>If you don’t have an MB_KEY, you can get one for free from the <a href="https://machinebox.io/account?utm_source=matblog-fakenews&amp;utm_medium=matblog-fakenews&amp;utm_campaign=matblog-fakenews&amp;utm_term=matblog-fakenews&amp;utm_content=matblog-fakenews">Machine Box website</a></li></ul><h3>Train and validate</h3><p>In a terminal, run the textclass tool:</p><pre>textclass -src /path/to/training-data</pre><p>You will be prompted a few times to hit Y to confirm what the tool will do:</p><pre>Classes<br> — — — -<br>fake: 300 item(s)<br>real: 300 item(s)<br>satire: 300 item(s)</pre><pre>Create new model with 3 classes? (y/n): y<br>new model created: 5abe0d3302484439<br>Teach and validate Classificationbox with 720 (80%) random items?
(y/n): y</pre><p>After some time, you will be presented with the results:</p><pre>Validation complete</pre><pre>Correct:    173<br>Incorrect:  7<br>Errors:     0<br>Accuracy:   96%</pre><p>So we now have a classifier that can make predictions, with 96% accuracy.</p><h3>Put into production?</h3><p>If you want to put your model into production, you can — since Classificationbox is just a Docker container, you can save the state file, and use it when spinning up new instances in your own environment, or in the cloud.</p><p>If you share the same state file with multiple instances of Classificationbox, you can load balance the traffic to achieve planet scale.</p><h4>Need help?</h4><p>We hang out all day in the <a href="https://machineboxslack.herokuapp.com/">Machine Box Community Slack, and you’re invited</a> to join us to ask questions, or tell us about what you’ve built.</p><h3>What is Machine Box?</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*GPdHUaxzqp2dJYd0l_hwcA.jpeg" /></figure><p><a href="https://machinebox.io/?utm_source=matblog-fakenews&amp;utm_medium=matblog-fakenews&amp;utm_campaign=matblog-fakenews&amp;utm_term=matblog-fakenews&amp;utm_content=matblog-fakenews">Machine Box</a> puts state of the art <strong>machine learning</strong> capabilities into <strong>Docker containers</strong> so developers like you can easily incorporate natural language processing, facial detection, object recognition, etc. into your own apps very quickly.</p><p>The boxes are <strong>built for scale</strong>, so when your app really takes off just add more boxes horizontally, to infinity and beyond. 
Oh, and it’s <strong>way cheaper</strong> than any of the cloud services (<a href="https://hackernoon.com/which-face-recognition-technology-performs-best-c2c839eb04e7">and they might be better</a>)… and <strong>your data doesn’t leave your infrastructure</strong>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=31e516418b1d" width="1" height="1" alt=""><hr><p><a href="https://medium.com/machine-box/detect-fake-news-by-building-your-own-classifier-31e516418b1d">Detect fake news by building your own classifier</a> was originally published in <a href="https://medium.com/machine-box">Machine Box</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Using Machine Learning to help us communicate better]]></title>
            <link>https://medium.com/@matryer/using-machine-learning-to-help-tech-pr-cks-communicate-better-258b16f7daf7?source=rss-f25c357b8e4c------2</link>
            <guid isPermaLink="false">https://medium.com/p/258b16f7daf7</guid>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[community]]></category>
            <dc:creator><![CDATA[Mat Ryer]]></dc:creator>
            <pubDate>Sat, 17 Mar 2018 19:21:20 GMT</pubDate>
            <atom:updated>2018-05-15T17:05:10.845Z</atom:updated>
            <content:encoded><![CDATA[<p>My favourite experience on an interview panel went something like this:</p><pre>Someone: How would you sort this list?<br>   Them: I don’t know<br>     Me: How would you find out?<br>   Them: I’d probably google it, or look on Stack Overflow</pre><p>Correct answer; we offered them the job.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/250/1*hfgn8k3-k5dZ9D6iBlcWGA.jpeg" /></figure><h4>Online communities need to be welcoming</h4><p>Using online resources like Stack Overflow is a great way to solve problems and learn things that you need to know for your particular task in hand, not just for beginners, but especially for them.</p><p>But sometimes, they can be harsh places to be.</p><p>Yesterday <a href="https://twitter.com/aprilwensel">April Wensel</a> tweeted some helpful alternatives to some pr*ckish comments made on Stack Overflow:</p><h3>April Wensel on Twitter</h3><p>This was just posted by a guy with 100K+ rep on a beginner StackOverflow question. EQ note: Even this kind of subtly condescending comment can be very discouraging. The words &quot;exactly&quot; and especially &quot;clearly&quot; are warning signs in this context.</p><p>April has a rare superpower of somehow being able to communicate the same information without sounding like an absolute _________ (insert your own word there).</p><h3>April Wensel on Twitter</h3><p>Consider this easy alternative: &quot;In order to help you find the error in line 49, we&#39;ll need to see the rest of the code. Could you add it to your question?</p><p>I realise the irony in childishly calling this person names, but there is a serious point underneath. People new to tech are already overwhelmed by the sheer size of the knowledgebase. 
They tend to assume that senior tech people know everything, and that makes them feel like they’re ever further away from where they need to be.</p><p>The truth is that senior people in tech only know a tiny fraction — but they probably know <em>how to figure things out</em>. One of the best ways is still through communities like Stack Overflow, and if they’re to have any value at all, it’s vital that these places remain open and welcoming to everybody.</p><p>If we give the anonymous commenter above the benefit of the doubt, we might conclude that they don’t realise how their comments come across, or maybe they’re one of those people who prides themselves on being blunt and forthright at all costs. Either way, shouldn’t we use technology to help us be better humans?</p><p>I replied with this joke:</p><h3>Mat Ryer on Twitter</h3><p>@aprilwensel Let&#39;s collect examples of courteous and helpful comments and examples of &#39;the other kind&#39; of comments and train a classifier. People can use it to test if their reply is likely to make them look like a prick or not. #JokingNotJoking</p><p>But then I realised that you could actually do this pretty easily with Classificationbox.</p><blockquote><a href="https://machinebox.io/docs/classificationbox?utm_source=matblog-language-classifier&amp;utm_medium=matblog-language-classifier&amp;utm_campaign=matblog-language-classifier&amp;utm_term=matblog-language-classifier&amp;utm_content=matblog-language-classifier">Classificationbox</a> is a build-your-own machine learning classifier in a Docker container.</blockquote><h4>Your models are only as bad as your training data</h4><p>The tricky bit is finding the training data. 
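</p><p>One way to organise the examples, once collected, would be to mirror the folder-per-class layout that the textclass tool expects (as in the fake-news walkthrough above): one folder per class, named after the class label, containing one text file per example. The file names here are just hypothetical placeholders:</p><pre>training-data/<br>  pleasant/<br>    comment1.txt<br>    comment2.txt<br>  unpleasant/<br>    comment1.txt<br>    comment2.txt</pre><p>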
We’d need to figure out a way to collect enough examples of the two classes of text in order to teach a model to be able to tell the difference.</p><p>I’m sure there are some smart ways of scraping this data from the online communities, or in the worst case we could manually collect it — perhaps as a community. Either way, the goal would be to get about 500 examples of each.</p><p>Classifiers like Classificationbox work better when the classes are balanced (i.e. they have the same number of examples), so the model doesn’t become biased.</p><p>You should also train the classifier in a balanced way by distributing the order in which examples are taught, rather than teaching all the examples from one class, followed by all the examples from the other.</p><h3>Teaching</h3><p>Machine Box is a little different to traditional machine learning technology in that it allows you to teach in a very light and inexpensive way, on CPUs rather than GPUs, and continuously — so you can improve the models over time without any retraining steps.</p><p>First, we’d create a model with a request like this:</p><pre>POST http://localhost:8080/classificationbox/models<br>{<br>  &quot;id&quot;: &quot;s1&quot;,<br>  &quot;name&quot;: &quot;Sentiment&quot;,<br>  &quot;options&quot;: {<br>    &quot;ngrams&quot;: 1,<br>    &quot;skipgrams&quot;: 1<br>  },<br>  &quot;classes&quot;: [<br>    &quot;pleasant&quot;,<br>    &quot;unpleasant&quot;<br>  ]<br>}</pre><p>The classes array lists the two classes of comments we are going to teach, while ngrams and skipgrams can be used to fine tune the language processors.</p><p>Then, to teach a pleasant example, we’d make a request like this:</p><pre>POST http://localhost:8080/classificationbox/models/s1/teach<br>{<br>  &quot;class&quot;: &quot;pleasant&quot;,<br>  &quot;inputs&quot;: [<br>    {&quot;key&quot;: &quot;comment&quot;, &quot;type&quot;: &quot;text&quot;, &quot;value&quot;: &quot;comment here&quot;}<br>  ]<br>}</pre><p>And an unpleasant one:</p><pre>POST 
http://localhost:8080/classificationbox/models/s1/teach<br>{<br>  &quot;class&quot;: &quot;unpleasant&quot;,<br>  &quot;inputs&quot;: [<br>    {&quot;key&quot;: &quot;comment&quot;, &quot;type&quot;: &quot;text&quot;, &quot;value&quot;: &quot;comment here&quot;}<br>  ]<br>}</pre><p>The inputs are just a slightly complicated way of saying that the comment text field value is comment here.</p><p>The additional complication comes from the fact that Classificationbox supports a myriad of data types:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*_dz5RTQdPYi630dZGVnjmg.png" /><figcaption>Classificationbox feature types let you build classifiers based on numbers, text and even images!</figcaption></figure><h3>Making predictions</h3><p>Once we’ve trained our model, we could then use it to make a prediction by providing a new set of inputs:</p><pre>POST http://localhost:8080/classificationbox/models/s1/predict<br>{<br>  &quot;limit&quot;: 1,<br>  &quot;inputs&quot;: [<br>    {&quot;key&quot;: &quot;comment&quot;, &quot;type&quot;: &quot;text&quot;, &quot;value&quot;: &quot;new comment here&quot;}<br>  ]<br>}</pre><p>The model will (surprisingly quickly) do its best to predict which of the two classes this new comment belongs in, based on what it learned from our examples.</p><p>The response gives us its answer:</p><pre>{<br>  &quot;success&quot;: true,<br>  &quot;classes&quot;: [<br>    {<br>      &quot;id&quot;: &quot;pleasant&quot;,<br>      &quot;score&quot;: 0.94<br>    }<br>  ]<br>}</pre><p>In this case, we can see that the model considers the comment to be pleasant.</p><p>Tech people could use this model to help them figure out if what they’re about to post is pleasant or not.</p><h3>Thought police?</h3><p>Like with any technology, this can be used or abused. 
As a firm believer in freedom of speech, I would hate to see this technique employed as a moderator, blocking input that didn’t adhere to its standards, like some kind of snowflake terminator. In the case of children, I could see a good argument for doing this, or at least providing it as an option for parents.</p><p>But people who are able to communicate without upsetting people are often better communicators, and who doesn’t want to be a better communicator?</p><h3>What is Machine Box?</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*GPdHUaxzqp2dJYd0l_hwcA.jpeg" /></figure><p><a href="https://machinebox.io/?utm_source=matblog-language-classifier&amp;utm_medium=matblog-language-classifier&amp;utm_campaign=matblog-language-classifier&amp;utm_term=matblog-language-classifier&amp;utm_content=matblog-language-classifier">Machine Box</a> puts state of the art <strong>machine learning</strong> capabilities into <strong>Docker containers</strong> so developers like you can easily incorporate natural language processing, facial detection, object recognition, etc. into your own apps very quickly.</p><p>The boxes are <strong>built for scale</strong>, so when your app really takes off just add more boxes horizontally, to infinity and beyond. Oh, and it’s <strong>way cheaper</strong> than any of the cloud services (<a href="https://hackernoon.com/which-face-recognition-technology-performs-best-c2c839eb04e7">and they might be better</a>)… and <strong>your data doesn’t leave your infrastructure</strong>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=258b16f7daf7" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>