Latest Posts
Fredrik Meyer 1 month ago

What I self host

I’ve always liked reading blogs, and have used several feed readers in the past (Feedly, for example). For a long time I was thinking it would be fun to write my own RSS reader, but instead of diving into the challenge, I did the next best thing: I found a decent one and learned how to self-host it. In this post I will tell you about the self-hosting I do, and end by sketching the setup.

Miniflux is a “minimalist and opinionated feed reader”. I host my own instance at https://rss.fredrikmeyer.net/. It is very easy to set up using Docker, see the documentation. I do have lots of unread blog posts 🤨.

I host a Grafana instance, also using Docker. What first triggered me to make this instance was an old project (that I want to revive one day): I had a Raspberry Pi with some sensors measuring gas and dust at my previous apartment, and a Grafana dashboard showing the data. It was interesting to see how making food at home had a measurable impact on volatile gas levels. Later I discovered the Strava datasource plugin for Grafana. It is a plugin that lets Grafana connect to the Strava API, and gives you summaries of your Strava activities. Below is an example of how it looks for me. One gets several other dashboards included in the plugin.

One day YourSpotify was mentioned on HackerNews. It is an application that connects to the Spotify API, and gives you aggregated statistics of the artists and albums you’ve listened to over time (why they chose to store the data in MongoDB, I have no idea!). It is interesting to note that I have listened to less and less music over the years (I have noticed that the more experience I have at work, the less actual programming I do). Because I didn’t bother setting up DNS, this one is only exposed locally, so I use Tailscale to be able to access YourSpotify. This works by having Tailscale installed on the host, and connecting to the Tailnet. It lets me access the application by typing its Tailscale hostname in the browser.
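As an aside, the Miniflux Docker setup mentioned above really is short. A compose file along the lines of the official documentation looks roughly like this (the credentials and port here are placeholders, not my actual configuration):

```yaml
services:
  miniflux:
    image: miniflux/miniflux:latest
    ports:
      - "8080:8080"
    depends_on:
      - db
    environment:
      - DATABASE_URL=postgres://miniflux:secret@db/miniflux?sslmode=disable
      - RUN_MIGRATIONS=1
      - CREATE_ADMIN=1
      - ADMIN_USERNAME=admin
      - ADMIN_PASSWORD=change_me
  db:
    image: postgres:15
    environment:
      - POSTGRES_USER=miniflux
      - POSTGRES_PASSWORD=secret
      - POSTGRES_DB=miniflux
```

With this, `docker compose up -d` and you have a feed reader.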
I have a problem with closing tabs, and a tendency to hoard information (don’t get me started on the number of unread PDF books on my Remarkable!). So I found Linkding, a bookmark manager, which I access at https://links.fredrikmeyer.net/bookmarks/shared. In practice it is a graveyard for interesting things I never have the time to read, but it gives me some kind of peace of mind.

I have an ambition of making the hosting “production grade”, but at the moment this setup is a mix of practices of varying levels of quality. I pay for a cheap droplet at DigitalOcean, about $5 per month, and an additional dollar for backup. The domain name and DNS are from Domeneshop, and the SSL certificates are from Let’s Encrypt. All the apps run in different Docker containers, with ports exposed. Nginx then proxies to these ports, and redirects HTTP to HTTPS.

I manage most of the configuration using Ansible. Here I must give thanks to Jeff Geerling’s book Ansible for DevOps, which was really good. So if I change my Nginx configuration, I edit it on my laptop, and run the playbook to let Ansible do its magic. In this case, “most” means the Nginx configuration and Grafana. Miniflux and YourSpotify are managed by simply running commands directly on the host. Ideally, I would like to have a 100% “infrastructure as code” approach, but hey, who has time for that!

It would be nice to combine AI and user manuals for house appliances to make an application that lets you ask questions like “what does the red light on my oven mean?”. Or write my own Jira, or… Lots of rabbit holes in this list on Github. Until next time!

Fredrik Meyer 5 months ago

All the ways I use AI

I had some very nice experiences with Claude Code recently, and I realized it would be fun to write down all the ways I use AI today (it is highly likely that all of this will change within the next year!). For context, I am a software developer; these days I mainly use Kotlin and Next.js at work.

I use Raycast as a Spotlight replacement on Mac. I have bound TAB to its “Quick AI” feature (which at the moment uses GPT-4o). I use it a lot for quick questions that don’t require a conversation. This has largely replaced Google searches for me. Sometimes I just write “postgres literal json array”, and I get the right response faster than a Google search. (For some reason it doesn’t support multi-line chat questions, which is a bit annoying with code questions…)

I have had a ChatGPT subscription for a while now. I use it for longer conversations and random questions. I have tried the conversational feature, but I prefer writing. Common uses for me are recipe suggestions when shopping for groceries, explaining words, et cetera.

I have a Garmin watch, and I use GarminDB to export activity and heart rate data to a local SQLite database. I found a JDBC MCP Server, and I wanted to see what the AI could do with the data. I was impressed, see the picture. After reading and understanding the database, it produced an HTML file with a nice plot.

Junie was the first agentic AI I tried (I got access to the early preview program). I used it to help me learn OpenGL in Java (which may be a topic for a future blog post). See this repo for some example commits (those with “thx Junie” in the commit message). I have also used it to write integration tests that require a lot of setup or boilerplate code. It understands the Java ecosystem quite well, and the generated code is quite good. Unfortunately it couldn’t browse the internet last time I tried it, so it will often pull in old versions of libraries.
It doesn’t seem to have access to IntelliJ’s refactoring functionality (it edits one file at a time).

I have tried Claude Code a few times. At the moment I’m using an API key to connect (which, according to Google searches, is probably more expensive if I were using it a lot). With Claude Code I could go a lot further with exploring the data from my Garmin watch. I asked it to use the R programming language to analyze and plot the data, and summarize everything in a PDF using LaTeX. One question I wanted to answer was whether bad sleep affected heart rates the following day. By analyzing the sleep scores and comparing with heart rates, it produced the plot on the right side. We see a fairly clear association between bad sleep and average heart rate (it turns out the converse also holds: higher heart rate (stress) leads to worse sleep).

Another tool is Simon Willison’s CLI tool. Over the last six months I’ve gotten into the habit of ending every work day by writing a few short sentences about what I did that day. I can copy all the days to the clipboard, and ask an LLM questions about how I’ve worked. The tool is also quite useful in offline environments. It supports using models from Ollama, running directly on my Mac. It is slower, but very useful when I don’t remember some standard library function, or some awkward syntax.

Google’s NotebookLM lets you upload PDFs, and you can ask questions about them. Its most famous feature is that it can make a podcast of the contents one has uploaded. I did this with my PhD thesis, and it was quite fun to hear. The most noticeable thing is that the hosts are extremely positive about simple ideas.

I want to test more AI packages in Emacs. It would also be fun to try to write an AI agent at some point, for example using JetBrains’ Koog or Pydantic’s framework for Python. For software engineering, AI is very good when used right.
I find that for autocompletion it can sometimes be more distracting than useful. Good uses include: code review, spotting bugs, and writing boring code. Bad (or worse) uses include: big tasks without clear boundaries, illustrations for presentations, and writing prose (actually, I really dislike this last use: AI is good for correcting prose or suggesting changes, but AI-written text is bad for sooo many reasons).
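The GarminDB workflow mentioned above boils down to running SQL against a local SQLite file. A minimal sketch of the idea in Python (the table and column names here are made up for illustration; GarminDB’s real schema differs):

```python
import sqlite3

# Hypothetical schema, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE heart_rate (day TEXT, resting_hr INTEGER)")
conn.executemany(
    "INSERT INTO heart_rate VALUES (?, ?)",
    [("2024-01-01", 52), ("2024-01-02", 55), ("2024-01-03", 49)],
)

# The kind of question an MCP-connected LLM answers by writing SQL for you:
avg_hr = conn.execute("SELECT AVG(resting_hr) FROM heart_rate").fetchone()[0]
print(f"Average resting heart rate: {avg_hr:.1f}")
```

The MCP server essentially lets the model run queries like this one against the real database and interpret the results.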

0 views
Fredrik Meyer 1 year ago

Estimating Pi with Kafka Streams

Recently I wanted to learn a bit about Apache Kafka. It is often used as a way to do event sourcing (or similar message-driven architectures). An “add-on” to the simple publish/subscribe pattern in Kafka is Kafka Streams, which provides ways to process unbounded data sets. I wanted to write something slightly more complicated than the examples in the Kafka documentation.

Back in the day we all learned the formula $A=\pi r^2$ for the area of a circle. We place the circle of radius 1 inside the square $[-1,1]^2$ (see the picture). The square has an area of $2 \times 2 = 4$, so the area of the circle is $\pi \cdot r^2 = \pi \cdot 1^2 = \pi$. The circle therefore covers $\pi / 4 \approx 0.7854$ of the square’s area. One (silly, very bad) way to estimate the area of the circle is to sample random points inside the square, and count how many of them land inside the circle. We end up with this formula: [ \pi = 4 \cdot \lim_{n \to \infty} \frac{ \#( \text{hits inside the circle} )}{n} ] We need two ingredients to estimate Pi this way: a way of generating (pseudo-)random numbers (which any normal computer can do), and a way to decide if a given point is inside the circle. To decide if a given point is inside the circle, we just check that its distance from the origin is at most one: [ (x,y) \mapsto x^2+y^2 \leq 1 ] We realize now that this algorithm can be implemented in 25 lines of Python (with plenty of spacing), but let us use our skills as engineers to over-engineer instead.

Glossing over details, Kafka consists of topics and messages. Clients can publish and subscribe to these topics. It also promises “high throughput”, “scalability”, and “high availability”. Since I’m doing this on my laptop, none of this applies. Kafka Streams is a framework for working with Kafka messages as a stream of events, and provides utilities for real-time aggregation. Below is a high-level overview of the Kafka Streams topology that we will build: We produce a stream of random pairs of numbers to a Kafka topic.
Then, in the aggregate step, we use a state store to keep track of the number of hits so far (together with the total number of random points processed). The code to build the topology is quite simple: we consume input from the topic (making it into a stream), then we calculate a running estimate of π in an aggregation step. Finally, we output the estimate to an output Kafka topic. We also output the relative error to another topic. The code for calculating the estimate is also quite short: the accumulator is just a simple Java record that keeps track of the total number of messages consumed, and how many landed inside the circle.

I also set up a simple Javalin app that publishes the messages via websockets to localhost. To do this, one writes a Kafka consumer for each topic, and uses a standard poll loop. Then I used µPlot to continuously update the current estimate. As always, I put my code on Github. To run the project, first start Kafka, then run the application, open your browser, and start seeing estimates come in. Below is a screen recording I did.

Even though this little project was embarrassingly simple, I learned more than just the Kafka Java API. I decided to also write tests, mostly to increase my own learning. For example, writing the tests for the stream topology (see source here) made me realize that the topology is completely stateless. It is just a specification of inputs, transformations, and outputs. If I had written the tests earlier, I would have avoided having to restart the Kafka container again and again. I spent an awful lot of time writing the single test I have for the consumer class. It uses Kafka’s class for mocking a consumer. The only documentation I could find for this class was a few examples on Baeldung’s pages.
(In general, I find that documentation for “beginners” is often too basic, outdated, or lacking.) The first thing I tried when I started exploring Kafka was to publish a message on some topic, and subscribe to the same topic in another terminal. About one second later, I got the message. I was a bit surprised that I wouldn’t see the message immediately, but after meditating on this for a little while, I realized that there is a difference between latency and throughput. There is also tons of configuration that I did not explore.

I chose to not take serialization seriously, so I just used Java’s built-in serialization interface. This is of course fine for testing applications, but it turned out to be cumbersome when I changed the classes involved. The solution was usually to delete and restart the Kafka container. In a real application one would rather use a serialization method that doesn’t crash on schema changes (Protobuf, Avro, JSON, …). Also, one must be mindful about what to do with invalid messages. The default is to crash if any message is invalid. There is even a whole chapter about this in the book Designing Data-Intensive Applications (Chapter 4).

Take a look at the code, and if you find any errors here, don’t hesitate to contact me.
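As an appendix, the “25 lines of Python (with plenty of spacing)” version alluded to above might look like this — a sketch, not code from the repository:

```python
import random

def estimate_pi(n: int, seed: int = 42) -> float:
    """Estimate pi by sampling n points uniformly in [-1, 1]^2
    and counting how many land inside the unit circle."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = rng.uniform(-1.0, 1.0)
        y = rng.uniform(-1.0, 1.0)
        if x * x + y * y <= 1.0:
            hits += 1
    # hits / n approximates pi / 4, so multiply by 4.
    return 4.0 * hits / n

print(estimate_pi(1_000_000))
```

No topics, no state stores — which is of course exactly why over-engineering it with Kafka Streams was the fun part.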

Fredrik Meyer 1 year ago

Implementing a 2d-tree in Clojure

Recently I followed the very good course “Algorithms, Part I” on Coursera. The exercises were in Java, and the most fun one was implementing a two-dimensional version of a k-d tree. Since I sometimes do generative art in Clojure, I thought this would be a fun algorithm to implement myself. There already exist other implementations, for example this one, but this time I wanted to learn, not use.

A 2-d tree is a spatial data structure that is efficient for nearest neighbour and range searches in a two-dimensional coordinate system. It is a generalization of a binary search tree to two dimensions. Recall that in a binary search tree, one builds a tree structure by inserting elements such that the left nodes are always lower, and the right nodes always higher. That way, one only needs $O(\log(n))$ lookups to find a given element. See Wikipedia for more details. In a 2-d tree one manages to do the same with points $(x,y)$ by alternating the comparison on the $x$ or $y$ coordinate. For each insertion, one splits the coordinate system in two. Look at the following illustration: This is the resulting tree structure after having inserted the points $(0.5, 0.5)$, $(0.6, 0.3)$, $(0.7, 0.8)$, $(0.4, 0.8)$, and $(0.4, 0.6)$. For each level of the tree, the coordinate to compare with alternates. The following illustration shows how the tree divides the coordinate system into sub-regions: To illustrate searching, let’s look up $(0.4, 0.6)$. We first compare it with $(0.5, 0.5)$. The $x$ coordinate is lower, so we look at the left subtree. Now the $y$ coordinate is lower, so we look at the left subtree again, and we find our point. This is 2 comparisons instead of the maximum 5. There’s a lot more explanation on Wikipedia. Let’s jump straight to the implementation in Clojure.
We first define a node to contain three values: the point to insert, a boolean indicating if we are comparing vertically or horizontally (vertical means comparing the $x$-coordinate), and a rectangle indicating which subregion the node corresponds to. (Note: we don’t really need to carry around the rectangle information - it can be computed from the boolean and the previous point. I might optimize this later.) Insertion is almost identical to insertion into a binary search tree.

Where the structure shines is when looking for the nearest neighbour. The strategy is as follows: keep track of the “best so far” point, and only explore subtrees that are worth exploring. When is a subtree worth exploring? Only when its region is closer to the search point than the current best point. In addition, we do one optimization when there are two subtrees: we explore the closest subtree first. Here’s the full code: In the recursion, we keep a stack of paths. When exploring a new node, we add it to the top of the stack, and when recurring, we pop the current stack. Here’s how the data structure looks after inserting the same points as in the illustration above:

I wanted the tree structure to behave like a normal Clojure collection. The way to do this is to implement the required interfaces. For example, to be able to use the usual sequence functions, we have to implement the right interface. To find out which methods we need to implement, I found this Gist very helpful. I create a new type for the tree. When implementing the interfaces, I took a lot of inspiration (and implementation) from this blog post by Nathan Wallace. Also, thanks to the Reddit user who pointed out a bad implementation. Here is the diff after his comments.

We can create a helper method to create new trees. Also, the following code works to get all points whose second coordinate is greater than 0.5. The full code can be seen here on Github.
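For readers who don’t speak Clojure, the insertion and nearest-neighbour logic can be sketched in Python (a sketch of the algorithm, not a translation of my code — in particular it skips the rectangle bookkeeping):

```python
from __future__ import annotations

import math
from dataclasses import dataclass
from typing import Optional, Tuple

Point = Tuple[float, float]

def dist(a: Point, b: Point) -> float:
    return math.hypot(a[0] - b[0], a[1] - b[1])

@dataclass
class Node:
    point: Point
    vertical: bool  # True: compare x-coordinates at this level
    left: Optional[Node] = None
    right: Optional[Node] = None

def insert(node: Optional[Node], p: Point, vertical: bool = True) -> Node:
    if node is None:
        return Node(p, vertical)
    axis = 0 if node.vertical else 1
    if p[axis] < node.point[axis]:
        node.left = insert(node.left, p, not node.vertical)
    else:
        node.right = insert(node.right, p, not node.vertical)
    return node

def nearest(node: Optional[Node], query: Point,
            best: Optional[Point] = None) -> Optional[Point]:
    if node is None:
        return best
    if best is None or dist(query, node.point) < dist(query, best):
        best = node.point
    axis = 0 if node.vertical else 1
    # Explore the subtree containing the query point first.
    if query[axis] < node.point[axis]:
        first, second = node.left, node.right
    else:
        first, second = node.right, node.left
    best = nearest(first, query, best)
    # Only explore the other side if its region can contain a closer point.
    if abs(query[axis] - node.point[axis]) < dist(query, best):
        best = nearest(second, query, best)
    return best

root = None
for p in [(0.5, 0.5), (0.6, 0.3), (0.7, 0.8), (0.4, 0.8), (0.4, 0.6)]:
    root = insert(root, p)
print(nearest(root, (0.41, 0.61)))  # → (0.4, 0.6)
```

A cheap sanity check is to compare against brute force: `min(points, key=lambda p: dist(p, q))` should give the same answer for random inputs, which is essentially the property-based check described later in the post.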
I did this partly to learn a simple geometric data structure, but also to make another tool in my generative art toolbox. Implementing an algorithm helps immensely when trying to understand it: I got better and better at visualizing what these trees look like.

There are two main things about Clojure I want to mention in this section: the test.check library, and the Clojure interfaces. test.check is a property-based testing library. In a few words, given constraints on inputs, it can generate test data for your function. In one particular case, it helped me verify that my code had a bug, and produce a minimal example of it (the bug was that I forgot to recur in one of the clauses). By writing some “simple-ish” code, I got an example of an input that made the function return a wrong answer. Here is the code: It is probably more verbose than needed, but the summary is this: the generator function returns a generator, which, given some constraints, can return sample inputs (in this case: vectors of points). Then I compare the result from the tree search with the brute force result given by first sorting the points, then picking the first point. The way I’ve set it up, whenever I run my tests, 10,000 test cases are generated, and the test fails if my implementation doesn’t return the correct result. This was very handy, and quite easy to set up.

It was rewarding to implement the Clojure core interfaces for my type. What was a bit frustrating, though, was the lack of documentation. I ended up reading a lot of Clojure source code to understand the control flows. Basically, the only thing I know about the interface is that it is a Java interface, and I had to search the Clojure source code to understand how it was supposed to be used. A docstring or two would have been nice. I found many blog posts that implemented custom types (this one, this one, or this one), but very little in Clojure’s own documentation.
On the flip side, I got to read some of the Clojure source code, which was very educational. I also got to understand a bit more of the usefulness of protocols (providing several implementations of the same operations). Here it was very useful to read the source code of thi-ng/geom.

I learned a lot, and I got one more tool to make generative art. Perhaps later I could publish the code as a library, but I should really battle test it a bit more first (anyone can copy the code, it is open source on my Github). I used the data structure to create the following pictures (maybe soon I’ll link to my own page instead of Instagram). The nearest neighbour search was very useful in making the code fast enough. Until then, thanks for reading this far!

Fredrik Meyer 1 year ago

Setup of new Macbook

I just got a new Macbook, and I thought it would be useful for my future self to write down what I installed on it. Luckily the history file in my shell is long enough to remember everything. The order of the steps is quite random.

I hear there are other alternatives out there, but I stick to Brew for now. I installed Brew the “official” way. This seemed to automatically install the XCode Command Line Tools. Follow the install instructions (this adds an init script to my shell config). This installs Clojure and OpenJDK 21, from the official Clojure documentation. I do my Clojure programming in Emacs with Cider and clojure-lsp.

I use this version of Emacs on Mac. The second line of the install makes it possible to open Emacs with Finder. My Emacs configuration is stored here. Since I don’t install it on a new machine very often, I usually have to restart Emacs a few times before it works.

I use Tmux to manage windows in my terminal. My Tmux configuration is Git managed; here is the current version. It first requires installing the Tmux plugin manager. One of the packages makes copy on select work as expected. I wrote about how I use Tmux here.

Mostly by habit I use oh-my-zsh for terminal configuration. I’m mostly happy with the default configuration. I do a lot of frontend development, so I will probably need to install more Node-related packages, but I needed Node to let Emacs install LSP clients automatically (many of them are stored on NPM). I do a lot of my backup using Jottacloud. They have a CLI utility to select directories to backup, from the official documentation. For editor integration, also add the language server.

I was looking for a good window manager for Mac. Spectacle is not maintained anymore (I do think it still works), but after some searching, I found Rectangle. Open source and easy to use. I mostly use it to maximize windows and to move windows to the left/right half. I use GNU Stow to manage (some of) my dotfiles; a good intro is here.
I keep the dotfiles in a private repository (maybe I’ll make it public one day). I use bat sometimes to read code files with syntax highlighting in the terminal, and ripgrep for fast search (it also makes some Emacs plugins faster). Remember to update the Git config; at the moment mine looks like this. This blog is built using Jekyll, so it needs Ruby installed. It also uses (at the moment) an old version of Ruby, so I installed Ruby 2.7.3 with a version manager.

That seems to be all (for now) that is installed via the CLI. I have usually always installed iTerm2, but I noticed I don’t use many of its features (tabs, themes, etc.), so for now I’m sticking with the built-in Terminal app.

It’s easy and it stores all my passwords. For the interruptions. All my files. Sometimes I need a VPN. What’s life without music? (silent) The app allows downloading series. I have the Remarkable tablet, and I often use the app to upload PDFs. Sometimes I play games. Unfortunately, many games don’t work anymore on Mac - and I might be too lazy to try installing Windows on it.

Fredrik Meyer 2 years ago

How I use Tmux

When working in the terminal, I like to move efficiently between panes. To do this I use tmux, which is a terminal multiplexer. I had to look that word up, so here is what Wikipedia says: “A terminal multiplexer is a software application that can be used to multiplex several separate pseudoterminal-based login sessions inside a single terminal display, (…)” A multiplexer is a device that combines several signal sources into a single signal. So in short, tmux lets you combine several terminal sessions into a single one. It also remembers the configuration if you manage to close your terminal, since the server keeps running in the background.

Let’s just dive into how I would start a day at work. After typing tmux, my terminal looks something like this: Then I press the prefix and a split key to split the terminal into two horizontal panes (there is another binding to split vertically). I can use the prefix and the arrow keys to jump between panes. Now I can run, for example, an editor in the left pane, a dev server in the bottom right, and a free terminal in the upper right pane. If I think it starts to get too crowded, I can open a new window to start fresh with zero panes (these are the numbers at the bottom in the pictures). To navigate between windows I use the next/previous window bindings.

I use a very simple configuration file (put it in ~/.tmux.conf): The configuration file does two things: first, by default mouse actions are not supported, so I enable them (which lets me copy text by marking it, for example). The second option makes sure that new panes and windows are opened in the same directory as the original pane. There is of course a lot more you can do, but at this point, these are the commands I use the most. For more inspiration (and better guides than this one), see for example the awesome-tmux repository. This was not meant to be a guide; for that I would recommend the official documentation.
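For reference, a configuration doing those two things typically looks like the following (the standard way to do it, though not necessarily my exact file):

```
# Enable mouse support (select to copy, click to switch panes).
set -g mouse on

# Open new panes and windows in the current pane's directory.
bind '"' split-window -c "#{pane_current_path}"
bind % split-window -h -c "#{pane_current_path}"
bind c new-window -c "#{pane_current_path}"
```

Reload it in a running session with `tmux source-file ~/.tmux.conf`.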

Fredrik Meyer 2 years ago

Using Emacs to backup a Raspberry Pi

At home, I manage my smart lights (Philips Hue, IKEA Trådfri, etc.) from a Raspberry Pi running the Deconz Zigbee gateway. A week ago, I suddenly couldn’t turn off my kitchen lights - and I soon realized it was because the SD card in my Raspberry had stopped working. Resetting smart lights is a hassle, so I was very happy when I discovered that a simple backup script I had set up ages ago still worked (it backed up every night to Google Drive). It turns out that the Deconz software only needs the SQLite database to remember the connection information of the lights, so just copying my backup file to the configuration directory of Deconz was enough to reconnect to the lights (once I managed to actually install the Deconz software, but that is another story…).

This time I decided to use Amazon S3 to back up the Deconz database, since I’m more familiar with how S3 works, thanks to work experience. Create an S3 bucket and an IAM user that is allowed to write files to this specific bucket. I use AWS CDK for this, but it should be quite easy to do it via the GUI as well. After the user is created, go to the IAM console, and create access keys. They consist of an ID and a secret part. Install the AWS CLI on your Raspberry Pi, and check that it worked. Add your credentials by running aws configure and follow the instructions.

Of course I could have written a simple Bash script, but writing Bash feels like fixing the electrics at home without being a professional - it might work, or you might shoot yourself in the foot. So I decided to try using Emacs, since I had already installed Emacs on the Raspberry (again, because I’m more used to Emacs keybindings than anything else). It required some Googling, but it turned out short and understandable: The script is quite simple. I define the relevant file names, and then the command to run (stored in a variable). Then I run the command.
By placing the shebang at the top of the file, and making it executable with chmod +x, one can run the script just by typing its name. To not have everything in my home directory, I moved the script out of there and changed its owner accordingly. To make it run periodically, I googled how crontab works, and added a snippet to the crontab file. Now the script will run every night at 03:00. This is not exactly advanced Emacs usage, but I found it fun to be able to use Elisp instead of Bash when I had the chance.
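For reference, the crontab entry for “every night at 03:00” has this shape (the script path here is an example, not my actual path):

```
# minute hour day-of-month month day-of-week  command
0 3 * * * /usr/local/bin/backup-deconz.el
```

Edit the crontab with `crontab -e` (or the system-wide file, if the script runs as another user).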

Fredrik Meyer 3 years ago

Introduction to AWS CloudFormation

In the old days, well before my time as a programmer, you had servers, which maybe had a Java server running, connecting to an Oracle SQL database, managed by some other team. The infrastructure itself was “simple” (a bunch of servers, a firewall, and a database). Then came Kubernetes and cloud providers with managed services, making the number of components larger. To be able to manage the infrastructure in a reproducible way, one would want to keep infrastructure configuration in source control.

Infrastructure as Code is a way of defining infrastructure with code (obviously). This has the advantage that the infrastructure and its configuration can be put in version control, and often also in a CI pipeline. This in turn ensures that whatever is on the Git branch represents the true state of things. In this post I will explain AWS’ CloudFormation.

The most popular Infrastructure as Code tool is probably Terraform, which looks like below: This code defines an S3 bucket with an ID and a bucket name. The language of Terraform is the HashiCorp Configuration Language (HCL). In the AWS world, the two most popular infrastructure as code tools are Terraform and CloudFormation. CloudFormation is natively supported within AWS, and is written in YAML (for good and for worse…). This is how CloudFormation looks: Resources are, roughly speaking, things you can create in AWS: this includes things like Lambda functions, S3 buckets, IAM roles/permissions, and so on. They can be referenced by other resources, in order to create a resource graph (this resource refers to that resource, and so on). A single such YAML file defines a stack, which is a collection of resources in the same “namespace”.

To make things clearer, let us go through a simple example. Let us first create a bucket using the above snippet. Save it to a file, and run the deploy command with the AWS CLI. You will see something like this: If this completes successfully, it will create an S3 bucket with the given name.
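A minimal bucket-only template of the kind described above might look like this (the logical ID and bucket name are examples):

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  MyBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-example-bucket-name
```

Note that S3 bucket names are globally unique, so pick your own.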
Let us add a Lambda Function, by adding this to the YAML file: Phew! That was a mouthful. The first block defines a Lambda function which only logs the sentence “I’m a lambda!”. The next field just defines what the Lambda function is allowed to do. This one is only allowed to log to CloudWatch.

Let’s do something slightly more complex. Let the Lambda function be able to list all objects in the bucket. Change the Lambda code to the following: And add the following policy in the IAM Role: Now the Lambda will be allowed to list objects in the S3 bucket. For the full template, see this gist. To deploy this, run the same command as above, and wait for it to finish. To test the Lambda function, go to the Lambda console, and choose “Test”. If the function doesn’t crash, you will see a green box with its output.

These steps should give the reader an idea of what CloudFormation is and how it works. The documentation is very good and has a lot of examples, so I recommend always starting there when wondering about all the possible properties. As you can see, CloudFormation has a tendency to become quite complex. To keep things under control, a good first step is to use something like cfn-lint, which can validate your CloudFormation and warn you about problems before you deploy. A good next step would be to instead use something like AWS CDK (Cloud Development Kit). This is a framework which takes your favorite programming language and compiles it to CloudFormation. If you’re going to write a lot of infrastructure as code in AWS, CDK is definitely the way to go. (But that will be another blog post.)

(To clean up, the easiest is to go to the AWS Console, then CloudFormation, then “Delete Stack”. Note that some resources are just “orphaned”, not deleted. This includes DynamoDB tables and S3 buckets, which must be deleted manually afterwards.)
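The extra permission described above (letting the Lambda list objects in the bucket) corresponds to a statement roughly like this inside the role definition (the policy name and bucket logical ID are illustrative):

```yaml
Policies:
  - PolicyName: AllowListBucket
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Action: "s3:ListBucket"
          Resource: !GetAtt MyBucket.Arn
```

Note that `s3:ListBucket` is granted on the bucket ARN itself, not on the objects inside it - mixing those two up is a common source of AccessDenied errors.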

Fredrik Meyer 4 years ago

Filtering JSON logs with JQ

One of my favorite command line tools is jq (homepage). It is often very useful for looking at structured JSON data. My most common usage of jq is very simple: whenever some command returns a long JSON, it is very handy to pipe the result to jq. Like this: Then I get a syntax highlighted response that is easy to read (in this case the response was already formatted, but sometimes a JSON response is unformatted).

Another use case is for looking through logs. As a first step, list all log groups in your AWS account: A little explanation is in order: the format of the response is a JSON like this: The first part of the filter chooses the content of the key. The second unpacks the array and prints the items one after another. The final part picks out the name of the log group.

To pretty-print JSON logs from AWS Lambda functions, one can run this little beast: The first thing we do is to get all log messages from the last day from our log group. Since all log messages are prepended with the timestamp, we cut away the timestamp (thanks, Google!). Then we parse all JSON logs and ignore the rest (thank you Stackoverflow). If we want to see all error logs, we can just change the above command accordingly. jq also supports sorting and much more advanced manipulation of JSON input. Take a look at the manual for more.
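For completeness, the “cut the timestamp, parse the JSON lines, and ignore the rest” trick is easy to mimic in Python when jq isn’t available — a sketch with made-up log lines:

```python
import json

log_lines = [
    '2024-05-01T12:00:00 {"level": "ERROR", "message": "boom"}',
    '2024-05-01T12:00:01 {"level": "INFO", "message": "ok"}',
    "2024-05-01T12:00:02 plain text, not JSON",
]

def parse_json_logs(lines):
    """Cut off the leading timestamp, keep what parses as JSON, skip the rest."""
    for line in lines:
        _, _, rest = line.partition(" ")
        try:
            yield json.loads(rest)
        except json.JSONDecodeError:
            continue

errors = [e for e in parse_json_logs(log_lines) if e.get("level") == "ERROR"]
print(errors)  # → [{'level': 'ERROR', 'message': 'boom'}]
```

The `try/except` plays the same role as jq ignoring non-JSON lines, and the list comprehension corresponds to a `select()` filter.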

Fredrik Meyer 4 years ago

How to connect to localhost from a Docker container

At work I recently spent a lot of time trying to get a running Docker container to connect to a service running on localhost. The setup is this: we have a Docker container that runs inside a Docker bridge network together with other Docker containers it can communicate with. On the host machine there is a service which listens on localhost. For the purposes of this blog post, let’s just assume this service’s job is to stream messages to some server. I wanted one of the containers to connect to the service running on localhost. In this post I will explain how I got it to work (only on Linux). The picture is something like this:

Docker containers inside the same bridge network can easily communicate with each other by using their container ID as the host name. However, Docker Container A cannot easily talk to something on localhost on the host machine. There is even someone on StackOverflow saying that this is not possible! “Without network: host you can’t go back from a container to the host. Only host to container. This is the main ideology behind containers. They are isolated for both stability and security reasons.” This is a truth with modifications. The way Docker manages its networking is by suitably modifying iptables rules. Which means that we can do the same! We are going to use iptables to route traffic to the bridge gateway inside a container to traffic on localhost on the host machine (the next steps are thanks to this very helpful StackOverflow answer).

The IP of the bridge gateway can be found by inspecting the bridge network with the Docker CLI. To not have to worry about the actual IP, you can start the container with an extra host mapping. This will map a host name to the IP of the bridge gateway. First we have to find the name of the Docker bridge network. This is not the same as the ID you see when you list the networks, but almost. If you use the default bridge network, the name should be docker0. If you made your own network, the name will be br- followed by a prefix of the network ID (see the original code). You can also get the name of the bridge by listing the network interfaces on the host.
You can set the bridge name yourself when you create the network (source): just remember that it can be at most 15 characters long. We need to allow routing from the bridge to localhost. Note that this has some mild security implications. The next step is to use iptables to forward traffic aimed at the bridge gateway to localhost. Once this is done, you can connect to the service on localhost on the host machine by calling the mapped host name from inside the container. Happy coding!
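As an appendix, the two steps above in command form — a sketch assuming the default docker0 bridge, gateway IP 172.17.0.1, and a service on port 8000 (adjust to your setup, and run as root):

```
# 1. Allow packets arriving on the bridge to be routed to 127.0.0.1:
sysctl -w net.ipv4.conf.docker0.route_localnet=1

# 2. Rewrite bridge traffic aimed at the gateway IP to localhost:
iptables -t nat -A PREROUTING -i docker0 -d 172.17.0.1 \
    -p tcp --dport 8000 -j DNAT --to-destination 127.0.0.1:8000
```

After this, a container on the bridge can reach the host service via the gateway IP (or the host name mapped to it).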
