Latest Posts (16 found)
alikhil 2 weeks ago

The Simple Habit That Saves My Evenings

As a software engineer, I often work on big tasks that require hours of continuous, focused work. But between meetings, colleagues pinging me in Slack, lunch breaks, and, if I work from the office, someone stopping by to invite me for a cup of coffee, hours of uninterrupted time are a luxury I rarely get.

Nevertheless, sometimes we catch the flow of productive, focused work at the end of the workday. Imagine you come up with an elegant solution to a problem you've been tackling all day, or maybe even the whole past week. You can't wait to implement and test it. And of course, you are so driven by your idea that you decide to keep working despite your workday being over. "20 minutes more and I will finish it," you think. Obviously, this is never the case; some edge cases and new issues will inevitably arise. You come to your senses only 2–3 hours later: tired, hungry, demotivated, and still struggling with your problem. You just wasted your evening, with nothing to show for it. Worse, you overworked and didn't recover that night, so you start the next day already exhausted. You can imagine what follows next. Nothing good, really.

I remember this happening back when I worked at a fast-growing startup, KazanExpress, while living in Innopolis. Our office was buzzing with energy, and the pace was intense: we often pushed ourselves late into the night. One evening, I felt like I had finally cracked a tricky part of our infrastructure. I thought, "Just 20 more minutes and I'll finish it." Of course, those 20 minutes stretched into well over three hours. By the time I left the office, I was tired, hungry, and frustrated, without any real progress to show. The next morning, walking back to the office, I realized how drained I already felt before even starting work.
That was when it became clear to me: it's better to stop, write down the next steps, and come back with a fresh head. Of course, some might argue, "But you are considering only the negative scenario; one could really finish that job in 20 minutes and go home happy." Sure, but I don't think that risk is worth it. Instead, I suggest doing something else. Rather than trying to complete your task in 20 minutes, take that time to write down your thoughts and a step-by-step action plan of what you think you need to do to finish your task. Then go home. Rest. The feeling of incompleteness will motivate you to come back and finalize your work the next day. Only now you will be full of energy, together with a settled plan. No doubt you'll accomplish your task before lunch.

Writing down the next steps also helps to clear your mind after a workday: you write, and then you forget about your work until the next morning. As a bonus, there is a chance that new, better ideas will come while you sleep or rest. I have been using this trick for more than 5 years now, and it helps me keep my work and life balanced. Here are its two main ideas:

- Don't overwork
- Write down the next steps before finishing your workday

alikhil 2 months ago

OAuth2-proxy: protect services in kubernetes

The original post about oauth2-proxy, written over seven years ago, was quite popular at the time and attracted a lot of organic traffic to my blog, which still benefits my SEO today. Since the tutorial had become outdated, I decided to rewrite it.

We have a Kubernetes cluster with several web services deployed for internal use. We want to expose our internal web services to the Internet, but restrict access by requiring authorization. Access should be granted only to users authenticated through our Identity Provider (such as Google, GitHub, Keycloak, etc.). For simplicity, let's assume that both ingress-nginx and cert-manager are already deployed in the cluster. I will use Pocket ID as the Identity Provider in this tutorial. Configuration differs slightly between providers; check the official documentation for yours.

For the examples in this guide, I'll use my domain:

- will be used for Pocket ID
- will be used for oauth2-proxy. I recommend using a higher-level domain for the oauth2-proxy service for an easier cookie setup.
- reserved for services deployed for internal usage

I have added two DNS records:

- a record for k8s.alikhil.dev pointing to the ingress-nginx IP address in the cluster ( )
- a record for pointing to

Go to https://pocket-id.k8s.alikhil.dev/signup/setup and set the initial configuration for Pocket ID. Then add your passkey. Create a developers group and add yourself to the list of members. After that, go to the OIDC clients page and create one for oauth2-proxy. Set the proper callback URL. Save the generated Client ID and Client Secret for later use.

I am using raw Kubernetes Secrets in this tutorial, but I highly recommend storing secrets in Vault or a similar service and using External Secrets Operator to deliver them to Kubernetes.

To check oauth2-proxy we need a dummy service; I will use the whoami Helm chart for this. Go to the whoami URL and check whether oauth2-proxy redirects you to Pocket ID like in the demo.

Later, when you need to protect any service in Kubernetes with oauth2-proxy, you simply need to add two annotations to your Ingress resource.
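For reference, with ingress-nginx these two annotations typically take the following shape; the oauth2-proxy hostname below is a placeholder for your own, not a value from this post:

```yaml
metadata:
  annotations:
    # Ask oauth2-proxy whether the request is already authenticated.
    nginx.ingress.kubernetes.io/auth-url: "https://<oauth2-proxy-host>/oauth2/auth"
    # Where to send unauthenticated users to start the login flow.
    nginx.ingress.kubernetes.io/auth-signin: "https://<oauth2-proxy-host>/oauth2/start?rd=$scheme://$host$request_uri"
```

With these set, ingress-nginx performs a subrequest to oauth2-proxy for every incoming request and only forwards it to your service when the auth endpoint returns 2xx.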

alikhil 3 months ago

Note on cable management for standing desk

Hey! Eventually, my desk got pretty messy with all the cables and other stuff piled up on it. Also, my cats started playing with and gnawing on the wires. So, I decided to reorganize everything on my desk. Check it out!

I've got a medium-sized table, about 120 by 80 cm. It's wide (deep) but not too long. I need to keep everything in order and clean so I can work comfortably. Here's what my setup looked like before I made all the changes I mentioned. As you can see, I've got a work laptop, a Mac mini, a monitor on its own stand, a journal, AirPods, and a few other things. I also bought a USB switch at some point because I got tired of plugging and unplugging my webcam manually in my KVM-less setup.

Here's what I changed:

- I mounted the monitor to an arm, which freed up space below the screen.
- I set up my Mac mini under the table.
- I've got my MacBook Pro mounted under the table.
- I installed the USB hub on the monitor arm. It's a solid connection point, but the switch button on it is a bit uncomfortable to press, so I put the extended switch button under the table, close to the chair.
- There's a headphone holder mounted under the table.
- The cables that ran next to each other were routed through sleeves.
- I bound cables together with hook-and-loop ties.

The cable sleeve is super handy for binding cables together! It is a bit pricey, but it lets you pull cables out from the middle without damaging it.

The gear I used:

- MacBook under-desk mount support. The anti-scratch stickers came loose on the second day, so I don't recommend this one.
- Mac mini under-desk support
- Hook-and-loop ties
- Cable clips
- Adjustable monitor support arm

alikhil 3 months ago

How to Use 3 Computers with One Monitor, Keyboard and Mouse – Without a KVM Switch

Hi there! Today, I want to share how I organize my three-computer setup (MacBook Air, Mac mini, and Raspberry Pi) without a KVM switch, using a single keyboard, mouse, and monitor. As you can see, all peripheral devices can connect to all my computers. At least in theory.

Historically, my devices are numbered as follows:

- #1: ClockworkPi uConsole, based on the RPi CM4, a portable device for Linux tinkering.
- #2: MacBook Air M2 14", my work laptop.
- #3: Mac mini M1, my personal computer.

And the peripherals:

- The UGREEN Vertical Wireless mouse can connect up to three devices: two via Bluetooth and one via a 2.4 GHz USB dongle.
- The Keychron K15 Max keyboard allows me to connect up to three devices via Bluetooth.
- The Dell 27 4K UHD monitor (S2722QC) has two HDMI inputs and one USB-C port, which allows me to connect up to three devices.

The keyboard connection scheme is straightforward: the uConsole connects via Bluetooth as the first device, the MacBook Air as the second, and the Mac mini as the third. Later, I can use a hotkey to switch the keyboard between devices.

The first connection of the UGREEN mouse is reserved for the 2.4 GHz USB dongle. I plugged it into my uConsole, and then connected my MacBook Air and Mac mini as the second and third devices, respectively. To switch devices with the mouse, press the red button on the bottom.

The MacBook Air is connected to the USB-C port, which also charges the computer. The Mac mini connects via HDMI-1, and the uConsole via HDMI-2. The easiest way to switch devices on the monitor is to press the small, round button in the bottom right corner to open the settings menu, then choose the desired port. However, "easiest" does not mean "most comfortable": changing the monitor's input source this way requires pressing these small buttons several times, and you also need to keep track of which devices are connected to HDMI-1 and HDMI-2.

Instead of pressing buttons on the monitor and navigating its menu, I can press a hotkey on my keyboard to trigger the monitor to change input sources via its API (Display Data Channel/Command Interface, DDC/CI). I have explained in detail how to do this on a Mac in another post, Monitor input source control on Mac, so I won't go into detail here.

I'll just say that on my Mac mini (#3) and MacBook Air (#2) I configured hotkeys that trigger a switch to each of the other devices, and for the uConsole Linux machine (#1) I used ddcutil and configured similar hotkeys.

Once all the devices are connected and the hotkeys are configured, the following sequence of actions is needed to switch to device N (where N is 1, 2, or 3):

- Switch the monitor input source by pressing its hotkey.
- Switch the keyboard device by pressing its hotkey.
- Switch the mouse by pressing the red button on its bottom until the indicator under N is blinking.

For example, let's assume I am currently using my third device, the Mac mini. To switch to the MacBook Air (#2), I press the monitor and keyboard hotkeys, then press the red button on the bottom of the mouse until the indicator under 2 is on. Here is a demo video.

This approach has some limitations:

- Switching the mouse takes longer than the keyboard and monitor, since the mouse only iterates one way over the connected devices.
- Three actions are needed to switch all peripheral devices, as opposed to one action with a KVM.

Still, if you have a keyboard and mouse that support multiple connected devices, as well as such a monitor, you won't need a special KVM switch to switch between computers. Thank you for reading this!
I still need to manually reconnect other peripheral devices, e.g. my web camera.
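On the Linux side, the ddcutil hotkeys boil down to setting VCP feature code 60 (Input Source). As a sketch, a helper function like the one below can be bound to hotkeys; note that the value-to-port mapping is monitor-specific (check `ddcutil capabilities`), so the values here are illustrative, and the `DRY_RUN` guard is my addition so the command can be previewed without DDC hardware:

```shell
#!/bin/sh
# switch_input PORT - set the monitor input source via DDC/CI.
# VCP feature 60 is "Input Source"; values are illustrative and vary per monitor.
switch_input() {
    case "$1" in
        hdmi1) value=0x11 ;;
        hdmi2) value=0x12 ;;
        usbc)  value=0x1b ;;
        *) echo "usage: switch_input hdmi1|hdmi2|usbc" >&2; return 1 ;;
    esac
    cmd="ddcutil setvcp 60 $value"
    # With DRY_RUN set, print the command instead of talking to the monitor.
    if [ -n "$DRY_RUN" ]; then echo "$cmd"; else $cmd; fi
}
```

You would then bind e.g. `switch_input hdmi2` to a hotkey in your desktop environment of choice.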

alikhil 4 months ago

Why Graceful Shutdown Matters in Kubernetes

Have you ever deployed a new version of your app in Kubernetes and noticed errors briefly spiking during the rollout? Many teams do not even realize this is happening, especially if they are not closely monitoring their error rates during deployments.

There is a common misconception in the Kubernetes world that bothers me. The official Kubernetes documentation and most guides claim that "if you want zero downtime upgrades, just use rolling update mode on deployments". I have learned the hard way that this simply is not true: rolling updates alone are NOT enough for true zero-downtime deployments. And it is not just about deployments. Your pods can be terminated for many other reasons: scaling events, node maintenance, preemption, resource constraints, and more. Without proper graceful shutdown handling, any of these events can lead to dropped requests and frustrated users.

In this post, I will share what I have learned about implementing proper graceful shutdown in Kubernetes. I will show you exactly what happens behind the scenes, provide working code examples, and back everything with real test results that clearly demonstrate the difference.

If you are running services on Kubernetes, you have probably noticed that even with rolling updates (where Kubernetes gradually replaces pods), you might still see errors during deployment. This is especially annoying when you are trying to maintain "zero-downtime" systems. When Kubernetes needs to terminate a pod (for any reason), it follows the three-step sequence listed at the end of this post: SIGTERM, then a grace period, then SIGKILL. The problem? Most applications do not properly handle that SIGTERM signal. They just die immediately, dropping any in-flight requests.

In the real world, while most API requests complete in 100-300ms, there are often long-running operations that take 5-15 seconds or more. Think about processing uploads, generating reports, or running complex database queries. When these longer operations get cut off, that's when users really feel the pain.
Rolling updates are just one scenario where your pods might be terminated. Here are other common situations that can lead to pod terminations:

- Horizontal Pod Autoscaler events: when HPA scales down during low-traffic periods, some pods get terminated.
- Resource pressure: if your nodes are under resource pressure, the Kubernetes scheduler might decide to evict certain pods.
- Node maintenance: during cluster upgrades, node draining causes many pods to be evicted.
- Spot/preemptible instances: if you are using cost-saving node types like spot instances, these can be reclaimed with minimal notice.

All these scenarios follow the same termination process, so implementing proper graceful shutdown handling protects you from errors in all of these cases, not just during upgrades.

Instead of just talking about theory, I built a small lab to demonstrate the difference between proper and improper shutdown handling. I created two nearly identical Go services, a Basic Service and a Graceful Service; both are described at the end of this post. I specifically chose a 4-second processing time to make the problem obvious. While this might seem long compared to typical 100-300ms API calls, it perfectly simulates those problematic long-running operations that occur in real-world applications. The only difference between the services is how they respond to termination signals.

To test them, I wrote a simple k6 script that hammers both services with requests while triggering a rolling restart of each service's deployment. Here is what happened: the basic service dropped 14 requests during the update (that is 2% of all traffic), while the graceful service handled everything perfectly, without a single error. The results speak for themselves.

You might think "2% is not that bad", but if you are doing several deployments per day and have thousands of users, that adds up to a lot of errors. Plus, in my experience, these errors tend to happen at the worst possible times.
After digging into this problem and testing different solutions, I have put together a simple recipe for proper graceful shutdown. While my examples are in Go, the fundamental principles apply to any language or framework you are using. Here are the key ingredients.

First, your app needs to catch that SIGTERM signal instead of ignoring it. This part is easy: you are just telling your app to wake up when Kubernetes asks it to shut down.

Second, you need to know when it is safe to shut down, so keep track of ongoing requests. This counter lets you check whether there are still requests being processed before shutting down. It is especially important for those long-running operations that users have already waited several seconds for: the last thing they want is to see an error right before completion!

Third, here is a commonly overlooked trick: you need different health check endpoints for liveness and readiness. This separation is crucial. The readiness probe tells Kubernetes to stop sending new traffic, while the liveness probe says "do not kill me yet, I'm still working!"

Now for the most important part, the shutdown sequence. I've found this order to be optimal: first, we mark ourselves as "not ready" but keep running; we pause to give Kubernetes time to notice and update its routing; then we patiently wait until all in-flight requests finish before actually shutting down the server.

Finally, do not forget to adjust your Kubernetes configuration. It should tell Kubernetes to wait up to 30 seconds for your app to finish processing requests before forcefully terminating it.

If you are in a hurry, here are the key takeaways:

- Catch SIGTERM signals: do not let your app be surprised when Kubernetes wants it to shut down.
- Track in-flight requests: know when it is safe to exit by counting active requests.
- Split your health checks: use separate endpoints for liveness (am I running?) and readiness (can I take traffic?).
- Fail readiness first: as soon as shutdown begins, start returning "not ready" on your readiness endpoint.
- Wait for requests: do not just shut down; wait for all active requests to complete first.
- Use built-in shutdown: most modern web frameworks have graceful shutdown options; use them!
- Configure the termination grace period: give your pods enough time to complete the shutdown sequence.
- Test under load: you will not catch these issues in simple tests; you need realistic traffic patterns.

You might be wondering if adding all this extra code is really worth it. After all, we're only talking about a 2% error rate during pod termination events. From my experience working with high-traffic services, I would say absolutely yes, for three reasons:

- User experience: even small error rates look bad to users. Nobody wants to see "Something went wrong" messages, especially after waiting 10+ seconds for a long-running operation to complete.
- Cascading failures: those errors can cascade through your system, especially if services depend on each other. Long-running requests often touch multiple critical systems.
- Deployment confidence: with proper graceful shutdown, you can deploy more frequently without worrying about causing problems.

The good news is that once you have implemented this pattern once, it is easy to reuse across your services. You can even create a small library or template for your organization. In production environments where I have implemented these patterns, we have gone from seeing a spike of errors with every deployment to deploying multiple times per day with zero impact on users. That is a win in my book!

If you want to dive deeper into this topic, I recommend checking out the article Graceful shutdown and zero downtime deployments in Kubernetes from learnk8s.io.
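The health-check split and the termination grace period map onto the Deployment spec roughly like this; a sketch, with names, port, and probe timings being my illustrative choices rather than values from the post:

```yaml
# Pod template fragment: separate liveness/readiness endpoints plus a grace
# period long enough for the whole shutdown sequence to complete.
spec:
  terminationGracePeriodSeconds: 30   # SIGTERM -> up to 30s -> SIGKILL
  containers:
    - name: app
      ports:
        - containerPort: 8080
      livenessProbe:
        httpGet: { path: /healthz, port: 8080 }   # "am I alive?"
      readinessProbe:
        httpGet: { path: /readyz, port: 8080 }    # "can I take traffic?"
        periodSeconds: 2   # poll often so "not ready" is noticed quickly
```

The short readiness period matters here: the faster Kubernetes notices the failing readiness probe, the sooner it stops routing new requests to the terminating pod.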
It provides additional technical details about graceful shutdown in Kubernetes, though it does not emphasize the critical role of readiness probes in properly implementing the pattern, as we have discussed here. For those interested in seeing the actual code I used in my testing lab, I've published it on GitHub with instructions for running the demo yourself.

Have you implemented graceful shutdown in your services? Did you encounter any other edge cases I didn't cover? Let me know in the comments how this pattern has worked for you!

The termination sequence:

- Kubernetes sends a SIGTERM signal to your container.
- It waits for a grace period (30 seconds by default).
- If the container does not exit after the grace period, it gets brutal and sends a SIGKILL signal.

The two services from the lab:

- Basic Service: a standard HTTP server with no special shutdown handling.
- Graceful Service: the same service but with proper SIGTERM handling.

Both services:

- process requests that take about 4 seconds to complete (intentionally configured for easier demonstration),
- run in the same Kubernetes cluster with identical configurations,
- serve the same endpoints.

alikhil 6 months ago

Remote LAN access with WireGuard and Mikrotik

Recently I figured out how to access my home and cloud networks remotely using WireGuard and a Mikrotik Hex S router. With this step-by-step tutorial I will show you (and perhaps my future self) how to do it.

The setup:

- A Mikrotik Hex S router with a dynamic public IP address.
- Services in a cloud VM (Ubuntu 22) with a static public IP address.
- Services in a VM on my home network.
- Clients, laptops and phones, that need to access the services in my home and cloud networks.

The requirements:

- Clients outside of my home network should be able to access services on both my home and cloud networks.
- Only traffic to my home and cloud networks should be routed through the VPN.
- Clients inside my home network should be able to access services on my cloud network without additional configuration.
- No external centralized service should be used.
- No open ports on my home router.

Nowadays, there are plenty of VPN solutions like ZeroTier and Tailscale. However, I think they are too bloated for my humble needs, and WireGuard is more than enough. Because of the last requirement, it's obvious that traffic to the home network should be routed through my cloud server. So, I will use WireGuard to create a tunnel between the Mikrotik router and the cloud server. This way, I can access my home network from anywhere without exposing any ports on my home router. For the sake of reproducibility and simplicity, I will use vanilla WireGuard and configure it at the OS level, not in a Docker container. I will use the subnet for the WireGuard network.

It should show that the interface is up, like this. More commands to check the status of the service:

I will use the Mikrotik command line interface (CLI) to configure the router. You can use Winbox or WebFig if you prefer. Here we add the cloud VM as a peer to the Mikrotik router's WireGuard interface. The public key of the cloud server is needed here. You should see the Mikrotik peer in the list of peers.
I recommend using your smartphone as the first client device, because it can work from both home Wi-Fi and mobile data; this way you can test the connection from both networks. Also, install on your smartphone:

- the WireGuard app (iOS, Android)
- a network debugging app (iOS, Android)

Install the WireGuard app for your client OS. Then, generate public and private keys on the device: create a config from scratch in the app and then click on the Generate keypair button. Or you can generate the keys on the cloud server and then copy them to the client device. On the cloud server, edit the WireGuard config file and add the client as a peer. Each time, increment the previous peer's address by 1. You already have public and private keys for the client device; the other configuration parameters are:

- Address - the address of the peer in the WireGuard subnet. Put the same address you set in the field in the previous step.
- DNS servers - if you have AdGuard/Pi-hole running on the cloud server, you can use it as a DNS server; put its IP address here.
- MTU and ListenPort - you can leave them empty, they will be set automatically.
- Endpoint - the cloud server IP address and port (27277).
- Public key - the public key of the cloud server.
- AllowedIPs - here we put all the subnets that we want to access from this client device.

Here is an example of the configuration file for the client device:

That's it! Disconnect your device from the home Wi-Fi, switch to mobile data, and connect to the VPN. Then try to:

- ping the cloud server and Mikrotik router IP addresses in the WireGuard subnet;
- check the ports of services in Docker containers on the cloud and home server VMs.

Since I have two AdGuard instances and I use them as DNS servers everywhere, I will add DNS records for accessing my services:

- *.cloud.domain.com - pointing to the traefik Docker container on the cloud server
- *.home.domain.com - pointing to the traefik Docker container on the home server

Thus, I can access my services using domain names instead of IP addresses. I hope this tutorial was helpful for you. I will keep it updated if I find any issues or improvements. If you have any questions or suggestions, feel free to leave a comment below. Credits to @laroberto for their guide on LAN access with WireGuard; I followed it to set up the initial configuration and then adapted it to my needs.
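A client configuration of the shape described above might look roughly like this; the keys, addresses, and subnets below are placeholders of mine, not the actual values from this setup (only the port 27277 comes from the post):

```ini
[Interface]
PrivateKey = <client-private-key>
Address = 10.0.0.4/32          # previous peer's address + 1
DNS = 10.0.0.1                 # e.g. the AdGuard instance on the cloud server

[Peer]
PublicKey = <cloud-server-public-key>
Endpoint = <cloud-server-public-ip>:27277
# Route only the WireGuard, home, and cloud subnets through the tunnel,
# so all other traffic leaves the device directly:
AllowedIPs = 10.0.0.0/24, 192.168.88.0/24, 10.10.0.0/24
PersistentKeepalive = 25
```

Keeping AllowedIPs narrow is what satisfies the "only traffic to my home and cloud network should be routed through the VPN" requirement; setting it to 0.0.0.0/0 would instead send everything through the tunnel.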

alikhil 6 months ago

Storing and using secrets in Mikrotik RouterOS

Recently I replaced my stock ISP router with a Mikrotik Hex S. I have been using it for a while and I am very happy with it. It is a very powerful device that can be programmed and automated with a built-in scripting language. When I started writing my first scripts I faced a problem: how to store and use secrets in my scripts. I found a solution and I want to share it with you.

Let's say I want to write a script that will send me Telegram notifications. To do so I need to store my Telegram bot token and chat ID. Since I keep my RouterOS configuration in a git repository, I don't want to hardcode my secrets in the script. RouterOS has a low-level feature which can be used to store secrets. However, it can be inconvenient and a bit messy to use directly in scripts. Instead, I would like to have a higher-level API to store and use secrets. And I have one: there is a post on the Mikrotik forum by a user with the nickname Amm0, who shared a script with a global function that can be used to store and retrieve secrets like this:

Now, I modify my script to use this function to keep it clean and secure:

The good thing about this approach is that the secret storing and retrieval mechanism is encapsulated and can easily be changed in the future without changing the scripts. Also, it is easy to use and understand. Keep your secrets safe and happy scripting!
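I won't reproduce Amm0's function here, but usage in a notification script looks roughly like the sketch below; the function name `$SecretGet` and the secret names are illustrative stand-ins, not the forum post's actual API:

```routeros
# Fetch secrets by name instead of hardcoding them in the script.
:global SecretGet
:local botToken [$SecretGet "telegram-bot-token"]
:local chatId [$SecretGet "telegram-chat-id"]

# Send a Telegram notification using the retrieved secrets.
/tool fetch url="https://api.telegram.org/bot$botToken/sendMessage?chat_id=$chatId&text=Hello from RouterOS" keep-result=no
```

The script itself stays safe to commit to git, since the actual token and chat ID live only on the router.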

alikhil 6 months ago

Unintended Side Effects of Using http.DefaultClient in Go

The Internet is full of articles telling you why you should not use http.DefaultClient in Go (one, two), but they refer to and settings. Today I want to share with you another reason to avoid using it in your code.

As an SRE at Criteo, I both read and write code. Last week, I worked on patching Updatecli, an upgrade automation tool written in Go. The patch itself was just ~15 lines of code. But then I spent three days debugging a strange authorization bug in an unrelated part of the code. It happened because of code like this:

Since http.DefaultClient is a reference, not a value, the code above is effectively the same as:

Later, in a third-party library, I found this:

As a result, the patched client with the authorization transport got injected into the third-party library, causing unexpected failures. To prevent this, I had to change the code to:

Bugs like this are hard to catch just by reading the code, since they involve global state mutation. But could they be detected by linters? What do you think? How do you find or prevent such issues in your projects?

alikhil 11 months ago

Forwarding SMS to Telegram

After extensive travel, I've accumulated several mobile numbers and, naturally, physical SIM cards. Switching them out each time became tedious, even after buying a basic Nokia with two SIM slots, which only helped temporarily. When a friend asked if I could set up a Spanish number for account registrations, I realized it was time to automate the process. If you're dealing with multiple SIM cards and want to receive SMS in Telegram, I have a straightforward approach. You'll need a Linux machine that's always online and connected to the internet, and about $10.

If you have a USB modem at home, check whether it's supported by Gammu. For our purposes, we don't need an expensive 4G modem with advanced features. Any basic 2G/3G modem will work, and these are easy to find at a discounted price on sites like eBay or Wallapop. Search for "Huawei USB modem", sort by price, and look for unlocked options or ones with compatible firmware. For instance:

Next, go to the Gammu website and look up the device. Make sure it appears on the list and that "SMS" is included in the "Supported features" column. If the device meets these requirements, it's good to go!

Before starting the setup, it's best to connect the modem, with the SIM card already inserted, to your PC and check that it's functioning properly. Run the following command to identify the device path. You should see paths similar to: Choose a path that ends with , in my case it's .

Using Docker Compose, set up your configuration. Save the configuration to a file and run it. If everything is set up correctly, you should see the following log messages:

To test SMS reception, you can use free online SMS-sending services (search for "send SMS for free") or try logging into Telegram, your bank account, etc. The Gammu library provides a unified interface for working with phones and modems from various manufacturers.
On top of that, there's the Gammu SMS Daemon, which receives SMS messages and triggers a custom script, in our case a script that sends the messages to Telegram. Thanks to @kutovoys for the idea and the Docker image! This is a simple, affordable, and scalable solution, especially if you're into self-hosting. This post was originally written for vas3k.club.

What you will need:

- A physical SIM card
- A USB modem that's supported by the Gammu library
- A Telegram bot token, and a chat or channel ID
- A Linux machine with a free USB port
- Docker and Docker Compose installed
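The Compose configuration mentioned above has roughly this shape; the image name and environment variable names here are illustrative placeholders (check the docs of the image you use, e.g. the one by @kutovoys, for the exact ones):

```yaml
services:
  sms-to-telegram:
    image: <gammu-sms-to-telegram-image>   # placeholder for the actual image
    restart: unless-stopped
    devices:
      # Pass the modem through to the container; use the path found earlier.
      - /dev/ttyUSB0:/dev/ttyUSB0
    environment:
      TELEGRAM_BOT_TOKEN: "<bot-token>"
      TELEGRAM_CHAT_ID: "<chat-or-channel-id>"
```

The key parts are the `devices` mapping, which gives the containerized Gammu daemon access to the USB modem, and the Telegram credentials, which the forwarding script uses to deliver incoming SMS.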

alikhil 12 months ago

Monitor input source control on Mac

If, like me, you have a single monitor and two Mac devices (for example, I have a corporate MacBook and a personal Mac Mini), you may want to use the same monitor for both of them, and to switch between them without unplugging and plugging cables or selecting the input source with the monitor buttons. In this post I will show you how to configure hotkeys for that.

You will need a monitor with multiple input sources . For example, I have a Dell S2722QC that has 2 HDMI ports and 1 USB-C port, where:

- the MacBook Air is connected to port HDMI-2
- the Mac Mini is connected to port USB-C-1

There is an app called BetterDisplay that has a lot of powerful features, but for our case we need only one: changing display inputs using DDC . Install it on both Macs; you will have a 14-day trial period with all PRO features. Enable Accessibility for BetterDisplay in System Settings -> Privacy & Security -> Accessibility. Then try to switch the input source by clicking the BetterDisplay icon in the menu bar -> DDC Input Source -> Select next port. If it works, you can continue to the next step. Otherwise, check whether your monitor supports the DDC protocol and ensure Accessibility is enabled for BetterDisplay.

If you are ready to pay $19/€19 twice, once for each Mac, you can buy BetterDisplay and then configure hotkeys in the app settings: Settings -> Keyboards -> Custom keyboard shortcuts -> DDC Input Source . Click “Record Shortcut” and press the key combination you want to use.

If, like me, you don’t want to pay around $40 for a single feature, there is a hacky way to do it. We need an app that can handle hotkeys and run shell commands. I use Raycast , the so-called “Spotlight on steroids”, which can handle custom hotkeys; you can use any other app you like. Before configuring Raycast, we need to know the value of each input source. To find them, go to Settings -> Displays -> “Your monitor name” -> DDC Input Sources , and save the IDs from the Value column for each input source: in my case, one for HDMI-2 and one for USB-C-1.

Then create a directory and put a script there that switches the input source. Try running it in the terminal; if it works, you can continue to the next step.

Open Raycast and go to Settings -> Extensions -> Search for Scripts. Click Add Script Directory and select the directory with your script. Click Record Shortcut for the newly added script and press the key combination you want to use, one on the first Mac and another on the second.

And that’s it! Now you can switch input sources using hotkeys.
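The script itself isn't shown above, so here is a hypothetical equivalent written as a Raycast script command. It assumes the open-source `m1ddc` CLI for sending DDC commands (the original post may have used a different tool), and `17` stands in for the Value you saved from BetterDisplay:

```shell
#!/bin/bash

# Raycast script command metadata:
# @raycast.schemaVersion 1
# @raycast.title Switch monitor to HDMI-2
# @raycast.mode silent

# Assumption: m1ddc (github.com/waydabber/m1ddc) is installed.
# Replace 17 with the Value you noted for the target input source.
m1ddc set input 17
```

One such script per target port, each bound to its own hotkey on each Mac.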

alikhil 6 years ago

Build own drone.io docker image

¡Hola, amigos! In this post, I will quickly describe how you can build your own drone.io docker image. Drone is a very popular container-native CI/CD platform. Not long ago, version 1.0 of Drone was released, which brought a lot of cool features and a new license . The license says that we can use the Enterprise version of Drone for free, without any limits, by building our own docker image, provided we are individuals or a startup (read the licence for more detail). So, how do we build it?

First, clone the drone repo to your local machine. Second, check out the version of Drone you want to build; for example, to build v1.3.1 , run `git checkout v1.3.1`. We will use a single dockerfile to build the image. To do so, we need to add an extra step to the existing dockerfile in the docker/ directory. Let’s say we want to build the docker image for the linux OS and amd64 architecture; then we will edit docker/Dockerfile.server.linux.amd64. If you check the dockerfile you will see that the binaries are just copied into the docker image during the build, and that they are built outside of the docker build. So, the step we will add to the dockerfile is a build step. To build the binary, we need to know which version of Go is used to build it for the original docker image; we can find this in the project’s build configuration. Then we use that same golang image to build the binary in our own docker/Dockerfile.server.linux.amd64. We also need to delete the .dockerignore file from the root of the repo. Then we build the docker image with `docker build`.

That’s all! Now you can use your own newly built docker image instead of the official one, if your use case meets the license conditions.
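The added build stage could look roughly like this. It is a sketch only: the golang image tag and output paths are assumptions (check which Go version Drone's own CI used for v1.3.1), not taken from the post.

```dockerfile
# docker/Dockerfile.server.linux.amd64 -- sketch of the extra build stage.
# Assumption: golang:1.12; match this to the version used by Drone's CI.
FROM golang:1.12 AS build
WORKDIR /go/src/github.com/drone/drone
COPY . .
RUN go build -o /out/drone-server ./cmd/drone-server

# ...the rest of the original Dockerfile follows, with its COPY of the
# server binary changed to: COPY --from=build /out/drone-server /bin/
```

Then `docker build -f docker/Dockerfile.server.linux.amd64 -t my/drone:v1.3.1 .` produces the image.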

alikhil 6 years ago

Deploy SPA application to Kubernetes

Hello, folks! Today I want to share with you a tutorial on how to deploy your SPA application to Kubernetes. The tutorial is oriented toward those who aren’t very familiar with docker and k8s but want their single page application to run in k8s. I expect that you have docker installed on your machine; if you don’t, you can install it by following the official installation guide .

As the SPA project I will use vue-realworld-example-app ; you can use your own SPA project if you have one. So, I have cloned it, installed the dependencies, and built it.

The next step is to decide how our application will be served. There are a bunch of possible solutions, but I decided to use nginx, since it recommends itself as one of the best http servers. To serve an SPA, we need to return requested files if they exist, or otherwise fall back to index.html. To do so I wrote an nginx config; the full config file can be found in my fork of the repo .

Then we need to write a Dockerfile for building the image with our application. We assume that the build artifacts are placed in the build output directory, so that during the docker build the content of that directory is copied into the container’s web-root directory. Now we are ready to build and run the image. If we then open http://localhost:8080 we will see the app. Cool! It works!

We will need our newly built docker image to deploy to k8s, so we need to make it available from the k8s cluster by pushing it to some docker registry. I will push the image to DockerHub .

To run the application in k8s we will use the Deployment resource type (deployment.yaml). We create the deployment by running `kubectl apply -f deployment.yaml`, and the newly created pods can be found with `kubectl get pods`. Then we need to expose our app to the world. This can be done using a service of type NodePort or via Ingress; we will do it with Ingress. For that we will need a service (service.yaml) and the ingress itself (ingress.yaml).

And here it is! Our SPA runs in the k8s!
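The nginx config and Dockerfile described above could look roughly like this. This is a sketch: the `dist/` output directory and the file paths are assumptions based on a typical Vue build, not taken verbatim from the post.

```nginx
# nginx.conf (sketch): serve requested files if they exist,
# otherwise fall back to index.html so client-side routing works
server {
    listen 80;
    root /usr/share/nginx/html;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }
}
```

```dockerfile
# Dockerfile (sketch): package the built SPA into an nginx image
FROM nginx:alpine
COPY nginx.conf /etc/nginx/conf.d/default.conf
COPY dist/ /usr/share/nginx/html/
```

With these in place, `docker build -t my/spa .` and `docker run -p 8080:80 my/spa` serve the app locally.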

alikhil 6 years ago

Tips for language learners

If you expected only posts on IT topics on this blog, I am sorry :( Today I’ll share my experience in learning a new language. I am practicing these techniques and tips while mastering Español, but I am sure that you can apply them to most other languages. If you are only starting to learn, this will be doubly helpful for you.

Most people give up learning within several weeks, or even days, of beginning. It’s because of their motivation: it drops over time, and in the beginning, when you know almost nothing and realize that you will have to work hard and work a lot, it is really demotivating. I think elements of gamification will help to increase motivation and turn the learning process into a habit. You can gamify your learning in any way you want. I recommend:

- Duolingo or Lingualeo - good for learning the basics of the language
- Tinycards - for learning/memorizing new words

As soon as you have learned the basics, start listening to podcasts. There are some free audio podcasts oriented toward beginners; for example, in Spanish, there is Notes in Spanish .

Create a playlist of up to 20 songs you like in the language you are learning and listen to them regularly, and keep listening until you understand what each song is about. You don’t have to translate all the words in each song; in fact, that is not very useful. On the contrary, when you learn new words from a textbook, Tinycards, or anywhere else, and then hear one of those words while listening to your playlist, you learn it better. – Oh, wait. I know what this word means …

If you are a fan of TV shows, you can start watching them in the language you are learning, with subtitles or without them. Of course, this requires some basic knowledge. If you worry that you will miss some key points of the story and prefer to fully understand it, you can watch sitcoms whose episodes have almost nothing in common with each other.

alikhil 7 years ago

Oauth2 Proxy for Kubernetes Services

Hello, folks! In this post, I will go through configuring the Bitly OAuth2 proxy in a kubernetes cluster. Note: there is a fresh tutorial about oauth2-proxy .

A few days ago I was configuring SSO for our internal dev-services in KE Technologies . I spent the whole day making it work properly, and in the end I decided to share my experience by writing this post, hoping that it will help others (and possibly me in the future) go through this process.

We have internal services in our k8s cluster that we want to be accessible to developers. It can be kubernetes-dashboard or kibana or anything else. Before this we used Basic Auth, which is easy to set up in ingresses. But this approach has several disadvantages:

- We need to share a single pair of login and password for all services among all developers
- Developers are asked to enter credentials each time they access a service for the first time

What we want is for a developer to log in once and have access to all other services without additional authentication. So, a possible scenario could be:

- Developers open https://kibana.example.com , which is an internal service
- The browser redirects them to https://auth.example.com , where they sign in
- After successful authentication, the browser redirects them back to https://kibana.example.com

Note: using kube-lego for configuring Let’s Encrypt certificates is deprecated now; consider using cert-manager instead.

Note: initially, when I was writing this post, I used the old nginx ingress version 0.9.0, because it did not work correctly on newer versions. I have since found the problem, and it has been fixed in the 0.18.0 release; however, the ingress exposing private services should be updated ( more details ).

First of all, we need a Kubernetes cluster. I will use a newly created cluster in Google Cloud Platform with version 1.8.10-gke.0 . If you have a cluster with configured ingress and https, you can skip this step. Then we need to install nginx ingress and kube-lego using helm. After they are installed, we can retrieve the controller IP address and create DNS records to point our domain and subdomains to this IP address. Let’s run a simple HTTP server as a service and expose it using the nginx ingress (example-ing.yaml). Wait a few seconds, open https://service.example.com , and you should see the demo page.

In this post, we will use GitHub accounts for authentication.
So, go to https://github.com/settings/applications/new and create a new OAuth application. Fill the Authorization callback URL field with https://auth.example.com/oauth2/callback where example.com is your domain name. After creating the application you will have a Client ID and a Client Secret, which we will need in the next step.

There are a lot of docker images for the OAuth proxy, but we cannot use them because they do not support domain white-listing; such functionality has not been implemented yet. Actually, there are several PRs that solve the problem, but they seem to be frozen for an unknown amount of time. So, the only thing I could do was merge one of the PRs into the current master and build my own image. You can also use my image, but if you worry about security, just clone my fork and build the image yourself.

Let’s create a namespace and set it as current, then deploy the proxy (oauth-proxy.deployment.yml, oauth-service.yml, oauth-ing.yml). You can update the ingress that we used while configuring nginx-ingress or create a new one (example-ing.yml). Then visit service.example.com and you will be redirected to the GitHub authorization page. Once you authenticate, you will have access to all your services under ingresses that point to auth.example.com, until the cookie expires.

And that’s it! Now you can put any of your internal services behind an ingress with OAuth.

Here is a list of resources that helped me go through this process the first time:

- https://eng.fromatob.com/post/2017/02/lets-encrypt-oauth-2-and-kubernetes-ingress/
- https://www.midnightfreddie.com/oauth2-proxy.html
- https://thenewstack.io/single-sign-on-for-kubernetes-dashboard-experience/
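The protected-ingress wiring described in this post can be sketched like this. It is a sketch only: the hostnames, service name, and port are placeholders, and the annotations and API version are those of the current nginx ingress controller, which differ from the 2018-era manifests the post used.

```yaml
# example-ing.yml (sketch): ingress for an internal service that
# delegates authentication to the oauth2 proxy at auth.example.com
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kibana
  annotations:
    nginx.ingress.kubernetes.io/auth-url: "https://auth.example.com/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://auth.example.com/oauth2/start?rd=$scheme://$host$request_uri"
spec:
  rules:
    - host: kibana.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kibana
                port:
                  number: 5601
```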

alikhil 7 years ago

Say hello to Hugo

Today I migrated my blog to the Hugo engine. So, it’s my first post here, yaay! There were only 2 posts on my old blog, and I decided not to migrate the one about creating a blog with Jekyll, since it is no longer relevant.

alikhil 7 years ago

Go Quickstart

Hi folks! It’s been a long time since I published my last post, but now I am back with a short quickstart guide for Go . In this tutorial, we will configure a Go environment in VS Code and write our first program in Go.

The first thing you need to do is install Go on your computer. To do so, download the installer for your operating system and run it.

By language convention, Go developers store all their code in a single place called the workspace . Go also puts dependency packages in the workspace. So, for Go to work correctly, we need to set the GOPATH environment variable to the path of the workspace. We also need to add $GOPATH/bin to PATH in order to run compiled Go programs.

Next, install the official Go extension for VS Code and the delve debugger. I also recommend adding a few Go-related entries to your VS Code user settings (settings.json).

Move to your workspace directory and create a directory for your project, then open it using vscode. Create a Go source file and put a Hello World program there. Finally, run the program by pressing the run button in VS Code, and you should see the message printed to the Debug Console .

That’s all! My congratulations, you have just written your first program in Go!

If you fail to run your program and there is some message like “Cannot find a path to …”, try adding to your PATH environment variable the directory where the missing binary is stored. For example, on macOS I added such a line to my shell profile.
