Latest Posts (16 found)
Anton Zhiyanov 4 days ago

Go proposal: Goroutine metrics

Part of the Accepted! series, explaining the upcoming Go changes in simple terms. Export goroutine-related metrics from the Go runtime. Ver. 1.26 • Stdlib • Medium impact

New metrics in the runtime/metrics package give better insight into goroutine scheduling. Go's runtime/metrics package already provides a lot of runtime stats, but it doesn't include metrics for goroutine states or thread counts. Per-state goroutine metrics can be linked to common production issues. An increasing waiting count can show a lock contention problem. A high not-in-go count means goroutines are stuck in syscalls or cgo. A growing runnable backlog suggests the CPUs can't keep up with demand. Observability systems can track these counters to spot regressions, find scheduler bottlenecks, and send alerts when goroutine behavior changes from the usual patterns. Developers can use them to catch problems early without needing full traces.

Add the following metrics to the package: the total number of goroutines since the program started, the number of goroutines in each state, and the number of active threads. The per-state numbers are not guaranteed to add up to the live goroutine count (/sched/goroutines:goroutines, available since Go 1.16). All metrics use uint64 counters.

Start some goroutines and print the metrics after 100 ms of activity: no surprises here — we read the new metric values the same way as before, using metrics.Read.

𝗣 15490 • 𝗖𝗟 690397, 690398, 690399

P.S. If you are into goroutines, check out my interactive book on concurrency

Anton Zhiyanov 6 days ago

Gist of Go: Concurrency testing

This is a chapter from my book on Go concurrency , which teaches the topic from the ground up through interactive examples. Testing concurrent programs is a lot like testing single-task programs. If the code is well-designed, you can test the state of a concurrent program with standard tools like channels, wait groups, and other abstractions built on top of them. But if you've made it so far, you know that concurrency is never that easy. In this chapter, we'll go over common testing problems and the solutions that Go offers. Waiting for goroutines • Checking channels • Checking for leaks • Durable blocking • Instant waiting • Time inside the bubble • Thoughts on time 1  ✎ • Thoughts on time 2  ✎ • Checking for cleanup • Bubble rules • Keep it up

Let's say we want to test a function whose calculations run asynchronously in a separate goroutine. However, the function returns a result channel, so this isn't a problem: at point ⓧ, the test is guaranteed to wait for the inner goroutine to finish. The rest of the test code doesn't need to know anything about how concurrency works inside the function. Overall, the test isn't any more complicated than if the function were synchronous. But we're lucky that the function returns a channel. What if it doesn't? Let's say the function doesn't return anything. We write a simple test and run it: the assertion fails because at point ⓧ, we didn't wait for the inner goroutine to finish. In other words, we didn't synchronize the test goroutine and the inner goroutine. That's why the value still has its initial value (0) when we do the check.

We can add a short delay with time.Sleep. The test is now passing. But using time.Sleep to sync goroutines isn't a great idea, even in tests. We don't want to set a custom delay for every function we're testing. Also, the function's execution time may be different on the local machine compared to a CI server. If we use a longer delay just to be safe, the tests will end up taking too long to run.
Sometimes you can't avoid using time.Sleep in tests, but since Go 1.25, the testing/synctest package has made these cases much less common. Let's see how it works. The package has a lot going on under the hood, but its public API is very simple: the synctest.Test function creates an isolated bubble where you can control time to some extent. Any new goroutines started inside this bubble become part of the bubble. So, if we wrap the test code with synctest.Test, everything will run inside the bubble — the test code, the function we're testing, and its goroutine. At point ⓧ, we want to wait for the goroutine to finish. The synctest.Wait function comes to the rescue! It blocks the calling goroutine until all other goroutines in the bubble are finished. (It's actually a bit more complicated than that, but we'll talk about it later.) In our case, there's only one other goroutine (the inner goroutine), so synctest.Wait will pause until it finishes, and then the test will move on. Now the test passes instantly. That's better!

✎ Exercise: Wait until done
Practice is crucial in turning abstract knowledge into skills, making theory alone insufficient. The full version of the book contains a lot of exercises — that's why I recommend getting it . If you are okay with just theory for now, let's continue.

As we've seen, you can use synctest.Wait to wait for the tested goroutine to finish, and then check the state of the data you are interested in. You can also use it to check the state of channels. Let's say there's a function that generates N numbers like 11, 22, 33, and so on. And a simple test: set N=2, get the first number from the generator's output channel, then get the second number. The test passed, so the function works correctly. But does it really? Let's use the generator in "production": panic! We forgot to close the channel when exiting the inner goroutine, so the for-range loop waiting on that channel got stuck. Let's fix the code and add a test for the channel state. The test is still failing, even though we're now closing the channel when the goroutine exits.
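Here's a rough sketch of the pattern as a test file (requires Go 1.25+; asyncIncr is a made-up stand-in for the function under test, not from the book):

```go
package demo_test

import (
	"testing"
	"testing/synctest"
)

// asyncIncr updates *n in a separate goroutine and returns immediately.
func asyncIncr(n *int) {
	go func() { *n++ }()
}

func TestAsyncIncr(t *testing.T) {
	synctest.Test(t, func(t *testing.T) {
		n := 0
		asyncIncr(&n) // the inner goroutine joins the bubble
		synctest.Wait() // blocks until the inner goroutine finishes
		if n != 1 {
			t.Errorf("got %d, want 1", n)
		}
	})
}
```

Run it with go test; no sleeps, and the wait is exact rather than a guessed delay.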
This is a familiar problem: at point ⓧ, we didn't wait for the inner goroutine to finish. So when we check the channel, it hasn't closed yet. That's why the test fails. We can delay the check using time.Sleep, but it's better to use synctest.Wait. At point ⓧ, synctest.Wait blocks the test until the only other goroutine (the inner goroutine) finishes. Once the goroutine has exited, the channel is already closed. So, in the select statement, the receive case triggers with ok set to false, allowing the test to pass. As you can see, the synctest package helped us avoid delays in the test, and the test itself didn't get much more complicated.

As we've seen, you can use synctest.Wait to wait for the tested goroutine to finish, and then check the state of the data or channels. You can also use it to detect goroutine leaks. Let's say there's a function that runs the given functions concurrently and sends their results to an output channel. And a simple test: send three functions to be executed, get the first result from the output channel, and check it. The test passed, so the function works correctly. But does it really? Let's run the function three times, passing three functions each time: after 50 ms — when all the functions should definitely have finished — there are still 9 running goroutines. In other words, all the goroutines are stuck. The reason is that the output channel is unbuffered. If the client doesn't read from it, or doesn't read all the results, the inner goroutines get blocked when they try to send their results. Let's fix this by adding a buffer of the right size to the channel. Then add a test to check the number of goroutines. The test is still failing, even though the channel is now buffered, and the goroutines shouldn't block on sending to it. This is a familiar problem: at point ⓧ, we didn't wait for the running goroutines to finish. So the goroutine count is greater than zero, which makes the test fail. We can delay the check using time.Sleep (not recommended), or use a third-party package like goleak (a better option): the test passes now.
By the way, goleak also uses time.Sleep internally, but it does so much more efficiently. It tries up to 20 times, with the wait time between checks increasing exponentially, starting at 1 microsecond and going up to 100 milliseconds. This way, the test runs almost instantly. Even better, we can check for leaks without any third-party packages by using synctest.

Earlier, I said that synctest.Wait blocks the calling goroutine until all other goroutines finish. Actually, it's a bit more complicated. synctest.Wait blocks until all other goroutines either finish or become durably blocked . We'll talk about "durably" later. For now, let's focus on "become blocked." Let's temporarily remove the buffer from the channel and check the test results. Here's what happens: next, synctest.Test comes into play. It not only starts the bubble goroutine, but also tries to wait for all child goroutines to finish before it returns. If synctest.Test sees that some goroutines are stuck (in our case, all 9 are blocked trying to send to the channel), it panics: main bubble goroutine has exited but blocked goroutines remain. So, we found the leak without using time.Sleep or goleak, thanks to the useful features of synctest.Wait and synctest.Test. Now let's make the channel buffered and run the test again.

As we've found, synctest.Wait blocks until all goroutines in the bubble — except the one that called it — have either finished or are durably blocked. Let's figure out what "durably blocked" means. For synctest, a goroutine inside a bubble is considered durably blocked if it is blocked by any of the following operations: Other blocking operations are not considered durable, and synctest.Wait ignores them. For example: The distinction between "durable" and other types of blocks is just an implementation detail of the synctest package. It's not a fundamental property of the blocking operations themselves. In real-world applications, this distinction doesn't exist, and "durable" blocks are neither better nor worse than any others. Let's look at an example.
Let's say there's a type that performs some asynchronous computation. Our goal is to write a test that checks the result while the calculation is still running . Let's see how the test changes depending on how the type is implemented (except for one of the versions — we'll cover that one a bit later).

Let's say it is implemented using a done channel. The naive test fails because when the check runs, the inner goroutine hasn't set the result yet. Let's use synctest.Wait to wait until the goroutine is blocked at point ⓧ. In ⓧ, the goroutine is blocked on reading from the channel. This channel is created inside the bubble, so the block is durable. The synctest.Wait call in the test returns as soon as the block happens, and we get the current value of the result.

Let's say it is implemented using select. Again, we use synctest.Wait to wait until the goroutine is blocked at point ⓧ. In ⓧ, the goroutine is blocked on a select statement. Both channels used in the select are created inside the bubble, so the block is durable. The synctest.Wait call in the test returns as soon as the block happens, and we get the current value of the result.

Let's say it is implemented using a wait group. We use synctest.Wait to wait until the goroutine is blocked at point ⓧ. In ⓧ, the goroutine is blocked on the wait group's Wait call. The group's Add method was called inside the bubble, so this is a durable block. The synctest.Wait call in the test returns as soon as the block happens, and we get the current value of the result.

Let's say it is implemented using a condition variable. We use synctest.Wait to wait until the goroutine is blocked at point ⓧ. In ⓧ, the goroutine is blocked on the condition variable's Wait call. This is a durable block. The synctest.Wait call returns as soon as the block happens, and we get the current value of the result.

Let's say it is implemented using a mutex. Let's try using synctest.Wait to wait until the goroutine is blocked at point ⓧ. In ⓧ, the goroutine is blocked on the mutex's Lock call. synctest doesn't consider blocking on a mutex to be durable. The synctest.Wait call ignores the block and never returns. The test hangs and only fails when the overall timeout is reached.
You might be wondering why the synctest authors didn't consider blocking on mutexes to be durable. There are a couple of reasons: ⌘ ⌘ ⌘ Let's go back to the original question: how does the test change depending on how the type is implemented? It doesn't change at all. We used the exact same test code every time. If your program uses durably blocking operations, synctest.Wait always works the same way. Very convenient!

✎ Exercise: Blocking queue
Practice is crucial in turning abstract knowledge into skills, making theory alone insufficient. The full version of the book contains a lot of exercises — that's why I recommend getting it . If you are okay with just theory for now, let's continue.

Inside the bubble, time works differently. Instead of using a regular wall clock, the bubble uses a fake clock that can jump forward to any point in the future. This can be quite handy when testing time-sensitive code. Let's say we want to test a function with a timeout. The positive scenario is straightforward: send a value to the channel, call the function, and check the result. The negative scenario, where the function times out, is also pretty straightforward. But the test takes the full three seconds to complete. We're actually lucky the timeout is only three seconds. It could have been as long as sixty! To make the test run instantly, let's wrap it in synctest.Test. Note that there is no synctest.Wait call here, and the only goroutine in the bubble (the root one) gets durably blocked on a select statement in the tested function. Here's what happens next: thanks to the fake clock, the test runs instantly instead of taking three seconds like it would with the "naive" approach. You might have noticed that quite a few circumstances coincided here. We'll look at the alternatives soon, but first, here's a quick exercise.

✎ Exercise: Wait, repeat
Practice is crucial in turning abstract knowledge into skills, making theory alone insufficient. The full version of the book contains a lot of exercises — that's why I recommend getting it .
If you are okay with just theory for now, let's continue. The fake clock in synctest can be tricky. It moves forward only if: ➊ all goroutines in the bubble are durably blocked; ➋ there's a future moment when at least one goroutine will unblock; and ➌ synctest.Wait isn't running. Let's look at the alternatives. I'll say right away, this isn't an easy topic. But when has time travel ever been easy? :)

Here's the function we're testing: Let's run it in a separate goroutine, so there will be two goroutines in the bubble: synctest.Test panicked because the root bubble goroutine finished while the inner goroutine was still blocked on a select. Reason: synctest only advances the clock if all goroutines are blocked — including the root bubble goroutine. How to fix: use time.Sleep to make sure the root goroutine is also durably blocked. Now all three conditions are met again (all goroutines are durably blocked; the moment of future unblocking is known; there is no call to synctest.Wait). The fake clock moves forward 3 seconds, which unblocks the inner goroutine. That goroutine finishes, leaving only the root one, which is still blocked on time.Sleep. The clock moves forward another 2 seconds, unblocking the root goroutine. The assertion passes, and the test completes successfully.

But if we run the test with the race detector enabled (using the -race flag), it reports a data race on the result variable. Logically, using time.Sleep in the root goroutine doesn't guarantee that the inner goroutine (which writes to the variable) will finish before the root goroutine reads from it. That's why the race detector reports a problem. Technically, the test passes because of how synctest is implemented, but the race still exists in the code. The right way to handle this is to call synctest.Wait after time.Sleep. Calling synctest.Wait ensures that the inner goroutine finishes before the root goroutine reads the variable, so there's no data race anymore.

Here's the function we're testing: Let's replace time.Sleep in the root goroutine with synctest.Wait: synctest.Test panicked because the root bubble goroutine finished while the inner goroutine was still blocked on a select.
Reason: synctest only advances the clock if there is no active synctest.Wait call running. If all bubble goroutines are durably blocked but a synctest.Wait is running, synctest won't advance the clock. Instead, it will simply finish the Wait call and return control to the goroutine that called it (in this case, the root bubble goroutine). How to fix: don't use synctest.Wait here.

Let's update the function to use context cancellation instead of a timer. We won't cancel the context in the test: synctest.Test panicked because all goroutines in the bubble are hopelessly blocked. Reason: synctest only advances the clock if it knows how much to advance it. In this case, there is no future moment that would unblock the select in the function. How to fix: manually unblock the goroutine and call synctest.Wait to wait for it to finish. Now, canceling the context unblocks the select in the function, while synctest.Wait makes sure the goroutine finishes before the test checks the results.

Let's update the function to lock the mutex before doing any calculations. In the test, we'll lock the mutex before calling the function, so it will block: the test failed because it hit the overall test timeout. Reason: synctest only works with durable blocks. Blocking on a mutex lock isn't considered durable, so the bubble can't do anything about it — even though the sleeping inner goroutine would have unlocked the mutex in 10 ms if the bubble had used the wall clock. How to fix: don't use synctest. Now the mutex unlocks after 10 milliseconds (wall clock), the function finishes successfully, and the check passes.

The clock inside the bubble won't move forward if:

✎ Exercise: Asynchronous repeater
Practice is crucial in turning abstract knowledge into skills, making theory alone insufficient. The full version of the book contains a lot of exercises — that's why I recommend getting it . If you are okay with just theory for now, let's continue.

Let's practice understanding time in the bubble with some thinking exercises. Try to solve the problem in your head before using the playground. Here's a function that performs synchronous work, and a test for it. What is the test missing at point ⓧ?
✓ Thoughts on time 1
There's only one goroutine in the test, so when it gets blocked by time.Sleep, the time in the bubble jumps forward by 3 seconds. Then the function sets the result and finishes. Finally, the test checks the result and passes successfully. No need to add anything.

Let's keep practicing our understanding of time in the bubble with some thinking exercises. Try to solve the problem in your head before using the playground. Here's a function that performs asynchronous work, and a test for it. What is the test missing at point ⓧ?

✓ Thoughts on time 2
Let's go over the options.
✘ synctest.Wait: This won't help because synctest.Wait returns as soon as the goroutine blocks on the time.Sleep inside the function. The check fails, and synctest.Test panics with the error: "main bubble goroutine has exited but blocked goroutines remain".
✘ time.Sleep: Because of the time.Sleep call in the root goroutine, the wait inside the function is already over by the time the result is checked. However, there's no guarantee that the assignment has run yet. That's why the test might pass or might fail.
✘ synctest.Wait, then time.Sleep: This option is basically the same as just using time.Sleep, because synctest.Wait returns before the wait inside the function is even over. The test might pass or might fail.
✓ time.Sleep, then synctest.Wait: This is the correct answer.

Without any of these additions, since the root goroutine isn't blocked, it checks the result while the inner goroutine is still blocked by the time.Sleep call. The check fails, and synctest.Test panics with the message: "main bubble goroutine has exited but blocked goroutines remain".

Sometimes you need to test objects that use resources and should be able to release them. For example, this could be a server that, when started, creates a pool of network connections, connects to a database, and writes file caches. When stopped, it should clean all this up. Let's see how we can make sure everything is properly stopped in the tests. We're going to test a server. Let's say we wrote a basic functional test. The test passes, but does that really mean the server stopped when we called its stop method? Not necessarily.
For example, here's a buggy implementation where our test would still pass: as you can see, the author simply forgot to stop the server here. To detect the problem, we can wrap the test in synctest.Test and see it panic. The server ignores the stop call and doesn't stop the goroutine running inside it. Because of this, the goroutine gets blocked while writing to the channel. When synctest.Test finishes, it detects the blocked goroutine and panics. Let's fix the server code (to keep things simple, we won't support multiple start or stop calls). Now the test passes. Here's how it works:

Instead of using defer to stop something, it's common to use the t.Cleanup method. It registers a function that will run when the test finishes. Functions registered with t.Cleanup run in last-in, first-out (LIFO) order, after all deferred functions have executed. In the test above, there's not much difference between using defer and t.Cleanup. But the difference becomes important if we move the server setup into a separate helper function, so we don't have to repeat the setup code in different tests. The defer approach doesn't work because it calls the stop method when the helper returns — before the test assertions run. The t.Cleanup approach works because it calls the stop method when the test has finished — after all the assertions have already run.

Sometimes, a context (context.Context) is used to stop the server instead of a separate stop method. In that case, our server interface might look a bit different. Now we don't even need to use defer or t.Cleanup to check whether the server stops when the context is canceled. Just pass t.Context() as the context: it returns a context that is automatically created when the test starts and is automatically canceled when the test finishes. Here's how it works:

To check for stopping via a method or function, use defer or t.Cleanup. To check for cancellation or stopping via context, use t.Context. Inside a bubble, t.Context returns a context whose Done channel is associated with the bubble. The context is automatically canceled when the test ends. Functions registered with t.Cleanup inside the bubble run just before synctest.Test finishes. Let's go over the rules for living in the bubble.
The following operations durably block a goroutine: The limitations are quite logical, and you probably won't run into them. Don't create channels or objects that contain channels (like tickers or timers) outside the bubble. Otherwise, the bubble won't be able to manage them, and the test will hang. Don't access synchronization primitives associated with a bubble from outside the bubble. Don't call t.Run, t.Parallel, or t.Deadline inside a bubble. There are a few more calls that aren't allowed inside the bubble. Don't call synctest.Wait from outside the bubble. Don't call synctest.Wait concurrently from multiple goroutines.

✎ Exercise: Testing a pipeline
Practice is crucial in turning abstract knowledge into skills, making theory alone insufficient. The full version of the book contains a lot of exercises — that's why I recommend getting it . If you are okay with just theory for now, let's continue.

The synctest package is a complicated beast. But now that you've studied it, you can test concurrent programs no matter what synchronization tools they use — channels, selects, wait groups, timers or tickers. In the next chapter, we'll talk about concurrency internals (coming soon). Pre-order for $10 or read online

Three calls start 9 goroutines. The call to synctest.Wait blocks the root bubble goroutine. One of the goroutines finishes its work, tries to write to the output channel, and gets blocked (because no one is reading from it). The same thing happens to the other 8 goroutines. synctest.Wait sees that all the child goroutines in the bubble are blocked, so it unblocks the root goroutine. The root goroutine finishes. synctest.Wait unblocks as soon as all other goroutines are durably blocked. synctest.Test panics when finished if there are still blocked goroutines left in the bubble. Sending to or receiving from a channel created within the bubble. A select statement where every case is a channel created within the bubble. Waiting on a wait group if all Add calls were made inside the bubble. Sending to or receiving from a channel created outside the bubble. Locking a mutex.
I/O operations (like reading a file from disk or waiting for a network response). System calls and cgo calls. Mutexes are usually used to protect shared state, not to coordinate goroutines (the example above is completely unrealistic). In tests, you usually don't need to pause before locking a mutex to check something. Mutex locks are usually held for a very short time, and mutexes themselves need to be as fast as possible. Adding extra logic to support could slow them down in normal (non-test) situations. It waits until all other goroutines in the bubble are blocked. Then, it unblocks the goroutine that called it. The bubble checks if the goroutine can be unblocked by waiting. In our case, it can — we just need to wait 3 seconds. The bubble's clock instantly jumps forward 3 seconds. The select in chooses the timeout case, and the function returns . The test assertions for and both pass successfully. There's no call. There's only one goroutine. The goroutine is durably blocked. It will be unblocked at certain point in the future. There are any goroutines that aren't durably blocked. It's unclear how much time to advance. is running. Because of the call in the root goroutine, the wait inside in is already over by the time is checked. Because of the call, the goroutine is guaranteed to finish (and hence to call ) before is checked. The main test code runs. Before the test finishes, the deferred is called. In the server goroutine, the case in the select statement triggers, and the goroutine ends. sees that there are no blocked goroutines and finishes without panicking. The main test code runs. Before the test finishes, the context is automatically canceled. The server goroutine stops (as long as the server is implemented correctly and checks for context cancellation). sees that there are no blocked goroutines and finishes without panicking. A bubble is created by calling . Each call creates a separate bubble. Goroutines started inside the bubble become part of it. 
The bubble can only manage durable blocks. Other types of blocks are invisible to it. If all goroutines in the bubble are durably blocked with no way to unblock them (such as by advancing the clock or returning from a call), panics. When finishes, it tries to wait for all child goroutines to complete. However, if even a single goroutine is durably blocked, panics. Calling returns a context whose channel is associated with the bubble. Functions registered with run inside the bubble, immediately before returns. Calling in a bubble blocks the goroutine that called it. returns when all other goroutines in the bubble are durably blocked. returns when all other goroutines in the bubble have finished. The bubble uses a fake clock (starting at 2000-01-01 00:00:00 UTC). Time in the bubble only moves forward if all goroutines are durably blocked. Time advances by the smallest amount needed to unblock at least one goroutine. If the bubble has to choose between moving time forward or returning from a running , it returns from . A blocking send or receive on a channel created within the bubble. A blocking select statement where every case is a channel created within the bubble. Calling if all calls were made inside the bubble.

Anton Zhiyanov 2 weeks ago

Go proposal: Context-aware Dialer methods

Part of the Accepted! series, explaining the upcoming Go changes in simple terms. Add context-aware, network-specific methods to the net.Dialer type. Ver. 1.26 • Stdlib • Low impact

A net.Dialer connects to the address using a given network (protocol) — TCP, UDP, IP, or Unix sockets. The new context-aware, network-specific methods combine the efficiency of the existing network-specific functions (which skip address resolution and dispatch) with the cancellation capabilities of context.Context. The net package already has top-level functions for different networks (DialTCP, DialUDP, DialIP, and DialUnix), but these were made before context.Context was introduced, so they don't support cancellation. On the other hand, the net.Dialer type has a general-purpose DialContext method. It supports cancellation and can be used to connect to any of the known networks. However, if you already know the network type and address, using DialContext is a bit less efficient than the network-specific functions due to:

Address resolution overhead: DialContext handles address resolution internally (like DNS lookups and converting strings to address values) using the network and address strings you provide. Network-specific functions accept a pre-resolved address object, so they skip this step.

Network type dispatch: DialContext must route the call to the protocol-specific dialer. Network-specific functions already know which protocol to use, so they skip this step.

So, network-specific functions in the net package are more efficient, but they don't support cancellation. The net.Dialer type supports cancellation, but it's less efficient. This proposal aims to solve the mismatch by adding context-aware, network-specific methods to the net.Dialer type. Also, adding new methods to the net.Dialer lets you use the newer address types from the netip package (like netip.AddrPort instead of net.TCPAddr), which are preferred in modern Go code. Add four new methods to the net.Dialer. The method signatures are similar to the existing top-level functions, but they also accept a context and use the newer address types from the netip package.
Use the TCP-specific method to connect to a TCP server, and the Unix-socket method to connect to a Unix socket. In both cases, the dialing fails because I didn't bother to start the server in the playground :) 𝗣 49097 • 𝗖𝗟 657296

Anton Zhiyanov 1 month ago

Go proposal: Compare IP subnets

Part of the Accepted! series, explaining the upcoming Go changes in simple terms. Compare IP address prefixes the same way IANA does. Ver. 1.26 • Stdlib • Low impact

An IP address prefix represents an IP subnet. These prefixes are usually written in CIDR notation. In Go, an IP prefix is represented by the netip.Prefix type. The new Compare method lets you compare two IP prefixes, making it easy to sort them without having to write your own comparison code. The imposed order matches both Python's implementation and the assumed order from IANA. When the Go team initially designed the IP subnet type (netip.Prefix), they chose not to add a Compare method because there wasn't a widely accepted way to order these values. Because of this, if a developer needs to sort IP subnets — for example, to organize routing tables or run tests — they have to write their own comparison logic. This results in repetitive and error-prone code. The proposal aims to provide a standard way to compare IP prefixes. This should reduce boilerplate code and help programs sort IP subnets consistently.

Add the Compare method to the netip.Prefix type. Compare orders two prefixes as follows: first by validity (invalid before valid), then by address family (IPv4 before IPv6), then by masked IP address (network IP), then by prefix length, and then by unmasked address (original IP). This follows the same order as Python's ipaddress module and the standard IANA convention . Sort a list of IP prefixes: 𝗣 61642 • 𝗖𝗟 700355

Anton Zhiyanov 1 month ago

High-precision date/time in C

I've created a C library called vaqt that offers data types and functions for handling time and duration, with nanosecond precision. Works with C99 (C11 is recommended on Windows for higher precision). vaqt is a partial port of Go's time package. It works with two types of values: time and duration. Time is a pair (seconds, nanoseconds), where seconds is the 64-bit number of seconds since zero time (0001-01-01 00:00:00 UTC) and nanoseconds is the number of nanoseconds within the current second (0-999999999). Time can represent dates billions of years in the past or future with nanosecond precision. Time is always operated in UTC, but you can convert it from/to a specific timezone. Duration is a 64-bit number of nanoseconds. It can represent values up to about 290 years. vaqt provides functions for common date and time operations: creating time values, extracting time fields, calendar time, time comparison, time arithmetic, formatting, and marshaling. Check the API reference for more details. Here's a basic example of how to use vaqt to work with time: if you work with date and time in C, you might find vaqt useful. See the nalgeon/vaqt repo for all the details.

Anton Zhiyanov 2 months ago

Gist of Go: Atomics

This is a chapter from my book on Go concurrency , which teaches the topic from the ground up through interactive examples. Some concurrent operations don't require explicit synchronization. We can use these to create lock-free types and functions that are safe to use from multiple goroutines. Let's dive into the topic! Non-atomic increment • Atomic operations • Composition • Atomic vs. mutex • Keep it up

Suppose multiple goroutines increment a shared counter: there are 5 goroutines, and each one increments 10,000 times, so the final result should be 50,000. But it's usually less. Let's run the code a few more times: the race detector is reporting a problem. This might seem strange — shouldn't the increment be atomic? Actually, it's not. It involves three steps (read-modify-write): if two goroutines both read the same value, then each increments it and writes it back, one of the increments is lost. As a result, some increments to the counter will be lost, and the final value will be less than 50,000.

As we talked about in the Race conditions chapter, you can make an operation atomic by using mutexes or other synchronization tools. But for this chapter, let's agree not to use them. Here, when I say "atomic operation", I mean an operation that doesn't require the caller to use explicit locks, but is still safe to use in a concurrent environment. An operation without synchronization can only be truly atomic if it translates to a single processor instruction. Such operations don't need locks and won't cause issues when called concurrently (even the write operations). In a perfect world, every operation would be atomic, and we wouldn't have to deal with mutexes. But in reality, there are only a few atomics, and they're all found in the sync/atomic package.
This package provides a set of atomic types: Each atomic type provides the following methods: reads the value of a variable, sets a new value: sets a new value (like ) and returns the old one: sets a new value only if the current value is still what you expect it to be: Numeric types also provide an method that increments the value by the specified amount: And the / methods for bitwise operations (Go 1.23+): All methods are translated to a single CPU instruction, so they are safe for concurrent calls. Strictly speaking, this isn't always true. Not all processors support the full set of concurrent operations, so sometimes more than one instruction is needed. But we don't have to worry about that — Go guarantees the atomicity of operations for the caller. It uses low-level mechanisms specific to each processor architecture to do this. Like other synchronization primitives, each atomic variable has its own internal state. So, you should only pass it as a pointer, not by value, to avoid accidentally copying the state. When using , all loads and stores should use the same concrete type. The following code will cause a panic: Now, let's go back to the counter program: And rewrite it to use an atomic counter: Much better! ✎ Exercise: Atomic counter +1 more Practice is crucial in turning abstract knowledge into skills, making theory alone insufficient. The full version of the book contains a lot of exercises — that's why I recommend getting it . If you are okay with just theory for now, let's continue. An atomic operation in a concurrent program is a great thing. Such operation usually transforms into a single processor instruction, and it does not require locks. You can safely call it from different goroutines and receive a predictable result. But what happens if you combine atomic operations? Let's find out. Let's look at a function that increments a counter: As you already know, isn't safe to call from multiple goroutines because causes a data race. 
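All of those methods in one place, as a quick sketch (the bitwise And/Or methods need Go 1.23+):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// demo exercises the methods of an atomic numeric type
// and returns the final value.
func demo() int64 {
	var n atomic.Int64

	n.Store(10)                      // set a new value
	fmt.Println("load:", n.Load())   // 10
	fmt.Println("swap:", n.Swap(20)) // set 20, returns the old value: 10

	// Set 30 only if the current value is still 20.
	fmt.Println("cas:", n.CompareAndSwap(20, 30)) // true

	n.Add(5)        // 30 + 5 = 35 (binary 100011)
	n.And(0b100000) // bitwise AND (Go 1.23+): keeps only bit 5 → 32
	n.Or(1)         // bitwise OR: 32 | 1 = 33
	return n.Load()
}

func main() {
	fmt.Println("final:", demo()) // 33
}
```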
Now I will try to fix the problem and propose several options. In each case, answer the question: if you call from 100 goroutines, is the final value of the guaranteed? Is the value guaranteed? It is guaranteed. Is the value guaranteed? It's not guaranteed. Is the value guaranteed? It's not guaranteed. People sometimes think that the composition of atomic operations also magically becomes an atomic operation. But it doesn't. For example, the second of the above examples: Call 100 times from different goroutines: Run the program with the flag — there are no races: But can we be sure what the final value of will be? Nope. and calls are interleaved from different goroutines. This causes a race condition (not to be confused with a data race) and leads to an unpredictable value. Check yourself by answering the question: in which example is an atomic operation? In none of them. In all examples, is not an atomic operation. The composition of atomics is always non-atomic. The first example, however, guarantees the final value of the in a concurrent environment: If we run 100 goroutines, the will ultimately equal 200. The reason is that is a sequence-independent operation. The runtime can perform such operations in any order, and the result will not change. The second and third examples use sequence-dependent operations. When we run 100 goroutines, the order of operations is different each time. Therefore, the result is also different. A bulletproof way to make a composite operation atomic and prevent race conditions is to use a mutex: But sometimes an atomic variable with is all you need. Let's look at an example. ✎ Exercise: Concurrent-safe stack Practice is crucial in turning abstract knowledge into skills, making theory alone insufficient. The full version of the book contains a lot of exercises — that's why I recommend getting it . If you are okay with just theory for now, let's continue. 
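Before moving on, here's the bulletproof mutex version sketched out. The composite operation here (a hypothetical double increment, to stand in for the chapter's example) always ends at 200 with 100 goroutines:

```go
package main

import (
	"fmt"
	"sync"
)

// Counter makes a composite read-modify-write operation atomic
// by holding a mutex for the whole sequence.
type Counter struct {
	mu sync.Mutex
	n  int
}

// IncTwice performs two dependent increments as one atomic step.
// Without the mutex, steps from different goroutines could
// interleave and make the final value unpredictable.
func (c *Counter) IncTwice() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.n++
	c.n++
}

func (c *Counter) Value() int {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.n
}

// run calls IncTwice from 100 goroutines.
func run() int {
	var c Counter
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.IncTwice()
		}()
	}
	wg.Wait()
	return c.Value()
}

func main() {
	fmt.Println(run()) // always 200
}
```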
Let's say we have a gate that needs to be closed: In a concurrent environment, there are data races on the field. We can fix this with a mutex: Alternatively, we can use on an atomic instead of a mutex: The type is now more compact and simple. This isn't a very common use case — we usually want a goroutine to wait on a locked mutex and continue once it's unlocked. But for "early exit" situations, it's perfect. Atomics are a specialized but useful tool. You can use them for simple counters and flags, but be very careful when using them for more complex operations. You can also use them instead of mutexes to exit early. In the next chapter, we'll talk about testing concurrent code (coming soon). Pre-order for $10   or read online Read the current value of . Add one to it. Write the new value back to . — a boolean value; / — a 4- or 8-byte integer; / — a 4- or 8-byte unsigned integer; — a value of type; — a pointer to a value of type (generic).
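The early-exit gate boils down to a single CompareAndSwap. A minimal sketch (the type shape here is illustrative):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Gate can be closed exactly once, no matter how many
// goroutines race to close it.
type Gate struct {
	closed atomic.Bool
}

// Close flips closed from false to true atomically.
// It returns true only for the single caller that
// actually closed the gate.
func (g *Gate) Close() bool {
	return g.closed.CompareAndSwap(false, true)
}

func (g *Gate) IsClosed() bool {
	return g.closed.Load()
}

func main() {
	var g Gate
	fmt.Println(g.Close())    // true: this call closed the gate
	fmt.Println(g.Close())    // false: already closed
	fmt.Println(g.IsClosed()) // true
}
```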

0 views
Anton Zhiyanov 2 months ago

Go proposal: Hashers

Part of the Accepted! series, explaining the upcoming Go changes in simple terms. Provide a consistent approach to hashing and equality checks in custom data structures. Ver. 1.26 • Stdlib • Medium impact The new interface is the standard way to hash and compare elements in custom collections: The type is the default hasher implementation for comparable types, like numbers, strings, and structs with comparable fields. The package offers hash functions for byte slices and strings, but it doesn't provide any guidance on how to create custom hash-based data structures. The proposal aims to improve this by introducing hasher — a standardized interface for hashing and comparing the members of a collection, along with a default implementation. Add the hasher interface to the package: Along with the default hasher implementation for comparable types: Here's a case-insensitive string hasher: And a generic that uses a pluggable hasher for custom equality and hashing: The helper method uses the hasher to compute the hash of a value: This hash is used in the and methods. It acts as a key in the bucket map to find the right bucket for a value. checks if the value exists in the corresponding bucket: adds a value to the corresponding bucket: Now we can create a case-insensitive string set: Or a regular string set using : 𝗣 70471 • 𝗖𝗟 657296 (in progress)
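To give a feel for the design, here's an illustrative sketch of a set with a pluggable case-insensitive hasher. Note that the interface below is my approximation of the idea, not the accepted API — see proposal 70471 for the real signatures:

```go
package main

import (
	"fmt"
	"strings"
)

// Hasher is an illustrative stand-in for the proposed interface:
// it knows how to hash a value and compare two values for equality.
type Hasher[T any] interface {
	Hash(v T) uint64
	Equal(a, b T) bool
}

// FoldHasher hashes and compares strings case-insensitively.
type FoldHasher struct{}

func (FoldHasher) Hash(s string) uint64 {
	// FNV-1a over the lowercased string.
	var h uint64 = 14695981039346656037
	for _, b := range []byte(strings.ToLower(s)) {
		h ^= uint64(b)
		h *= 1099511628211
	}
	return h
}

func (FoldHasher) Equal(a, b string) bool {
	return strings.EqualFold(a, b)
}

// Set buckets values by hash and resolves collisions with Equal.
type Set[T any] struct {
	hasher  Hasher[T]
	buckets map[uint64][]T
}

func NewSet[T any](h Hasher[T]) *Set[T] {
	return &Set[T]{hasher: h, buckets: map[uint64][]T{}}
}

func (s *Set[T]) Add(v T) {
	if s.Contains(v) {
		return
	}
	h := s.hasher.Hash(v)
	s.buckets[h] = append(s.buckets[h], v)
}

func (s *Set[T]) Contains(v T) bool {
	for _, item := range s.buckets[s.hasher.Hash(v)] {
		if s.hasher.Equal(item, v) {
			return true
		}
	}
	return false
}

func main() {
	set := NewSet[string](FoldHasher{})
	set.Add("Alice")
	fmt.Println(set.Contains("ALICE")) // true
	fmt.Println(set.Contains("Bob"))   // false
}
```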

2 views
Anton Zhiyanov 2 months ago

Write the damn code

Here's some popular programming advice these days: Learn to decompose problems into smaller chunks, be specific about what you want, pick the right AI model for the task, and iterate on your prompts. Don't do this. I mean, "learn to decompose the problem" — sure. "Iterate on your prompts" — not so much. Write the actual code instead: You probably see the pattern now. Get involved with the code, don't leave it all to AI. If, given the prompt, AI does the job perfectly on the first or second iteration — fine. Otherwise, stop refining the prompt. Go write some code, then get back to the AI. You'll get much better results. Don't get me wrong: this is not anti-AI advice. Use it, by all means. Use it a lot if you want to. But don't fall into the trap of endless back-and-forth prompt refinement, trying to get the perfect result from AI by "programming in English". It's an imprecise, slow, and terribly painful way to get things done. Get your hands dirty. Write the code. It's what you are good at. You are a software engineer. Don't become a prompt refiner.

1 view
Anton Zhiyanov 2 months ago

Go is #2 among newer languages

I checked out several programming language rankings. If you only include newer languages (version 1.0 released after 2010), the top 6 are: ➀ TypeScript, ➁ Go, ➂ Rust, ➃ Kotlin, ➄ Dart, and ➅ Swift. Sources: IEEE, Stack Overflow, Languish. I'm not using TIOBE because their method has major flaws. TypeScript's position is very strong, of course (I guess no one likes JavaScript these days). And it's great to see that more and more developers are choosing Go for the backend. Also, Rust scores very close in all rankings except IEEE, so we'll see what happens in the coming years.

0 views
Anton Zhiyanov 2 months ago

Go proposal: new(expr)

Part of the Accepted! series, explaining the upcoming Go changes in simple terms. Allow the built-in to be called on expressions. Ver. 1.26 • Language • High impact Previously, you could only use the built-in with types: Now you can also use it with expressions: If the argument is an expression of type T, then allocates a variable of type T, initializes it to the value of , and returns its address, a value of type . There's an easy way to create a pointer to a composite literal: But no easy way to create a pointer to a value of simple type: The proposal aims to fix this. Update the Allocation section of the language specification as follows: The built-in function creates a new, initialized variable and returns a pointer to it. It accepts a single argument, which may be either an expression or a type. ➀ If the argument is an expression of type T, or an untyped constant expression whose default type is T, then allocates a variable of type T, initializes it to the value of , and returns its address, a value of type . ➁ If the argument is a type T, then allocates a variable initialized to the zero value of type T. For example, and each return a pointer to a new variable of type int. The value of the first variable is 123, and the value of the second is 0. ➀ is the new part, ➁ already worked as described. Pointer to a simple type: Pointer to a composite value: Pointer to the result of a function call: Passing is still not allowed: 𝗣 45624 • 𝗖𝗟 704935 , 704737 , 704955 , 705157
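Until 1.26 arrives, the usual workaround is a tiny generic helper (commonly named ptr; the name is just convention):

```go
package main

import "fmt"

// ptr returns a pointer to a copy of v — the pre-1.26 way
// to get a pointer to a value of a simple type.
func ptr[T any](v T) *T {
	return &v
}

func main() {
	p := ptr(123)       // *int pointing at 123
	s := ptr("hello")   // *string
	fmt.Println(*p, *s) // 123 hello
}
```

With `new(expr)`, the helper becomes unnecessary: `new(123)` does the same thing.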

0 views
Anton Zhiyanov 2 months ago

Accepted! Go proposals distilled

I'm launching a new Go-related series named Accepted! For each accepted proposal, I'll write a one-page summary that explains the change in simple terms. This should (hopefully) be the easiest way to keep up with upcoming changes without having to read through 2,364 comments on Go's GitHub repo. Here's a sneak peek: The plan is to publish the already accepted proposals from the upcoming 1.26 release, and then publish new ones as they get accepted. I'll probably skip the minor ones, but we'll see. Stay tuned!

0 views
Anton Zhiyanov 2 months ago

Native threading and multiprocessing in Go

As you probably know, the only way to run tasks concurrently in Go is by using goroutines. But what if we bypass the runtime and run tasks directly on OS threads or even processes? I decided to give it a try. To safely manage threads and processes in Go, I'd normally need to modify Go's internals. But since this is just a research project, I chose to (ab)use cgo and syscalls instead. That's how I created multi — a small package that explores unconventional ways to handle concurrency in Go. Features • Goroutines • Threads • Processes • Benchmarks • Final thoughts Multi offers three types of "concurrent groups". Each one has an API similar to , but they work very differently under the hood: runs Go functions in goroutines that are locked to OS threads. Each function executes in its own goroutine. Safe to use in production, although unnecessary, because the regular non-locked goroutines work just fine. runs Go functions in separate OS threads using POSIX threads. Each function executes in its own thread. This implementation bypasses Go's runtime thread management. Calling Go code from threads not created by the Go runtime can lead to issues with garbage collection, signal handling, and the scheduler. Not meant for production use. runs Go functions in separate OS processes. Each function executes in its own process forked from the main one. This implementation uses process forking, which is not supported by the Go runtime and can cause undefined behavior, especially in programs with multiple goroutines or complex state. Not meant for production use. All groups offer an API similar to . Runs Go functions in goroutines that are locked to OS threads. starts a regular goroutine for each call, and assigns it to its own thread. Here's a simplified implementation: goro/thread.go You can use channels and other standard concurrency tools inside the functions managed by the group. Runs Go functions in separate OS threads using POSIX threads. 
creates a native OS thread for each call. It uses cgo to start and join threads. Here is a simplified implementation: pthread/thread.go You can use channels and other standard concurrency tools inside the functions managed by the group. Runs Go functions in separate OS processes forked from the main one. forks the main process for each call. It uses syscalls to fork processes and wait for them to finish. Here is a simplified implementation: proc/process.go You can only use to exchange data between processes, since regular Go channels and other concurrency tools don't work across process boundaries. Running some CPU-bound workload (with no allocations or I/O) on Apple M1 gives these results: And here are the results from GitHub actions: One execution here means a group of 4 workers each doing 10 million iterations of generating random numbers and adding them up. See the benchmark code for details. As you can see, the default concurrency model ( in the results, using standard goroutine scheduling without meddling with threads or processes) works just fine and doesn't add any noticeable overhead. You probably already knew that, but it's always good to double-check, right? I don't think anyone will find these concurrent groups useful in real-world situations, but it's still interesting to look at possible (even if flawed) implementations and compare them to Go's default (and only) concurrency model. Check out the nalgeon/multi repo for the implementation. P.S. Want to learn more about concurrency? Check out my interactive book
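As a rough idea of the first (goroutine-locked) flavor, here's a minimal thread-locked group built on runtime.LockOSThread — my own simplified sketch, not the package's actual code:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// Group runs each function in its own goroutine, locked to a
// dedicated OS thread for the function's entire lifetime.
type Group struct {
	wg sync.WaitGroup
}

func (g *Group) Go(fn func()) {
	g.wg.Add(1)
	go func() {
		defer g.wg.Done()
		runtime.LockOSThread() // pin this goroutine to its thread
		defer runtime.UnlockOSThread()
		fn()
	}()
}

func (g *Group) Wait() {
	g.wg.Wait()
}

// run sums results produced by two thread-locked workers.
func run() int {
	var g Group
	results := make(chan int, 2)
	g.Go(func() { results <- 10 })
	g.Go(func() { results <- 20 })
	g.Wait()
	close(results)
	sum := 0
	for v := range results {
		sum += v
	}
	return sum
}

func main() {
	fmt.Println(run()) // 30
}
```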

0 views
Anton Zhiyanov 3 months ago

Building blocks for idiomatic Go pipelines

I've created a Go package called chans that offers generic channel operations to make it easier to build concurrent pipelines. It aims to be flexible, unopinionated, and composable, without over-abstracting or taking control away from the developer. Here's a toy example: Now let's go over the features. Filter sends values from the input channel to the output if a predicate returns true. ■ □ ■ □ → ■ ■ Map reads values from the input channel, applies a function, and sends the result to the output. ■ ■ ■ → ● ● ● Reduce combines all values from the input channel into one using a function and returns the result. ■ ■ ■ ■ → ∑ FilterOut ignores values from the input channel if a predicate returns true, otherwise sends them to the output. ■ □ ■ □ → □ □ Drop skips the first N values from the input channel and sends the rest to the output. ➊ ➋ ➌ ➍ → ➌ ➍ DropWhile skips values from the input channel as long as a predicate returns true, then sends the rest to the output. ■ ■ ▲ ● → ▲ ● Take sends up to N values from the input channel to the output. ➊ ➋ ➌ ➍ → ➊ ➋ TakeNth sends every Nth value from the input channel to the output. ➊ ➋ ➌ ➍ → ➊ ➌ TakeWhile sends values from the input channel to the output while a predicate returns true. ■ ■ ▲ ● → ■ ■ First returns the first value from the input channel that matches a predicate. ■ ■ ▲ ● → ▲ Chunk groups values from the input channel into fixed-size slices and sends them to the output. ■ ■ ■ ■ ■ → ■ ■ │ ■ ■ │ ■ ChunkBy groups consecutive values from the input channel into slices whenever the key function's result changes. ■ ■ ● ● ▲ → ■ ■ │ ● ● │ ▲ Flatten reads slices from the input channel and sends their elements to the output in order. ■ ■ │ ■ ■ │ ■ → ■ ■ ■ ■ ■ Compact sends values from the input channel to the output, skipping consecutive duplicates. ■ ■ ● ● ■ → ■ ● ■ CompactBy sends values from the input channel to the output, skipping consecutive duplicates as determined by a custom equality function. 
■ ■ ● ● ■ eq→ ■ ● ■ Distinct sends values from the input channel to the output, skipping all duplicates. ■ ■ ● ● ■ → ■ ● DistinctBy sends values from the input channel to the output, skipping duplicates as determined by a key function. ■ ■ ● ● ■ key→ ■ ● Broadcast sends every value from the input channel to all output channels. ➊ ➋ ➌ ➍ ↓ ➊ ➋ ➌ ➍ ➊ ➋ ➌ ➍ Split sends values from the input channel to output channels in round-robin fashion. ➊ ➋ ➌ ➍ ↓ ➊ ➌ ➋ ➍ Partition sends values from the input channel to one of two outputs based on a predicate. ■ □ ■ □ ↓ ■ ■ □ □ Merge concurrently sends values from multiple input channels to the output, with no guaranteed order. ■ ■ ■ ● ● ● ↓ ● ● ■ ■ ■ ● Concat sends values from multiple input channels to the output, processing each input channel in order. ■ ■ ■ ● ● ● ↓ ■ ■ ■ ● ● ● Drain consumes and discards all values from the input channel. ■ ■ ■ ■ → ∅ I think third-party concurrency packages are often too opinionated and try to hide too much complexity. As a result, they end up being inflexible and don't fit a lot of use cases. For example, here's how you use the function from the rill package: The code looks simple, but it's pretty opinionated and not very flexible: While this approach works for many developers, I personally don't like it. With , my goal was to offer a fairly low-level set of composable channel operations and let developers decide how to use them. For comparison, here's how you use the function: only implements the core mapping logic: You decide the rest: The same principles apply to other channel operations. Let's say we want to calculate the total balance of VIP user accounts: Here's how we can do it using . First, use to get the accounts from the database: Next, use to select only the VIP accounts: Next, use to calculate the total balance: Finally, check for errors and return the result: If you're building concurrent pipelines in Go, you might find useful. See the nalgeon/chans repo if you are interested.
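To make the philosophy concrete, here's roughly what such low-level operations boil down to. The signatures below are illustrative, not necessarily the package's actual API — the point is that the caller creates the channels and starts the goroutines:

```go
package main

import "fmt"

// Filter sends values from in to out if pred returns true,
// then closes out.
func Filter[T any](in <-chan T, out chan<- T, pred func(T) bool) {
	defer close(out)
	for v := range in {
		if pred(v) {
			out <- v
		}
	}
}

// Map applies fn to each value from in and sends the result to out.
func Map[T, U any](in <-chan T, out chan<- U, fn func(T) U) {
	defer close(out)
	for v := range in {
		out <- fn(v)
	}
}

// Reduce folds all values from in into one using fn.
func Reduce[T, U any](in <-chan T, init U, fn func(U, T) U) U {
	acc := init
	for v := range in {
		acc = fn(acc, v)
	}
	return acc
}

// run wires a pipeline: 1..5 → keep evens → square → sum.
func run() int {
	nums := make(chan int)
	evens := make(chan int)
	squares := make(chan int)

	go func() {
		defer close(nums)
		for i := 1; i <= 5; i++ {
			nums <- i
		}
	}()
	go Filter(nums, evens, func(n int) bool { return n%2 == 0 })
	go Map(evens, squares, func(n int) int { return n * n })

	return Reduce(squares, 0, func(acc, n int) int { return acc + n })
}

func main() {
	fmt.Println(run()) // 2*2 + 4*4 = 20
}
```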
P.S. Want to learn more about concurrency? Check out my interactive book

0 views
Anton Zhiyanov 4 months ago

Gist of Go: Signaling

This is a chapter from my book on Go concurrency , which teaches the topic from the ground up through interactive examples. The main way goroutines communicate in Go is through channels. But channels aren't the only way for goroutines to signal each other. Let's try a different approach! Signaling • One-time subscription • Broadcasting • Broadcasting w/channels • Publish/subscribe • Run once • Once-functions • Object pool • Keep it up Let's say we have a goroutine that generates a random number between 1 and 100: And the second one checks if the number is lucky or not: The second goroutine will only work correctly if the first one has already set the number. So, we need to find a way to synchronize them. For example, we can make a channel: But what if we want to be a regular number, and channels are not an option? We can make the generator goroutine signal when a number is ready, and have the checker goroutine wait for that signal. In Go, we can do this using a condition variable , which is implemented with the type. A has a mutex inside it: A has two methods — and : If there are multiple waiting goroutines when is called, only one of them will be resumed. If there are no waiting goroutines, does nothing. To see why needs to go through all this mutex trouble, check out this example: Both goroutines use the shared variable, so we need to protect it with a mutex. The checker goroutine starts by locking the mutex ➌. If the generator hasn't run yet (meaning is 0), the goroutine calls ➍ and blocks. If only blocked the goroutine, the mutex would stay locked, and the generator couldn't change ➊. That's why unlocks the mutex before blocking. The generator goroutine also starts by locking the mutex ➊. After setting the value, the generator calls ➋ to let the checker know it's ready, and then unlocks the mutex. Now, if resumed ➍ did nothing, the checker goroutine would continue running. But the mutex would stay unlocked, so working with wouldn't be safe. 
That's why locks the mutex again after receiving the signal. In theory, everything should work. Here's the output: Everything seems fine, but there's a subtle bug. When the checker goroutine wakes up after receiving a signal, the mutex is unlocked for a brief moment before locks it again. Theoretically, in that short time, another goroutine could sneak in and set to 0. The checker goroutine wouldn't notice this and would keep running, even though it's supposed to wait if is zero. That's why, in practice, is always called inside a for loop, not inside an if statement. Not like this: But like this (note that the condition is the same as in the if statement): In most cases, this for loop will work just like an if statement: But if another goroutine intervenes between ➊ and ➋ and sets to zero, the goroutine will notice this at ➌ and go back to waiting. This way, it will never keep running when is zero — which is exactly what we want. Here's the complete example: Like other synchronization primitives, a condition variable has its own internal state. So, you should only pass it as a pointer, not by value. Even better, don't pass it at all — wrap it inside a type instead. We'll do this in the next step. ✎ Exercise: Blocking queue Practice is crucial in turning abstract knowledge into skills, making theory alone insufficient. The full version of the book contains a lot of exercises — that's why I recommend getting it . If you are okay with just theory for now, let's continue. Let's go back to the lucky numbers example: Let's refactor the code and create a type with and methods: Here's the implementation: Example usage: Note that this is a one-time signaling, not a long-term subscription. Once a subscriber goroutine receives the generated number, it is no longer subscribed to . We'll look at an example of a long-term subscription later in the chapter. Everything works, but there's still a problem. 
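Here's a condensed, runnable version of the generator/checker pair with Wait in a for loop (42 stands in for the random number):

```go
package main

import (
	"fmt"
	"sync"
)

// waitForNumber starts a checker goroutine that waits, with
// Wait in a for loop, until the generator sets the number.
func waitForNumber() int {
	var mu sync.Mutex
	cond := sync.NewCond(&mu)
	number := 0 // shared state protected by mu
	got := make(chan int, 1)

	// Checker: waits until the number is set.
	go func() {
		cond.L.Lock()
		for number == 0 { // a for loop, never a plain if
			cond.Wait() // unlocks mu, blocks, relocks on wakeup
		}
		got <- number
		cond.L.Unlock()
	}()

	// Generator: sets the number and signals, all under the lock.
	cond.L.Lock()
	number = 42
	cond.Signal()
	cond.L.Unlock()

	return <-got
}

func main() {
	fmt.Println(waitForNumber()) // 42
}
```

Note that the order of the two goroutines doesn't matter: if the generator runs first, the checker sees a non-zero number and skips Wait entirely.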
If you call from N goroutines, you can set up N subscribers, but only notifies one of them. We'll figure out how to notify all of them in the next step. Notifying all subscribers instead of just one is called broadcasting . To do this in , we only need to change one line in the method: The method wakes up one goroutine that's waiting on , while the method wakes up all goroutines waiting on . This is exactly what we need. Here's a usage example: A typical way to use a condition variable looks like this: There is a publisher goroutine and one or more subscriber goroutines. They all use some shared state protected by a condition variable . The publisher goroutine changes the shared state and notifies either one subscriber ( ) or all subscribers ( ): Note that this is a one-time notification, not a long-term subscription. Once a subscriber goroutine receives the signal, it is no longer subscribed to the publisher. We'll look at an example of a long-term subscription later in the chapter. ✎ Exercise: Barrier using a condition variable Practice is crucial in turning abstract knowledge into skills, making theory alone insufficient. The full version of the book contains a lot of exercises — that's why I recommend getting it . If you are okay with just theory for now, let's continue. As we discussed, it's easy to implement signaling using a channel: This approach only works with one receiver. If we subscribe multiple goroutines to the channel, only one of them will get the generated number. For broadcasting, we can just close the channel: However, we can only broadcast the fact that the state has changed (the channel is closed), not the actual state (the value of ). So, we still need to protect the state with a mutex. This goes against the idea of using channels to pass data between goroutines. Also, we can't send a second broadcast notification because a channel can only be closed once. 
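Broadcasting by closing a channel looks like this — a minimal sketch where 42 stands in for the generated number and three subscribers wake up at once:

```go
package main

import (
	"fmt"
	"sync"
)

// run broadcasts a one-time "ready" event to three subscribers
// by closing a channel. Closing wakes up every waiting
// goroutine at once — but it only works once.
func run() int {
	ready := make(chan struct{})
	number := 0 // written exactly once, before close(ready)

	results := make(chan int, 3)
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			<-ready // blocks until the channel is closed
			results <- number
		}()
	}

	number = 42
	close(ready) // broadcast: all receivers unblock
	wg.Wait()
	close(results)

	sum := 0
	for v := range results {
		sum += v
	}
	return sum
}

func main() {
	fmt.Println(run()) // 3 × 42 = 126
}
```

The write to number happens before close(ready), and close happens before each receive completes, so the reads are safe here without a mutex.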
We have two problems with our "broadcast by closing a channel" approach: Let's solve both problems and create a simple publish/subscribe system: Here's the type: The method adds a subscriber and returns the channel where random numbers will be sent: Note that we use as a mutex here to protect access to the shared list of subscription channels. Without it, concurrent calls would cause a data race on . Alternatively, we can use a regular channel slice and protect it with a mutex: The publishing method generates a number and sends it to each subscriber: Our implementation drops the message if the subscriber hasn't processed the previous one. So, always works quickly and doesn't block, but slower subscribers might miss some data. Alternatively, we can use a blocking without select to make sure everyone gets all the data, but this means the whole system will only run as fast as the slowest subscriber. The method terminates all subscriptions: Here's an example with three subscribers. Each one gets three random numbers: That's it for signals and broadcasting! Now let's look at a couple more tools from the package. Let's say we have a currency converter: Exchange rates are loaded from an external API, so we decided to fetch them lazily the first time is called: Unfortunately, this creates a data race on when used in a concurrent environment: We could protect the field with a mutex. Or we could use the type. It guarantees that a function called with runs only once: makes sure that the given function runs only once. If multiple goroutines call at the same time, only one will run the function, while the others will wait until it returns. This way, all calls to are guaranteed to proceed only after the map has been filled. is perfect for one-time initialization or cleanup in a concurrent environment. No need to worry about data races! Besides the type, the package also includes three convenience once-functions that you might find useful.
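The converter with a sync.Once guard might look like this — a sketch where the rates are made up and a counter shows the load really runs only once:

```go
package main

import (
	"fmt"
	"sync"
)

// Converter lazily loads exchange rates exactly once,
// even under concurrent Convert calls.
type Converter struct {
	once  sync.Once
	rates map[string]float64
	loads int // how many times loadRates actually ran
}

func (c *Converter) loadRates() {
	c.loads++ // safe: once.Do runs this in a single goroutine
	c.rates = map[string]float64{"EUR": 0.92, "GBP": 0.79}
}

// Convert returns the amount in the target currency.
// once.Do blocks concurrent callers until the first load
// finishes, so reading c.rates afterwards is safe.
func (c *Converter) Convert(amount float64, currency string) float64 {
	c.once.Do(c.loadRates)
	return amount * c.rates[currency]
}

func main() {
	var c Converter
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.Convert(100, "EUR")
		}()
	}
	wg.Wait()
	fmt.Println(c.loads) // 1: loadRates ran only once
}
```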
Let's say we have the function that returns a random number: And the function sets the variable to a random number: It's clear that calling more than once will cause a panic (I'm keeping it simple and not using goroutines here): We can fix this by wrapping in . It returns a function that makes sure the code runs only once: wraps a function that returns a single value (like our ). The first time you call the function, it runs and calculates a value. After that, every time you call it, it just returns the same value from the first call: does the same thing for a function that returns two values: Here are the signatures of all the once-functions side by side for clarity: The functions , , and are shortcuts for common ways to use the type. You can use them if they fit your situation, or use directly if they don't. ✎ Exercise: Guess the average Practice is crucial in turning abstract knowledge into skills, making theory alone insufficient. The full version of the book contains a lot of exercises — that's why I recommend getting it . If you are okay with just theory for now, let's continue. The last tool we'll cover is . It helps reuse memory instead of allocating it every time, which reduces the load on the garbage collector. Let's say we have a program that: It looks something like this: If we run the benchmark: Here's what we'll see: Since we're allocating a new buffer on each loop iteration, we end up with 4000 memory allocations, using a total of 4 MB of memory. Even though the garbage collector eventually frees all this memory, it's quite inefficient. Ideally, we should only need 4 buffers instead of 4000 — one for each goroutine. That's where comes in handy: ➋ takes an item from the pool. If there are no available items, it creates a new one using ➊ (which we have to define ourselves, since the pool doesn't know anything about the items it creates). ➌ returns an item back to the pool. 
When the first goroutine calls during the first iteration, the pool is empty, so it creates a new buffer using . In the same way, the other goroutines create three more buffers. These four buffers are enough for the whole program. Let's benchmark: The difference in memory usage is clear. Thanks to the pool, the number of allocations has dropped by two orders of magnitude. As a result, the program uses less memory and puts minimal pressure on the garbage collector. Things to keep in mind: is a pretty niche tool that isn't used very often. However, if your program works with temporary objects that can be reused (like in our example), it might come in handy. We've covered some of the lesser-known tools in the package — condition variables ( ), one-time execution ( ), and pools ( ): Don't use these tools just because you know they exist. Rely on common sense. In the next chapter, we'll talk about atomics (coming soon). Pre-order for $10   or read online
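Here's the buffer-pool pattern in miniature, as a sketch along the same lines as the example above:

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

var bufPool = sync.Pool{
	// New creates a buffer when the pool is empty.
	New: func() any {
		return new(bytes.Buffer)
	},
}

// process borrows a buffer from the pool, uses it,
// and returns it when done.
func process(s string) string {
	buf := bufPool.Get().(*bytes.Buffer) // take a buffer from the pool
	defer bufPool.Put(buf)               // return it when done
	buf.Reset()                          // clear leftover state before reuse
	buf.WriteString("processed: ")
	buf.WriteString(s)
	return buf.String()
}

func main() {
	fmt.Println(process("a")) // processed: a
	fmt.Println(process("b")) // processed: b
}
```

Note the Reset before reuse: items come back from the pool in whatever state they were left in.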

0 views
Anton Zhiyanov 4 months ago

Expressive tests without testify/assert

Many Go programmers prefer using if-free test assertions to make their tests shorter and easier to read. So, instead of writing if statements with : They would use (or its evil twin, ): However, I don't think you need and its 40 different assertion functions to keep your tests clean. Here's an alternative approach. The testify package also provides mocks and test suite helpers. We won't talk about these — just about assertions. Equality • Errors • Other assertions • Source code • Final thoughts The most common type of test assertion is checking for equality: Let's write a basic generic assertion helper: We have to use a helper function, because the compiler doesn't allow us to compare a typed value with an untyped : Now let's use the assertion in our test: The parameter order in is (got, want), not (want, got) like it is in testify. It just feels more natural — saying "her name is Alice" instead of "Alice is her name". Also, unlike testify, our assertion doesn't support custom error messages. When a test fails, you'll end up checking the code anyway, so why bother? The default error message shows what's different, and the line number points to the rest. is already good enough for all equality checks, which probably make up to 70% of your test assertions. Not bad for a 20-line testify alternative! But we can make it a little better, so let's not miss this chance. First, types like and have an method. We should use this method to make sure the comparison is accurate: Second, we can make comparing byte slices faster by using : Finally, let's call from our function: And test it on some values: Works like a charm! Errors are everywhere in Go, so checking for them is an important part of testing: Error checks probably make up to 30% of your test assertions, so let's create a separate function for them. 
First we cover the basic cases — expecting no error and expecting an error: Usually we don't fail the test when an assertion fails, to see all the errors at once instead of hunting them one by one. The "unexpected error" case (want nil, got non-nil) is the only exception: the test terminates immediately because any following assertions probably won't make sense and could cause panics. Let's see how the assertion works: So far, so good. Now let's cover the rest of error checking without introducing separate functions (ErrorIs, ErrorAs, ErrorContains, etc.) like testify does. If is an error, we'll use to check if the error matches the expected value: Usage example: If is a string, we'll check that the error message contains the expected substring: Usage example: Finally, if is a type, we'll use to check if the error matches the expected type: Usage example: One last thing: doesn't make it easy to check if there was some (non-nil) error without asserting its type or value (like in testify). Let's fix this by making the parameter optional: Usage example: Now handles all the cases we need: And it's still under 40 lines of code. Not bad, right? and probably handle 85-95% of test assertions in a typical Go project. But there's still that tricky 5-15% left. We may need to check for conditions like these: Technically, we can use . But it looks a bit ugly: So let's introduce the third and final assertion function — . It's the simplest one of all: Now these assertions look better: Here's the full annotated source code for , and : Less than 120 lines of code! I don't think we need forty assertion functions to test Go apps. Three (or even two) are enough, as long as they correctly check for equality and handle different error cases. I find the "assertion trio" — Equal, Err, and True — quite useful in practice. That's why I extracted it into the github.com/nalgeon/be mini-package. If you like the approach described in this article, give it a try!
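For reference, the heart of such an equal helper fits in a few lines. This is a simplified sketch: reflect.DeepEqual stands in for the fuller comparison logic described above, and the tiny tb interface lets the sketch run outside a test binary:

```go
package main

import (
	"fmt"
	"reflect"
)

// tb is the subset of *testing.T that the helper needs.
type tb interface {
	Errorf(format string, args ...any)
}

// equal fails the test if got != want. Note the (got, want)
// parameter order and the lack of a custom message.
func equal[V any](t tb, got, want V) {
	if !reflect.DeepEqual(got, want) {
		t.Errorf("got %v, want %v", got, want)
	}
}

// recorder is a fake tb for the demo: it collects failures
// instead of reporting them to the test runner.
type recorder struct{ errors []string }

func (r *recorder) Errorf(format string, args ...any) {
	r.errors = append(r.errors, fmt.Sprintf(format, args...))
}

func main() {
	var r recorder
	equal(&r, 2+2, 4)          // passes
	equal(&r, "ab", "abc")     // fails
	fmt.Println(len(r.errors)) // 1
	fmt.Println(r.errors[0])   // got ab, want abc
}
```

In a real test you'd pass *testing.T directly, since it already has Errorf.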

Anton Zhiyanov 4 months ago

Redka: Redis re-implemented with SQL

I'm a big fan of Redis. It's such an amazing idea to go beyond the get-set paradigm and provide a convenient API for more complex data structures: maps, sets, lists, streams, bloom filters, etc. I'm also a big fan of relational databases and their universal language, SQL. They've really stood the test of time and have proven to solve a wide range of problems from the 1970s to today. So, naturally, one day I decided to combine the two and reimplement Redis using a relational backend — first SQLite, then Postgres. That's how Redka was born.

About Redka • Use cases • Usage example • Performance • Final thoughts

Redka is written in Go. It comes in two flavors: a standalone Redis-compatible server and an embeddable Go package. Redka currently supports five core Redis data types: strings, lists, sets, hashes, and sorted sets. It can use either SQLite or PostgreSQL as its backend, stores data in a database with a simple schema, and provides views for better introspection.

Here are some situations where Redka might be helpful:

Embedded cache for Go applications. If your Go app already uses SQLite or just needs a built-in key-value store, Redka is a natural fit. It gives you Redis-like features without the hassle of running a separate server. You're not limited to just get/set with expiration, of course — more advanced structures like lists, maps, and sets are also available.

Lightweight testing environment. Your app uses Redis in production, but setting up a Redis server for local development or integration tests can be a hassle. Redka with an in-memory database offers a fast alternative to test containers, providing full isolation for each test run.

Postgres-first data structures. If you prefer to use PostgreSQL for everything but need Redis-like data structures, Redka can use your existing database as the backend. This way, you can manage both relational data and specialized data structures with the same tools and transactional guarantees.
You can run the Redka server the same way you run Redis, then connect with redis-cli or any Redis client for your programming language. You can also use Redka as a Go package without the server. All data is stored in the database, so you can access it using SQL views.

Redka is not about raw performance. You can't beat a specialized data store like Redis with a general-purpose relational backend like SQLite. However, Redka can still handle tens of thousands of operations per second, which should be more than enough for many apps. The redis-benchmark results for 1,000,000 GET/SET operations on 10,000 randomized keys bear this out for both the SQLite and the PostgreSQL backends.

Redka for SQLite has been around for over a year, and I recently released a new version that also supports Postgres. If you like the idea of Redis with an SQL backend — feel free to try Redka in testing or (non-critical) production scenarios. See the nalgeon/redka repo for more details.
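Since the Redka server speaks the Redis wire protocol, any client works, even a hand-rolled one. As an illustration of what a client does under the hood, here's a stdlib-only sketch that encodes commands in RESP (an array of bulk strings) and sends SET/GET over TCP. The localhost:6379 address is an assumption based on the Redis default port.

```go
package main

import (
	"bufio"
	"fmt"
	"net"
	"strings"
)

// encodeCommand serializes a command in RESP format:
// "*<argc>\r\n" followed by each argument as a bulk string
// "$<len>\r\n<arg>\r\n".
func encodeCommand(args ...string) []byte {
	var b strings.Builder
	fmt.Fprintf(&b, "*%d\r\n", len(args))
	for _, arg := range args {
		fmt.Fprintf(&b, "$%d\r\n%s\r\n", len(arg), arg)
	}
	return []byte(b.String())
}

func main() {
	// Assumes a Redka (or Redis) server on the default port.
	conn, err := net.Dial("tcp", "localhost:6379")
	if err != nil {
		fmt.Println("connect:", err)
		return
	}
	defer conn.Close()
	r := bufio.NewReader(conn)

	conn.Write(encodeCommand("SET", "name", "alice"))
	line, _ := r.ReadString('\n')
	fmt.Println(strings.TrimSpace(line)) // simple-string reply, e.g. +OK

	conn.Write(encodeCommand("GET", "name"))
	r.ReadString('\n') // bulk-string header, e.g. $5
	val, _ := r.ReadString('\n')
	fmt.Println(strings.TrimSpace(val)) // the stored value
}
```

In practice you'd use a real client library instead, but the exchange above is exactly what those libraries send and receive.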
