Latest Posts (11 found)

Theorems for Free Redux

A reader recently got in touch with me regarding my 2017 blog post Review: Theorems for Free . He had some questions about the paper/my review, and upon revisiting it, I realized that I had no idea how the paper worked anymore. So I decided to rehash my understanding, and came up with something much conceptually clearer about what is happening and why. A quick summary of Theorems for Free : For any polymorphic type, we can generate a law that must hold for any value of that type. One of the examples given is for the function , which states that —namely, that doesn’t change the length of the list. Theorems for Free gives a roundabout and obtuse set of rules for computing these free theorems. But, as usual, the clarity of the idea is obscured by the encoding details. The actual idea is this: Parametrically-polymorphic functions can’t branch on the specific types they are instantiated at. Because of this fact, functions must behave the same way, regardless of the type arguments passed to them. So all of the free theorems have the form “replacing the type variables before calling the function is the same as replacing the type variables after calling the function.” What does it mean to replace a type variable? Well, if we want to replace a type variable with , we will generate a fresh function , and then stick it wherever we need to. For example, given the function , we generate the free theorem: or, for the function , we get: This scheme also works for functions in multiple type parameters. Given the function , we must replace both and , giving the free theorem: In the special case where there are no type parameters, we don’t need to do anything. This is what’s happening in the example given in the introduction. Simple stuff, right? The obfuscation in the paper comes from the actual technique given to figure out where to apply these type substitutions. The paper is not fully general here, in that it only gives rules for the and type constructors (if I recall correctly.) These rules are further obscured in that they inline the definitions of , rather than writing directly. 1 But for types in one variable, is exactly the function that performs type substitution. Perhaps this paper predates typeclasses? Very possible. ↩︎
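As a concrete instance of the shape described above: for any function of type [a] -> [a], parametricity guarantees that mapping a function over the list before the call agrees with mapping it after. A checkable sketch (my own rendering, not the post’s original code), using reverse as the example:

    -- Free theorem for any f :: [a] -> [a]:
    --   map g . f = f . map g
    freeTheoremRev :: Eq b => (a -> b) -> [a] -> Bool
    freeTheoremRev g xs = map g (reverse xs) == reverse (map g xs)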


Analyzing API Design via Algebraic Laws

The other day, someone asked: Why doesn’t [the Data.Map function] allow for different value types the way does? This is a very reasonable question, and it led down an interesting rabbit hole at the intersection of API design and efficient implementation. To answer the original question, what would the type of a version of with a different value type look like? It would be something in the flavor of: But this new parameter is somewhat lossy, in that it gives the impression that it could be called with as parameters, which doesn’t fit into the vibe of being a “union.” So instead we could restrict that possibility by using : which seems reasonable enough. But let’s take reasonableness out of the picture and start again from first principles. Instead let’s ask ourselves the deep philosophical question of what even IS a map? A is a particularly efficient implementation of functions with type . But why is this here? It’s really only to encode the “default” value of performing a lookup. Nothing goes wrong if we generalize this to be . In fact, it helps us make sense of the right bias present in , where we see: This equality is hard to justify under the normal understanding of being an encoding of a function . But under the general monoid interpretation, we get a nice semigroup homomorphism: where the monoid in question has been specialized to be . Of course, we also have a monoid homomorphism: Let’s re-evaluate the original question in terms of this newly-generalized . Now that we’ve removed all of the unnecessary baggage of , we can again think about the desired type of : which looks awfully familiar . This new type signature automatically resolves our original concerns about “what should we do if the key isn’t present?”—just call the function with as a parameter! We can give some semantics as to what ought to do again by relating it to the observation . The relevant law here seems like it ought to be: By choosing a degenerate function , say, , where is some value that is not , we can see the beginnings of a problem: Regardless of the key we look up in our ed , we need to get back . How can we implement such a thing? I see only two ways: #1 is clearly a non-starter, given that we want our s to be efficient encodings of functions, which leaves us with only #2. This is actually a pretty common construction, which stems immediately from the fact that a pair of monoids is itself a monoid. The construction would look something like this: Seems fine, right? The nail in the coffin comes from when we reintroduce our semigroup homomorphism: Without loss of generality, take (where is just with a constant function.) This gives us: Making this thing efficient is a further complication! We again have two options: #1 clearly requires \(O(n)\) work, which again forces us to look at #2. But #2 seems very challenging, because the monoidal values we need to suspend need not span the entire . For example, consider a constructed a la: Representing this thing efficiently certainly isn’t impossible, but you’re not going to be able to do it on the balanced binary search trees that underlie the implementation of . I find this quite an interesting result. I always assumed that (or at least, ) didn’t have an instance because it would require a constraint on its output—but that’s not the sort of thing we can express in Haskell. But the analysis above says that’s not actually the reason! It’s that there can be no efficient implementation of , even if we could constrain the result. 
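For reference, the two Data.Map signatures being contrasted are shown below in comments; the generalization after them is a hypothetical sketch of the design discussed above, using the These type from the these package to rule out the nonsensical “neither value is present” case:

    import Data.Map (Map)
    import qualified Data.Map as M
    import Data.These (These (..))  -- from the 'these' package

    -- The actual Data.Map signatures under discussion:
    --   unionWith        :: Ord k => (a -> a -> a) -> Map k a -> Map k a -> Map k a
    --   intersectionWith :: Ord k => (a -> b -> c) -> Map k a -> Map k b -> Map k c

    -- A hypothetical generalization of unionWith to differing value types:
    unionWithThese :: Ord k => (These a b -> c) -> Map k a -> Map k b -> Map k c
    unionWithThese f m n = M.unions
      [ M.map (f . This) (M.difference m n)             -- keys only in m
      , M.map (f . That) (M.difference n m)             -- keys only in n
      , M.intersectionWith (\a b -> f (These a b)) m n  -- keys in both
      ]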
What I find so cool about this style of analysis is that we didn’t actually write any code, nor did we peek into the implementation of (except to know that it’s implemented as a balanced BST.) All we did was look at the obvious laws, instantiate them with degenerate inputs, and think about what would be required to efficiently get the right answer.


Using Obscure Graph Theory to solve PL Problems

Usually I write about solutions to problems I’ve worked out, but I’ve found myself increasingly becoming interested in where solutions come from. Maybe it’s because I’ve been reading Boorstin’s excellent The Discoverers , which I’d strongly recommend. Regardless of why, I thought I’d switch up the usual dance step today, and discuss what solving my most-recent-big-problem actually looked like, in terms of what I tried, where I looked, and what the timeline was. The problem is to serialize a program graph into a series of let-bindings. For example, given the following graph: which represents the program: Unfortunately, this is a naive representation of the program, since it duplicates the work required to compute four times, and twice. Instead, we would prefer to generate the equivalent-but-more-efficient program: This transformation is affectionately known as sharing , since it shares the computed answer whenever there is repeated work to be done. So this is what we’re trying to do. Given the original graph, determine the best place to insert these let-bindings, for some reasonable definition of “best.” We can assume there are no side effects involved, so any place that an expression is well-scoped is an acceptable solution. In order to understand some of my attempted solutions, it’s worth noting that our final solution should build something of type , and the original graph is represented as a . is the functor of , with all of its self-references replaced by some type variable, in this case . Thus, the graph above looks much more like: I spent over a year trying to solve this problem, with various mostly-working solutions during that time. My strategy here was to think really hard, write up some algorithm that seemed plausible, and then run it against our (small) battery of integration tests to make sure it got the same answer as before. Why not property test it? I tried, but found it very challenging to implement well-typed generators that would reliably introduce shared thunks. But maybe there’s a different lesson to be learned here about writing good generators. Anyway. For eight months, one of these think-really-hard algorithms fit the bill and didn’t give us any problems. It was a weird, bespoke solution to the problem that independently kept track of all of the free variables in every graph fragment, and tried to let-bind a fragment as soon as we landed in a context where all of the free variables were in scope. It seemed to work, but it was extremely messy and unmaintainable. At the time of writing, this sharing algorithm was the only source of let-binds in our entire language, which meant that it didn’t need to account for let-binds in the program. Of course, that invariant eventually changed. We added a way in the source language to introduce s, which meant my algorithm was wrong. And I had written it sufficiently long ago that I no longer remembered exactly why it worked. Which meant the theory of my program was lost, and thus that we ought to rewrite it. I went back to the problem statement, and stared at it for a long time (back to the think-really-hard algorithm!) Upon staring at the problem, I realized that what I was really trying to do was determine where diamond patterns arose in the program graph. 
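To make the shapes involved concrete, here is a hedged sketch of the sort of representation described earlier (the constructors and names are invented; the real types differ):

    {-# LANGUAGE DeriveFunctor #-}

    import qualified Data.Map as M

    -- 'ExprF' is the base functor of the expression type: every
    -- self-reference has been replaced by the type variable 'r'.
    data ExprF r
      = Lit Int
      | Add r r
      | Mul r r
      deriving (Functor, Show)

    newtype NodeId = NodeId Int
      deriving (Eq, Ord, Show)

    -- The program graph: each node names its immediate subterms,
    -- so shared subterms are represented exactly once.
    type ProgramGraph = M.Map NodeId (ExprF NodeId)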
Recall our original graph: If we redraw it such that is on a different rank than , then the two diamond patterns become much clearer: The insight I came up with is that if a node is the source of a diamond, then we must let-bind the sink of the diamond immediately before inlining the definition of . This gives rise to the question of “how do we identify a diamond?” What we can do is give a mapping from each node to its reachable set of nodes. For example, in the above, we’d compute the map: Then when we go to inline a node, say, , we can look for any nodes that are reachable via more than one of its immediate subterms. Since the immediate subterms of are and , we can take the intersections of their reachable sets: which is exactly the set of nodes that we need to perform sharing on. If you topologically sort this set, it gives you the order that you should perform your let-bindings. EXCEPT there’s a kink in the whole thing. What happens if one of the terms in this diamond contains free variables? In particular, we might have something like this: This gives us an analogous set of reachable nodes when we look at , but we obviously can’t lift above the lambda. Resolving this problem required giving up on the notion of memoizing the entire reachable set of nodes, and instead crawling the graph ensuring that everything is well-scoped. My algorithm looked fine, and, importantly, got the right answer in a reasonable amount of time on our (small) battery of integration tests. So I shipped it, commended myself on a job well done, and thought nothing more about it. For about a week, until a bug report came in saying that our compiler now seemed to hang on big programs. Which was something I hadn’t noticed, since we didn’t have any big programs in our integration tests. Upon digging in to what exactly was so slow, I noticed that my algorithm was accidentally quadratic . I needed to fold over every node in the graph, and that required looking at the entire reachable set underneath it. I had put in some of the obvious safeguards, hoping that they would prune the search tree early, but it wasn’t enough sacrifice for the Great God of Asymptotes. Did I mention that at this point in the story, having this algorithm working fast was on the critical path of the company? Everybody else was blocked on me figuring this out. Talk about pressure! Anyway. You’ll notice above that in my description of the algorithm, everything sounds fine. But the juice is in the details, as the common saying goes. Computing reachability isn’t quite the right thing to be using here, as it gave us the wrong answer for the lambda example above. Which is unfortunate because reachability is something we can do in linear time. And then when reachability didn’t work, I just threw away the fast performance and hoped my bespoke algorithm would do the job. My only redemption comes from the fact that at least it got the right answer, even if it did so very slowly. Back to the drawing board. Whenever I have graph theory problems, I call up my boy Vikrem. He’s good at nerd stuff like this. We rubberducked the problem, and tried to reframe the problem in the language of graph theory. We had a Merkiv–Maguire moment where we independently realized that the goal was somehow related to finding the lowest common ancestor (LCA) of a node. Which is to say, roughly, that we are looking for forks in the diamond diagram. Which we already knew, but it was nice to have some language for. Our new problem is that LCA is defined only over trees. 
There are some extensions to DAGs, but none of them seem to be particularly well founded. However, searching for exactly that brought me to this stackoverflow question , where nestled in the comments is someone suggesting that the poster isn’t looking for LCA, but instead for a related notion: the lowest single common ancestor. LSCA is defined in a 2010 paper New common ancestor problems in trees and directed acyclic graphs . The standard definition of is that “ is an ancestor of and of , and that no descendent of has this property.” But the definition of is that “ lies on all root-to- paths, and that lies on all root-to- paths, and that no descendent of has this property.” The distinction between the two is easily seen in the following graph: Under the standard definition, LCA is not uniquely defined for DAGs. That is, . But neither 1 nor 2 lies on all paths from the root. Under LSCA therefore we get , which is the obviously-correct place to let-bind 3 and 4. The paper gives a preprocessing scheme for computing LSCA by building a “lowest single ancestor” (LSA) tree. The LSA of a node is the LSCA of all of its in-edges. This definition cashes out to mean “the most immediate diamond above any node.” Finally! This is exactly what we’re looking for, since this is where we must insert our let-bindings! Even better, the paper gives us an algorithm for computing the LSA tree in linear time! Of course, I’m lazy and would prefer not to implement this thing. So instead I searched on hackage for , and found nothing. But then I searched for and found that, like always, Ed Kmett was 13 years ahead of me. The package implements an \(O(\log n)\) algorithm for computing the LCA of any two nodes in a graph. Which is very convenient for me, since the LSCA algorithm requires being able to do this. Time to roll up the sleeves and get cracking I suppose. The paper was surprisingly straightforward, and my first attempt implemented the (imperative) algorithms as given (imperatively.) The first step is to do a topological sort on the DAG in order to know in which order one ought to unfold the LSA tree. But as is so often the case, this topological sort isn’t actually relevant to the algorithm; it’s just an encoding detail of expressing the algorithm imperatively. But you don’t need that when you’ve got laziness on your side! Instead you can just tie the knot and do something cool like this: Notice how we use to bind the eventual result of the final computation. Then we can chase pointers by looking them up in —even though it’s not yet “computed.” Who cares what order the computer does it in. Why is that a thing I should need to specify? Anyway. The exact details of implementing LSA are not particularly important for the remainder of this blog post. If you’re interested, you can peep the PR, which is delightfully small . Equipped with my LSA tree, I was now ready to go back and solve the original problem of figuring out where to stick let-bindings. It’s easy now. Given the original program graph, find the LSA for each node. The LSA is the place you should insert the let-binding. So given the map of nodes to their LSAs, invert that map and get back a map of nodes to descendents who have this node as an LSA. Now when you go to inline a node, just look up everything in this map and inline it first. It turns out to be a very elegant solution. It’s one third of the length of my horrible ad-hoc implementations, and it runs in linear time of the number of nodes in the graph. All in all, very good. 
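Since the extraction above lost the code, here is a small self-contained illustration of the knot-tying trick (my own reconstruction, not the PR’s code): we look up values in the very map we are defining, and laziness supplies the evaluation order that a topological sort would otherwise provide.

    {-# LANGUAGE LambdaCase #-}

    import qualified Data.Map as M

    -- Compute each node's depth in a tree, given its parent (if any).
    -- 'result' refers to itself: laziness chases the parent pointers in
    -- whatever order is required, with no explicit topological sort.
    depths :: Ord k => M.Map k (Maybe k) -> M.Map k Int
    depths parents = result
      where
        result = flip M.map parents $ \case
          Nothing     -> 0
          Just parent -> result M.! parent + 1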
More often than I’m comfortable about, people will ask me how I can have so many good ideas. And what I like about this story is that it’s pretty typical of how I actually “have” “good” ideas. I’m reminded of the fact that luck favors the prepared mind . Attentive readers will notice that none of this process was due to brilliance on my part. I happened to know Vikrem who’s a genius. Together we pulled at some ancient graph theory strings and remembered a fact that someone else had thought important to teach us. That wasn’t actually the right path, but it led us to stackoverflow where someone had linked to a relevant paper. I implemented the paper using a library that someone else had done the heavy lifting on, and simplified the implementation using this knot-tying trick I picked up somewhere along the way. Also, I’m just really pleased that the solution came from trying to reverse engineer the relevant graph-theory search terms. Maybe that’s the actual takeaway here.


Bidirectional Instance Contexts

Just a quick one today, but I wanted to point out a little trick you can do with Haskell’s typeclass inference. Imagine we have some little class, the details of which matter not in the least: We can give some instances of this type: Regular, everyday stuff. But the instances for type constructors are more interesting, because they come with an instance context: Then, of course, if we know both and , we can infer . To make this fact overwhelmingly explicit, we can reify the usual constraint-solving logic by using the type, and thus the following program will typecheck: Perhaps tipped off by the name, the gentle reader is asked to notice the asymmetry here, since the converse program will not typecheck: But why should it not typecheck? 1 Recall from the relevant instance definition that these instances must, in fact, exist: As a testament to just how good GHC is, we can support this bidirectionality via a minor tweak to the definition of class and its instances. The trick is to add an associated type family to , and to use it as a superclass constraint: Because we’ve given a default implementation of the type family, our existing simple instances work as before: with the only change required coming from the type constructor instances: or, if you want to be cute about it: By sticking into the superclass constraint, GHC knows that this dictionary is always available when you’ve got a dictionary around. And our earlier program now typechecks as expected. This is all available in a play session if you’d like to fool around with it. Rhetorical question. I don’t want to hear about orphans or overlapping instances or whatever. ↩︎
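For concreteness, here is a self-contained reconstruction of the trick (the class name, its methods, and the instances are my own inventions; the shape of the thing is what matters):

    {-# LANGUAGE ConstraintKinds #-}
    {-# LANGUAGE FlexibleContexts #-}
    {-# LANGUAGE GADTs #-}
    {-# LANGUAGE TypeFamilies #-}
    {-# LANGUAGE UndecidableSuperClasses #-}

    import Data.Kind (Constraint)

    data Dict (c :: Constraint) where
      Dict :: c => Dict c

    -- A class whose superclass is an associated constraint family,
    -- defaulting to the trivial constraint for simple instances.
    class Evidence a => Demo a where
      type Evidence a :: Constraint
      type Evidence a = ()

    instance Demo Int
    instance Demo Bool

    -- The pair instance records its own context in the family.
    instance (Demo a, Demo b) => Demo (a, b) where
      type Evidence (a, b) = (Demo a, Demo b)

    -- Forwards: ordinary constraint solving, which always worked.
    forwards :: (Demo a, Demo b) => Dict (Demo (a, b))
    forwards = Dict

    -- Backwards: now typechecks, because the superclass of Demo (a, b)
    -- is exactly (Demo a, Demo b).
    backwards :: Demo (a, b) => Dict (Demo a, Demo b)
    backwards = Dict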


Use Monoids for Construction

There’s a common anti-pattern I see in beginner-to-intermediate Haskell programmers that I wanted to discuss today. It’s the tendency to conceptualize the creation of an object by repeated mutation. Often this takes the form of repeated insertion into an empty container, but comes up under many other guises as well. This anti-pattern isn’t particularly surprising in its prevalence; after all, if you’ve got the usual imperative brainworms, this is just how things get built. The Gang of Four “builder pattern” is exactly this; you can build an empty object, and setters on such a thing change the state but return the object itself. Thus, you build things by chaining together setter methods: Even if you don’t subscribe to the whole OOP design principle thing, you’re still astronomically likely to think about building data structures like this: To be more concrete, maybe instead of doodads and widgets you have s and s. Or dictionaries and key-value pairs. Or graphs and edges. Anywhere you look, you’ll probably find examples of this sort of code. Maybe you’re thinking to yourself “I’m a hairy-chested functional programmer and I scoff at patterns like these.” That might be true, but perhaps you too are guilty of writing code that looks like: Just because it’s dressed up with functional combinators doesn’t mean you’re not still writing C code. To my eye, the great promise of functional programming is its potential for conceptual clarity, and repeated mutation will always fall short of the mark. The complaint, as usual, is that repeated mutation tells you how to build something, rather than focusing on what it is you’re building. An algorithm cannot be correct in the absence of intention—after all, you must know what you’re trying to accomplish in order to know if you succeeded. What these builder patterns, for loops, and s all have in common is that they are algorithms: strategies for building something. But you’ll notice none of them come with comments. And therefore we can only ever guess at what the original author intended, based on the context of the code we’re looking at. I’m sure this all sounds like splitting hairs, but that’s because the examples so far have been extremely simple. But what about this one? which I found by grepping through for , and then mangled to remove the suggestive variable names. What does this one do? Based solely on the type we can presume it’s using that function to partition the list somehow. But how? And is it correct? We’ll never know—and the function doesn’t even come with any tests! The shift in perspective necessary here is to reconceptualize building-by-repeated-mutation as building-by-combining. Rather than chiseling out the object you want, instead find a way of gluing it together from simple, obviously-correct pieces. The notion of “combining together” should evoke in you a cozy warm fuzzy feeling. Much like being in a secret pillow fort. You must come to be one with the monoid. Once you have come to embrace monoids, you will have found inner programming happiness. Monoids are a sacred, safe place, at the fantastic intersection of “overwhelmingly powerful” and yet “hard to get wrong.” As an amazingly fast recap, a monoid is a collection of three things: some type , some value of that type , and a binary operation over that type , subject to a bunch of laws: which is to say, does nothing and doesn’t care where you stick the parentheses. 
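Spelled out explicitly (my rendering; these are the standard laws):

    -- The monoid laws, as a checkable property:
    monoidLaws :: (Monoid m, Eq m) => m -> m -> m -> Bool
    monoidLaws a b c = and
      [ mempty <> a == a                -- left identity:  mempty does nothing
      , a <> mempty == a                -- right identity: mempty does nothing
      , (a <> b) <> c == a <> (b <> c)  -- associativity:  parens don't matter
      ]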
If you’re going to memorize any two particular examples of monoids, it had better be these two: The first says that lists form a monoid under the empty list and concatenation. The second says that products preserve monoids. The list monoid instance is responsible for the semantics of the ordered, “sequency” data structures. That is, if I have some sequential flavor of data structure, its monoid instance should probably satisfy the equation . Sequency data structures are things like lists, vectors, queues, deques, that sort of thing. Data structures where, when you combine them, you assume there is no overlap. The second monoid instance here, over products, is responsible for pretty much all the other data structures. The first thing we can do with it is remember that functions are just really, really big product types, with one “slot” for every value in the domain. We can show an isomorphism between pairs and functions out of booleans, for example: and under this isomorphism, we should thereby expect the instance to agree with . If you generalize this out, you get the following instance: which combines values in the codomain monoidally. We can show the equivalence between this monoid instance and our original product preservation: which is a little proof that our function monoid agrees with the preservation-of-products monoid. The same argument works for any type in the domain of the function, but showing it generically is challenging. Anyway, I digress. The reason to memorize this instance is that it’s the monoid instance that every data structure is trying to be. Recall that almost all data structures are merely different encodings of functions, designed to make some operations more efficient than they would otherwise be. Don’t believe me? A is an encoding of the function optimized to efficiently query which values map to something. That is to say, it’s a sparse representation of a function. What does all of this look like in practice? Stuff like worrying about is surely programming-in-the-small, which is worth knowing, but isn’t the sort of thing that turns the tides of a successful application. The reason I’ve been harping on about the function and product monoids is that they are compositional. The uninformed programmer will be surprised by just how far one can get by composing these things. At work, we need to reduce a tree (+ nonlocal references) into an honest-to-goodness graph. While we’re doing it, we need to collect certain nodes. And the tree has a few constructors which semantically change the scope of their subtrees, so we need to preserve that information as well. It’s actually quite the exercise to sketch out an algorithm that will accomplish all of these goals when you’re thinking about explicit mutation. Our initial attempts at implementing this were clumsy. We’d fold the tree into a graph, adding fake nodes for the constructors. Then we’d filter all the nodes in the graph, trying to find the ones we needed to collect. Then we’d do a graph traversal from the root, trying to find these nodes, and propagating their information downstream. Rather amazingly, this implementation kinda sorta worked! But it was slow, and took \(O(10k)\) SLOC to implement. The insight here is that everything we needed to collect was monoidal: where the stanza gives us the semigroup and monoid instances that we’d expect from being the product of a bunch of other monoids. And now for the coup de grace : we hook everything up with the monad. 
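Concretely, the hookup with Writer might look something like this (a minimal sketch; Acc and the helper names are hypothetical stand-ins for our real monoidal pieces):

    import Control.Monad.Writer (Writer, tell)
    import qualified Data.Set as S

    -- A product of monoids is a monoid, so the whole accumulator is one too.
    data Acc = Acc
      { accEdges     :: [(Int, Int)]  -- graph edges discovered so far
      , accCollected :: S.Set Int     -- the nodes we need to collect
      }

    instance Semigroup Acc where
      Acc e1 c1 <> Acc e2 c2 = Acc (e1 <> e2) (c1 <> c2)

    instance Monoid Acc where
      mempty = Acc mempty mempty

    -- Each helper contributes one little piece of the final solution.
    addEdge :: Int -> Int -> Writer Acc ()
    addEdge from to = tell $ Acc [(from, to)] mempty

    collectNode :: Int -> Writer Acc ()
    collectNode n = tell $ Acc mempty (S.singleton n)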
is a chronically slept-on type, because most people seem to think it’s useful only for logging, and underwhelming at doing logging compared to a real logger type. But the charm is in the details: is a monad whenever is a monoid , which makes it the perfect monad for solving data-structure-creation problems like the one we’ve got in mind. Such a thing gives rise to a few helper functions: each of which is responsible for adding a little piece to the final solution. Our algorithm is thus a function of the type: which traverses the , recursing with a different whenever it comes across a constructor, and calling our helper functions as it goes. At each step of the way, the only thing it needs to return is the root of the section of the graph it just built, which recursive calls can use to break up the problem into inductive pieces. This new implementation is roughly 20x smaller, coming in at \(O(500)\) SLOC, and was free of all the bugs we’d been diligently trying to squash under the previous implementation. Chalk it down to another win for induction!


A New Perspective on Lenses

I’ve always considered lenses to be a bit uncomfortable. While they’re occasionally useful for doing deeply nested record updates, they often seem to be more trouble than they’re worth. There’s a temptation in the novice programmer to and their way to a solution that is much more naturally written merely as . And don’t get me started about the stateful operators like and their friends. Many programs which can be more naturally written functionally accidentally end up being imperative due to somebody finding a weird lens combinator and trying to use it in anger. Much like a serious drug collection, the tendency is to push it as far as you can. Thus, my response has usually been one of pushback and moderation. I don’t avoid lenses at all costs, but I do try to limit myself to the prime types ( , , ), and to the boring combinators ( , , ). I feel like these give me most of the benefits of lenses, without sending me tumbling down the rabbit hole. All of this is to say that my grokkage of lenses has always been one of generalized injections and projections , for a rather shallow definition of “generalized”. That is, I’ve grown accustomed to thinking about lenses as getter/setter pairs for data structures—e.g., I’ve got a big product type and I want to pull a smaller piece out of it, or modify a smaller piece in a larger structure. I think about prisms as the dual structure over coproducts—“generalized” injecting and pattern matching. And this is all true; but I’ve been missing the forest for the trees on this one. That’s not to say that I want to write lensier code, but that I should be taking the “generalized” part much more seriously. The big theme of my intellectual development over the last few years has been thinking about abstractions as shared vocabularies. Monoids are not inherently interesting; they’re interesting because of how they let you quotient seemingly-unrelated problems by their monoidal structure. Applicatives are cool because once you’ve grokked them, you begin to see them everywhere. Anywhere you’ve got conceptually-parallel, data-independent computations, you’ve got an applicative lurking somewhere under the surface (even if it happens to be merely the applicative.) I’ve had a similar insight about lenses, and that’s what I wanted to write about today. At work, I’ve been thinking a lot about compilers and memory layout lately. I won’t get into the specifics of why, but we can come up with an inspired example. Imagine we’d like to use Haskell to write a little eDSL that we will use to generate x86 machine code. The trick, of course, is that we’re writing Haskell in order to not write machine code. So the goal is to design high-level combinators in Haskell that express our intent, while simultaneously generating machine code that faithfully implements the intention. One particularly desirable feature about eDSLs is that they allow us to reuse Haskell’s type system. Thus, imagine we have some type: Notice that the parameter here is entirely phantom; it serves only to annotate the type of the value produced by executing . For today’s purpose, we’ll ignore all the details about calling conventions and register layout and what not; let’s just assume a corresponds to a computation that leaves a value (or pointer) to something of type in a well-known place, whether that be the top of the stack, or or something. It doesn’t matter! Since the type parameter to is phantom, we need to think about what role it should have. 
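As a hedged reconstruction of the sort of type being described (the representation is invented purely for illustration):

    -- The payload is the machine code we are accumulating; the type
    -- parameter is phantom, annotating the value the code produces.
    newtype Code a = Code { codeAsm :: [String] }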
Keeping it at would be disastrous, since this type isn’t used by Haskell , but it is certainly used to ensure our program is correct. Similarly, seems wrong, since is meaningful only when thinking about Haskell, which this thing decidedly is not. Thus, our only other option is: Frustratingly, due to very similar reasoning, cannot be a functor, because there’s no way 1 to lift an arbitrary Haskell function into a corresponding function . If there were, we’d be in the clear! But alas, we are not. All of the above is to say that we are reusing Haskell’s type system , but not its values . An expression of type has absolutely no relation to the values or —except that we could write, by hand, a function which happened to do the right thing. It is tempting, however, to make new Haskell types in order to help constrain the assembly code we end up writing. For example, maybe we want to write a DSP for efficiently decoding audio. We can use Haskell’s types to organize our thoughts and prevent ourselves from making any stupid mistakes: We now have a nice interface in our eDSL to guide end-users along the blessed path of signal decoding. We have documented what we are trying to do, and how it can be used once it’s implemented. But due to our phantom, yet , parameter to , this is all just make believe. There is absolutely no correlation between what we’ve written down and how we can use it. The problem arises when we go to implement . We’ll need to know what state we’re in, which means we’ll need some function: In a world where is a functor, this is implemented trivially as . But is not a functor! Alas! Woe! What ever can we do? Lenses, my guy! Recall that is phantom in its argument, even if we use roles to restrict that fact. This means we can implement a safe-ish version of , that only fiddles with the parameter of our phantom type: Judicious use of allows us to switch between a value’s type and its in-memory representation. For example, given a type: we can reinterpret a as a sequence of bytes: which says we are considering our to be laid out in memory like: Of course, this is a completely unsafe transformation, as far as the Haskell type system is aware. We’re in the wild west out here, well past any type theoretical life buoys. We’d better be right that this coercion is sound. But assuming this is in fact the in-memory representation of a , we are well justified in this transformation. Notice the phrasing of our above. It is not an iso between and , but between s of such things. This witnesses the fact that it is not true in the Haskell embedding, merely in our domain. Of course, isos are like the least exciting optics, so let’s see what other neat things we can do. Imagine we have some primitives: which we can envision as Haskell bindings to the pseudo-C functions: We can use and to give a into : and finally, we can give an implementation of the desired above: Such a lens acts exactly as a record selector would, in that it allows us to , , and a inside of a . But recall that is just a list of instructions we eventually want the machine to run. We’re using the shared vocabulary of lenses to emit machine code! What looks to us like using a data structure, when viewed through the Haskell perspective, is instead invoking an assembler. Once the idea sinks in, you’ll start seeing all sorts of cool things you can do with optics to generate code. s generalize running initializer code. A over can be implemented as a loop. 
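To make that concrete, here is a hedged sketch of such a field lens over the Code type sketched earlier (the peek/poke primitives and their offset-based encoding are invented; a hand-rolled van Laarhoven lens is used so no library is needed):

    {-# LANGUAGE RankNTypes #-}

    type Lens' s a = forall f. Functor f => (a -> f a) -> s -> f s

    -- Invented primitives: emit code to read or write a field at an offset.
    peekField :: Int -> Code s -> Code a
    peekField off (Code asm) = Code (asm <> ["load " <> show off])

    pokeField :: Int -> Code s -> Code a -> Code s
    pokeField off (Code s) (Code a) = Code (s <> a <> ["store " <> show off])

    -- A lens that acts like a record selector, but assembles machine code:
    fieldAt :: Int -> Lens' (Code s) (Code a)
    fieldAt off f s = pokeField off s <$> f (peekField off s)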
And since all the sizes are known statically, if you’re feeling plucky, you can decide to unroll the loop right there in the lens. Outside of the context of , the realization that optics are this general is still doing my head in. Something I love about working in Haskell is that I’m still regularly having my mind blown, even after a decade. Short of compiling to categories via something like categorifier . ↩︎


Read the Code, Not the Profile

At work a few weeks back, I found myself digging into profile reports, trying to determine why our program was running so slowly. Despite having the extremely obvious-in-retrospect data in front of me, I wasted a lot of time speeding up code that turned out to not move the needle at all. Although perhaps it will be interesting only to future me, I thought it would be a good exercise to write up the experience—if only so I learn the lesson about how to read profiles and not make the same mistake again. I’m currently employed to work on a compiler. The performance has never been stellar, in that we were usually seeing about 5s to compile programs, even trivially small ones consisting of less than a hundred instructions. It was painful, but not that painful, since the test suite still finished in a minute or two. It was a good opportunity to get a coffee. I always assumed that the time penalties we were seeing were constant factors; perhaps it took a second or two to connect to Z3 or something like that. But then we started unrolling loops, which turned trivially small programs into merely small programs, and our performance ballooned. Now we were looking at 45s for some of our tests! Uh oh! That’s no longer in the realm of constant factors, and it was clear that something was asymptotically wrong. So I fired up GHC with the trusty old flag, and ran the test suite in mode, which instruments the program with all sorts of profiling goodies. After a few minutes, the test suite completed, and left a file lying around in the current directory. You can inspect such things by hand, but tools like profiteur make the experience much nicer. Without further ado, here’s what our profile looked like: Well, that’s not very helpful. Of course takes 100% of the time. So I expanded that, and saw: No clearer. Opening up : OH MY GOD. JUST TELL ME SOMETHING ALREADY. Fast forwarding for quite a while, I opened up the entire stack until I got to something that didn’t take 100% of the program’s runtime: Now we’re in business. I dutifully dug into , the transforms, and . I cached some things, used better data structures, stopped appending lists, you know, the usual Haskell tricks. My work was rewarded, in that I managed to shave 80% off the runtime of our program. A few months later, we wrote a bigger program and fed it to the compiler. This one didn’t stop compiling. We left it overnight. Uh oh. Turns out I hadn’t fixed the problem. I’d only papered over it. So what went wrong here? Quite a lot, in fact! And worse, I had all of the information all along, but managed to misinterpret it at several steps of the process. Unwinding the story stack, the most salient aspect of having not solved the problem was reducing the runtime by only 80%. Dramatic percentages feel like amazing improvements, but that’s because human brains are poorly designed for building software. In the real world, big percentages are fantastic. In software, they are linear improvements. That is to say that a percentage-based improvement is \(O(n)\) faster in the best case. My efforts improved our runtime from 45s to 9s. Which feels great, but the real problem is that this program is measured in seconds at all. It’s more informative to think in terms of orders of magnitude. Taking 45s on a ~3GHz processor is on the order of \(10^{11}\) instructions, while 9s is \(10^{10}\). How the hell is it taking us TEN BILLION instructions to compile a dinky little program? That’s the real problem. 
Improving things from one hundred billion down to ten billion is no longer very impressive at all. To get a sense of the scale here, even if we spent 1M cycles (which feels conservatively expensive) for each instruction we wanted to compile, we should still be looking at < 0.1s. Somehow we are over 1000x worse than that. So that’s one mistake I made: being impressed by extremely marginal improvements. Bad Sandy. The other mistake came from my interpretation of the profile. As a quick pop quiz, scroll back up to the profile and see if you can spot where the problem is. After expanding a few obviously-not-the-problem cost centers that each were 100% of the runtime, I turned my brain off and opened all of the 100% nodes. But in doing so, I accidentally breezed past the real problem. The real problem is either that takes 100% of the time of the test, or that takes 100% of compiling the program. Why’s that? Because unlike and co, does more work than just compiling the program. It also does non-trivial IO to produce debugging outputs, and property checks the resulting programs. Similarly for , which does a great deal more than . This is somewhat of a philosophical enlightenment. The program execution hasn’t changed at all, but our perspective has. Rather than micro-optimizing the code that is running, this new perspective suggests we should focus our effort on determining why that code is running in the first place. Digging through made it very obvious the problem was an algorithmic one—we were running an unbounded loop that terminated on convergence, where it took \(O(n^2)\) work to make each single step. When I stopped to actually read the code, the problem was immediate, and the solution obvious. The lesson? Don’t read the profile. Read the code. Use the profile to focus your attention.


Jujutsu Strategies

Today I want to talk about jujutsu , aka , which describes itself as being “a Git-compatible VCS that is both simple and powerful”. This is selling itself short. Picking up has been the best change I’ve made to my developer workflow in over a decade. Before , I was your ordinary git user. I did things on Github and knew a handful of git commands. Sometimes I did cherry picks. Very occasionally I’d do a non-trivial rebase, but I had learned to stay away from that unless necessary, because rebasing things was a perfect way of fucking up the git repo. And then, God forbid, I’d have to re-learn about the reflog and try to unhose myself. You know. Just everyday git stuff. What I hadn’t realized until picking up was just how awful the whole git experience is. Like, everything about it sucks. With git, you need to pick a branch name for your feature before you’ve made the feature. What if while doing the work you come up with a better understanding of the problem? With git, you can stack PRs, but if you do, you’d better hope the reviewers don’t want any non-trivial changes in the first PR, or else you’ll be playing commit tag, trying to make sure all of your branches agree on the state of the world. With git, you can do an interactive rebase and move things relative to a merge commit, but you’d better make sure you know how works, or else you’re going to spend the next several hours resolving the same conflicts across every single commit from the merge. We all know our commit history should tell the story of how our code has evolved. But with git, we all feel a little bit ashamed that our commit histories don’t , because doing so requires a huge amount of extra work after the fact, and means you’ll probably run into all of the problems listed above. Somehow, that’s just the state of the world that we all take for granted. Version control Stockholm syndrome. Git sucks. And jujutsu is the answer. The first half of this post is an amuse bouche to pique your interest, and hopefully convince you to give a go. You won’t regret it. The second half is on effective strategies I’ve found for using in my day to day job. In git, the default unit of work is a “commit.” In , it’s a “change.” In practice, the two are interchangeable. The difference is all in the perspective. A commit is a unit of work that you’ve committed to the git log. And having done that, you’re committed to it. If that unit of work turns out to not have been the entire story (and it rarely is), you must make another commit on top that fixes the problem. The only choice you have is whether or not you want to squash rebase it on top of the original change. A change, on the other hand, is just a unit of work. If you want, you can pretend it’s a commit. But the difference is that you can always go back and edit it. At any time. When you’re done, automatically rebases all subsequent changes on top of it. It’s amazing, and makes you feel like a time traveler. Let’s take a real example from my day job. At work, I’m currently finessing a giant refactor, which involves reverse engineering what the code currently does, making a generic interface for that operation, pulling apart the inline code into instances of that interface, and then rewriting the original callsite against the interface. After an honest day’s work, my looked something like this: This is the version of the . On the left, we see a (linear) ascii tree of changes, with the most recent being at the top. The current change, marked with has id and description . 
I’m now ready to add a new change, which I can do via : I then went on my merry way, rewriting the second callsite. And then, suddenly, out of nowhere, DISASTER. While working on the second callsite, I realized my original abstraction didn’t actually help at callsite 2. I had gotten the interface wrong. In git land, situations like these are hard. Do you just add a new commit, changing the interface, and hope your coworkers don’t notice lest they look down on you? Or do you do a rebase? Or do you just abandon the branch entirely, and hope that you can cherry pick the intermediary commits? In , you just go fix the change via : and then you update your interface before jumping back ( ) to get the job done. Honestly, time traveler stuff. Of course, sometimes doing this results in a conflict, but is happy to just keep the conflict markers around for you. It’s much, much less traumatic than in git. Branches play a much diminished role in . Changes don’t need to be associated to any branch, which means you’re usually working in what git calls a detached head state. This probably makes you nervous if you’ve still got the git Stockholm syndrome, but it’s not a big deal in . In , the only reason you need branches is to ship code off to your git-loving colleagues. Because changes don’t need to be associated to a branch, this allows for workflows that git might consider “unnatural,” or at least unwieldy. For example, I’ll often just do a bunch of work (rewriting history as I go), and figure out how to split it into PRs after the fact. Once I’m ~ten changes away from an obvious stopping point, I’ll go back, mark one of the changes as the head of a branch , and then continue on my way. This marks change as the head of a branch , but this action doesn’t otherwise have any significance to ; my change tree hasn’t changed in the slightest. It now looks like this: where the only difference is the line . Now when sends this off to git, the branch will have one commit for each change in . If my colleagues ask for changes during code review, I just add the change somewhere in my change tree, and it automatically propagates downstream to the changes that will be in my next PR. No more cherry picking. No more inter-branch merge commits. I use the same workflow that I would if there weren’t a PR in progress. It just works. It’s amazing. The use and abuse of the dev branch pattern makes a great argument for a particular git workflow in which you have all of your branches based on a local branch. Inside of this branch, you make any changes relevant to your local developer experience, where you change default configuration options, or add extra logging, or whatever. The idea is that you want to keep all of your private changes somewhere organized, but not have to worry about those changes accidentally ending up in your PRs. I’ve never actually used this in a git workflow, but it makes even more sense in a repository. At time of writing, my change tree at work looks something like the following: Here you can see I’ve got quite a few things on the go! , and are all branched directly off of , which correspond to PRs I currently have waiting for review. and are stacked changes, waiting on to land. The ascii tree here is worth its weight in gold in keeping track of where all my changes are. You’ll notice that my branch is labeled as , which is to say it’s a change with no diff. But even so, I’ve found it immensely helpful to keep around. 
Because when my coworkers’ changes land in , I need only rebase on top of the new changes to , and will do the rest. Let’s say now has conflicts. I can just go and edit to fix the conflicts, and that fix will be propagated to and !!!! YOU JUST FIX THE CONFLICT ONCE, FOR ALL OF YOUR PULL REQUESTS. IT’S ACTUALLY AMAZING. In , sets of changes are first class objects, known (somewhat surprisingly) as revsets. Revsets are created algebraically by way of a little, purely functional language that manipulates sets. The id of any change is a singleton revset. We can take the union of two revsets with , and the intersection with . We can take the complement of a revset via . We can get descendants of a revset via , and its ancestors in the obvious way. Revsets took me a little work to wrap my head around, but it’s been well worth the investment. Yesterday I somehow borked my change (????), so I just made , and then reparented the immediate children of over to in one go. You can get the children of a revset via , so this was done via . Stuff like that is kinda neat, but the best use of revsets in my opinion is to customize the experience in exactly the right way for you. For example, I do a lot of stacked PRs, and I want my to reflect that. So my default revset for only shows me the changes that are in my “current PR”. It’s a bit hard to explain, but it works like an accordion. I mark my PRs with branches, and my revset will only show me the changes from the most immediate ancestral branch to the most immediate descendant branch. That is, my log acts as an accordion, and collapses any changes that are not part of the PR I’m currently looking at. But, it’s helpful to keep track of where I am in the bigger change tree, so my default revset will also show me how my PR is related to all of my other PRs. The tree we looked at earlier is in fact the closed version of this accordion. When you change to be inside of one of the PRs, it immediately expands to give you all of the local context, without sacrificing how it fits into the larger whole: The coolest part about the revset UI is that you can make your own named revsets, by adding them as aliases to . Here’s the definition of my accordioning revset: You can see from that we always show (the current edit), all of the named bases (currently just , but you might want to add ), and all of the named branches. It then shows everything from to , which is to say, the changes in the branch leading up to , as well as everything from to the beginning of the next (stacked) branch. Finally, we show all the leaves of the change tree downstream of , which is nice when you haven’t yet done enough work to consider sending off a PR. Jujutsu is absolutely amazing, and is well worth the four hours of your life it will take you to pick up. If you’re looking for some more introductory material, look at jj init and Steve’s jujutsu tutorial.


FRP in Yampa: Part 4: Routing

In the last post , we investigated the combinator, and saw how it can give us the ability to work with “state machine”-sorts of things in our functionally reactive programs. Today we turn our attention towards game objects—that is, independently operating entities inside of the game, capable of behaving on their own and communicating with one another. I originally learned of this technique from the paper The Yampa Arcade , but haven’t looked at it in a few years, so any shortcomings here are my own. Nevertheless, the material presented here does in fact work—I’ve actually shipped a game using this exact technique! Before we dive into the Yampa, it’s worth taking some time to think about what it is we’re actually trying to accomplish. There are a series of constraints necessary to get everything working, and we’ll learn a lot about the problem domain by solving those constraints simultaneously. The problem: we’d like several s running around, which we’d like to program independently, but which behave compositionally. There are going to be a lot of moving pieces here—not only in our game, but also in our solution—so let’s take a moment to define a type synonym for ourselves: Of course, we haven’t yet defined or , but that’s OK! They will be subject to a boatload of constraints, so we’ll sort them out as we go. At the very least, we will need the ability for an to render itself, so we can add a field: We would like s to be able to interact with one another. The usual functional approach to this problem is to use message passing—that is, s can send values of some message type to one another. Those messages could be things like “I shot you!” or “teleport to me,” or any sort of crazy game-specific behavior you’d like. In order to do this, we’ll need some sort of for each . The exact structure of this type depends on your game. For the purposes of this post we’ll leave the thing abstract: We’ll also need a type, which again we leave abstract: Sending messages is clearly an output of the , so we will add them to : There are actions we’d like to perform in the world which are not messages we want to send to anyone; particularly things like “kill my ” or “start a new .” These two are particularly important, but you could imagine updating global game state or something else here. Commands are also outputs: Finally, it’s often helpful to have some common pieces of state that belong to all s—things like their current position, and hitboxes, and anything else that might make sense to track in your game. We’ll leave this abstract: Let’s turn our attention now to the input side. It’s pretty clear we’re going to want incoming messages, and our current state: What’s more interesting, however, than knowing our own state is knowing everyone’s state. Once we have that, we can re-derive if we know our own . Thus, instead: Armed with our input and output types, we now need to figure out how to implement any of this. The relevant combinator is Yampa’s , with the ridiculous type: Yes, there are five type variables here (six, if you include the rank-2 type.) In order, they are: Big scary types like these are an excellent opportunity to turn on , and explicitly fill out the type parameters. From our work earlier, we know the types of and —they ought to be and : It’s a little clearer what’s going on here. We can split it up by its four parameters: We are left with a few decisions, the big ones being: what should be, and what should be? 
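For reference, the ridiculous type in question is pSwitch, which (as best I recall from FRP.Yampa; check the Haddocks before trusting this) looks like:

    pSwitch
        :: Functor col
        => (forall sf. a -> col sf -> col (b, sf))  -- route the input to each SF
        -> col (SF b c)                             -- the initial collection
        -> SF (a, col c) (Event d)                  -- watch for a switching event
        -> (col (SF b c) -> d -> SF a (col c))      -- build the replacement network
        -> SF a (col c)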
My answer for the first is: which not only conveniently associates names with their corresponding objects and states, but also keeps track of the messages which haven’t yet been delivered. We’ll investigate this further momentarily. For maximum switching power, we can therefore make our event type be . Filling all the types in, we get: which is something that feels almost reasonable. Let’s write a function that calls at these types. Thankfully, we can immediately fill in two of these parameters: We are left with two holes: one which constructs s, the other which destructs s. The first is simple enough: Writing is a little more work—we need to accumulate every change that might want to enact: There’s quite a lot going on here. Rather than dealing with directly, we instead work with which gives us a nice monoid for combining endomorphisms. Then by exploiting and , we can split up all of the work of building the total transformation into pieces. Then handles sending a message from one object to another, while also transforms each into an endomap. We can tie everything together: Notice that we’ve again done the monoid trick to run on every output in the . If you’re not already on the monoid bandwagon, hopefully this point will help to change your mind about that! So our router is finally done! Except not quite. For some reason I don’t understand, is capable of immediately switching if the you generate for immediately fires. This makes sense, but means Yampa will happily get itself into an infinite loop. The solution is to delay the event by an infinitesimal amount: There’s probably a more elegant solution to this problem, and if you know it, please do get in touch! Today we saw how to use the combinator in order to build a router capable of managing independent objects, implementing message passing between them in the process. You should now have enough knowledge of Yampa to get real tasks done, although if I’m feeling inspired, I might write one more post on integrating a Yampa stream into your function, and doing all the annoying boilerplate like setting up a game window. Maybe! Watch this space for updates!


FRP in Yampa: Part 3: Switching

Yesterday we looked at arrowized FRP in Yampa, and saw how the notation is to arrows as is for monads. While these syntaxes don’t give you any new power, notation nevertheless matters and helps us better structure our programs. So far all of our programs have consisted of a single signal function. We’ve sketched out how to build a lobotomized version of the Snake game, but real games have things like title screens and option menus as well as the actual gameplay component. If you were determined, you could probably figure out how to build these missing components with what we’ve seen so far, but it wouldn’t be fun. Instead, we turn our attention to switches. Yampa’s type isn’t monadic, but the combinator gets you surprisingly close: The idea is that you run the first until the outputted produces an event, at which point you take its value and use it to generate a new , which you subsequently run. As an example, let’s build a little coproduct type for the choices we might make on the menu screen: Our menu screen is now an that outputs the things we’d like to draw on the screen (a ), as well as an corresponding to an event for when we actually make a selection: As before, we have our main Snake game, and now a new screen for the options: We can tie it all together by ing from to the appropriate next : Again, you can kind of squint to get the picture, but things get a little gnarlier when you actually get into the gritty details here. For example, in a real game, you might go back to the menu screen after the game ends, and you’d certainly go back after setting up the appropriate options. If we wanted to encode those rules, we’d need to fiddle with some types. Let’s add s to and , corresponding to when the player has died and when the options have been set, respectively: With a creative amount of ing, it’s possible to encode everything we’d like: Of course, we can use for much more than just modeling state machines—the following example uses it as a combinator to do something for a while: or, more interestingly, a combinator which interpolates a function: The parameter here will be called with values of time from to , linearly increasing until . This is the sort of combinator that is extremely useful for animating objects, where you’d like to tween from a known starting point to a known ending point. Most of what I know about Yampa I learned by reverse-engineering Alex Stuart ’s excellent game Peoplemon ( source here ). As you might expect, it’s a fun parody on Pokemon. One night while desperately trying to work out how he programmed up the menu-based battle system in Peoplemon, I came across the mysteriously named Lightarrow.hs , which makes the following improvement over the ing technique above. He sticks the whole thing into the monad: I think this is the first and only time I’ve seen a use for in the wild, that doesn’t stem directly from trying to CPS everything in order to make your program go faster from fusion. It’s so COOL to see a real world opportunity to throw at a problem! Anyway. This type is known as , which I’ve always assumed was something like “signal continuation” but your guess is as good as mine: We can lift any into a via : and we can lower the whole thing again by way of : What’s really nice about is that it is a genuine, bona-fide monad. This gives us a really lovely notation for programming sequential things like state machines or battle animations—stuff that consists of needing to switch between disparate things with discrete reasons to change. 
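As a hedged reconstruction (the extraction above lost the code, and the names swont and runSwont are my guesses at the original ones), the continuation wrapper looks something like this:

    {-# LANGUAGE GeneralizedNewtypeDeriving #-}

    import Control.Monad.Cont (Cont, cont, runCont)
    import FRP.Yampa (SF, Event, switch)

    newtype Swont i o a = Swont { unSwont :: Cont (SF i o) a }
      deriving (Functor, Applicative, Monad)

    -- Lift a switching SF into the monad: "run sf until its event fires."
    swont :: SF i (o, Event e) -> Swont i o e
    swont sf = Swont $ cont $ switch sf

    -- Lower the computation back to an SF, given a final continuation.
    runSwont :: Swont i o a -> (a -> SF i o) -> SF i o
    runSwont = runCont . unSwont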
Not bad at all! Today we looked at Yampa’s `switch` combinator, saw how it can be used to string disparate signal functions together, and saw how wrapping the whole thing in a continuation monad makes it tolerable to work with. In tomorrow’s post, we’ll look at writing object routers in Yampa—essentially, the main data structure for tracking lots of game objects and allowing them to communicate with one another. Until then, I hope you’re having a very special Christmas weekend.


FRP in Yampa: Part 2: Arrowized FRP

In the last part, we got a feel for how FRP can help us with real-time programming tasks, especially when contrasted against implicit models of time. However, the interface we looked at yesterday left much to be desired—stringing together long signal functions felt clunky, and since `SF`s don’t form a monad, we couldn’t alleviate the problem with do-notation. So today we’ll look at one of Haskell’s lesser-known features—arrow notation—and learn how it can help structure bigger reactive programs.

“Arrow.” What an awful, overloaded word we’ve found ourselves with. Being Haskell programmers, we’re all very familiar with the everyday function arrow `(->)`, which you should think of as a special case of a more general notion of arrow. Notice how both function arrows (`a -> b`) and signal functions (`SF a b`) have two type parameters—one for the input side of things, and another for the output side. And indeed, we should think of these as sides of the computation, where we are transforming an `a` into a `b`.

For our purposes today, we’ll want to be very precise when we differentiate between functions-as-data and functions-as-ways-of-building-things. In order to do so, we will give ourselves a little type synonym, `Fn` (sketched below), and henceforth use it to refer to functions we’re manipulating, reserving `(->)` to talk about combinators for building those functions. For example, our favorite identity function is an `Fn`. We usually give the constant function the type `a -> b -> a`, but my claim is that it ought to be `a -> Fn b a`. The subtle thing I’m trying to point out is that there is a (conceptual) difference between the functions we want to operate on at runtime (called `Fn`s) and the combinators we use to build those functions (written with `(->)`). Like I said, it’s a bit hard to point to in Haskell, because one of the great successes of functional programming has been to blur this distinction.

Anyway, let’s return to our discussion of arrows. Both functions and `SF`s admit a notion of composition, which allows us to line up the output of one arrow with the input of another, fusing the two into a single computation; the types are sketched below. Despite our intimate familiarity with functions, this pattern of types with both an input and an output is quite uncommon in Haskell. Due to the immense mindshare that the monad meme takes up, we usually think about computation in terms of monads, and it can be hard to remember that not all computation is monadic (nor even applicative).

Monadic values have the shape `m a`, with only a single type parameter, which corresponds (roughly) to the output of the computation. That is to say, all of the interesting computational structure of a monad exists only in its output, and never in its input—in fact, we can’t even talk about the input to a monad. What we do instead is cheat: we take the input side of the computation directly from the function arrow. If we expand out the types of `fmap` and `(=<<)` using our `Fn` notation from above (see the last sketch below), it becomes much clearer that the relevant interaction here is some sort of distributivity of our monad over the regular, everyday function arrows. In other words, monads are cheating by getting their “inputs” from functions.
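A sketch of that synonym and the two examples just mentioned. `Fn` follows the spirit of the post, though the original identifiers are elided from this copy; `identity` and `constant` are simply the Prelude’s `id` and `const` with re-pointed types:

```haskell
-- Distinguish functions-as-data (Fn) from the (->) used to build them:
type Fn i o = i -> o

-- Our favorite identity function is an Fn:
identity :: Fn a a
identity = id

-- const picks an 'a' at "design time," and produces a function that
-- ignores its runtime input:
constant :: a -> Fn b a
constant = const
```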
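The composition types in question, specialized to our two kinds of arrow (`(<<<)` comes from `Control.Category`, which both `(->)` and `SF` instantiate):

```haskell
(.)   :: Fn b c -> Fn a b -> Fn a c
(<<<) :: SF b c -> SF a b -> SF a c
```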
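And the expanded types for `fmap` and `(=<<)`, rewritten with `Fn` (my reconstruction of the elided display):

```haskell
fmap  :: Functor m => Fn a b     -> Fn (m a) (m b)
(=<<) :: Monad   m => Fn a (m b) -> Fn (m a) (m b)
```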
Enough philosophy. What the hell are arrows? The example that really made it stick for me is in the domain of digital circuits. A digital circuit is some piece of silicon with wire glued to it, that moves electrons from one side to the other—with the trick being that the eventual endpoint of the electrons depends on their original positions. With enough squinting, you can see the whole thing as a type `Circuit a b`, where `a` corresponds to which wires we chose to put a high voltage on, and `b` is which wires have a high voltage at the end of the computation. With a little more squinting, it’s not too hard to reconceptualize these wires as bits, which we can again reconceptualize as encodings of particular types.

The point I was trying to make earlier about the distinction between `Fn` and `(->)` makes much more sense in this context; just replace `Fn` with `Circuit`. Here it makes much more sense to think about the identity circuit, which is probably just a bundle of wires, and the constant circuit, which lets you pick some particular value (at design time) and then make a circuit that is disconnected from its input wires and merely holds the chosen value over its output wires. Both are sketched at the end of this post.

Anyway. The important thing about digital circuits is that you have infinite flexibility when you are designing them, but once they’re manufactured, they stay that way. If you chose to wire the frobulator directly to the zanzigurgulator, those two components are, and always will be, wired together. In perpetuity. Of course, you can do some amount of dynamic reconfiguring of a circuit, by conditionally choosing which wires you consider to be “relevant” right now, but those wires are going to have signals on them whether you’re interested in them or not. In other words, there is a strict phase distinction between the components on the board and the data they carry at runtime.

And this is what arrows are all about. Arrows are about computations whose internal structure must remain constant. You’ve got all the flexibility in the world when you’re designing them, but you can’t reconfigure anything at runtime.

Yesterday’s post ended with code written directly against the arrow combinators. While technically you can get anything done in that style, it’s a lot like writing all of your monadic code directly in terms of `(>>=)`. Possible, certainly, but indisputably clunky. Rewritten with arrow notation, it comes out much tidier (a stand-in example appears below). Reading arrow notation takes a little getting used to, but there are really only two things you need to understand.

The first is that `proc` introduces an arrow computation, much like the `do` keyword introduces a monadic computation. The input to the entire arrow is bound to the pattern immediately following `proc`, and you can put any legal Haskell pattern you want there.

The other thing to know about arrow notation is that `-<` and `<-` are two halves of the same syntax: `output <- arrow -< input`, where `arrow` is of type `SF i o`, and `input` is any normal, everyday Haskell value of type `i`. At the end of the day, you bind the result to `output`, whose type is obviously `o`. The mnemonic for this whole thing is that you’re shooting an arrow (of bow-and-arrow fame) from the input to the output, and the name of the arrow is written on the shaft. It makes more sense if you play around with the whitespace: `output   <-   arrow   -<   input`. More importantly, the name of that arrow can be any valid Haskell expression, including one with infix operators; we should parse something like `out <- f <<< g -< x` as naming the composite arrow `f <<< g`.

What’s likely to bite you as you get familiar with arrow notation is that the arrow computations (the bits between `<-` and `-<`) exist in a completely different phase/namespace than the inputs and outputs. That means a program that uses a previously bound value to choose the arrow itself is illegal (the final sketch below shows one), because that value simply isn’t in scope in the arrow position. It’s the equivalent of designing a circuit board whose number of capacitors is determined by an input voltage supplied by the end user. Completely nonsensical!
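Returning to the circuit analogy, here it is in signatures. `Circuit` is entirely hypothetical: the analogy made concrete, not a real API.

```haskell
data Circuit i o  -- fixed silicon; no runtime reshaping

-- Probably just a bundle of wires:
idCircuit :: Circuit a a
idCircuit = undefined  -- elided

-- Pick a value at design time; the resulting circuit ignores its
-- input wires and holds the chosen value on its outputs:
constCircuit :: a -> Circuit b a
constCircuit = undefined  -- elided
```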
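The post’s original before-and-after code is elided from this copy, so here is a stand-in showing the anatomy of the notation, including an arrow named by an infix expression:

```haskell
{-# LANGUAGE Arrows #-}

import FRP.Yampa

-- Acceleration in, position out:
position :: SF Double Double
position = proc accel -> do
  -- shoot an arrow (named 'integral') from the input to the output:
  vel <- integral -< accel
  -- the arrow's name may be any Haskell expression, infix included:
  pos <- iPre 0 <<< integral -< vel
  returnA -< pos
```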
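And the illegal program: using a previously bound output to construct the arrow itself. `chooseSF` is hypothetical; the point is only where `vel` appears:

```haskell
broken :: SF Double Double
broken = proc accel -> do
  vel <- integral -< accel
  -- ERROR: 'vel' is not in scope to the left of -<. The arrow's
  -- structure cannot depend on a runtime value.
  pos <- chooseSF vel -< accel
  returnA -< pos
```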
That’s all for today, folks. The day caught me by surprise, so we’ll be back tomorrow to talk about building state machines in Yampa—something extremely important for making real video games.
