Posts in Movies (20 found)
ava's blog 4 days ago

how i enjoy movies

I'm not much of a movie watcher. I somehow prefer watching multiple episodes of a TV show over a few hours over investing 2 hours into a movie. I get antsy in the second half of a movie, and episodic stuff can more easily be paused for a break. My wife has gotten me into more movies the past few years though, especially in recent months. Catching up on classics like all the Star Wars movies, Lord of the Rings 1-3, American Psycho, Fight Club, some popular Studio Ghibli movies, some old genre-defining horror movies, and more. What makes movies a lot more bearable to me is talking about them while watching them, even pausing the movie while discussing. I know many people hate this and just want to watch something in peace, not tear it apart during or even be interrupted. Understandably, they don't want the fantasy and make-believe to be destroyed during. But my wife and I are on the same wavelength about this. She is my favorite person to watch movies with because of this. It would bore me to death to sit through 2+ hours in silence, just staring, and then both of us moving on from it and just saying, "Yeah, it was good." I need to have some breaks to readjust my position, get something from the kitchen, drink some water, and have minutes in-between just psychoanalyzing characters, giving our interpretations of things that are still unclear, or saying what we would do if we were the characters. Also discussing the broader context, the production, whether something was real or CGI... I love it. It keeps me engaged, and it makes the movie more memorable for me. I also learn so much more about it, and plot details I would have otherwise missed get revealed to me. I especially love watching something with my wife when it's something she is really interested in or has seen multiple times. Last night, we watched an Indiana Jones movie (Raiders of the Lost Ark), and I got so much info from her during it.
"Harrison Ford improvised this scene because he was tired of reshooting it all the time." "In this scene you can spot C-3PO and R2-D2 in the background. And you can see the Ark in the background of a Clone Wars episode." "I think this shot is actually a matte painting on glass." I'm more of a Lara Croft person, and so we also talked about the similarities and differences between the two, especially with Lara's reboot content and her grappling with the fact that her work tends to cause more harm than good, something Indiana doesn't seem to have to face that much. We also discussed some silly stuff, like how the snakes would realistically survive in that pit, and whether a bunch of snakes are flammable or not. All while watching it and occasionally pausing. Technically, we also do this for TV shows. Severance and Pluribus especially, but even X-Files. It's just so good! I just need to engage with someone about what I'm seeing and pick their brain about an aspect of it. Acknowledging something was produced, these were all actors, this didn't really happen, this was CGI, this is a plot inconsistency, etc. doesn't ruin the entertainment for us at all :) Published 11 Apr, 2026

Justin Duke 3 weeks ago

Mistress America

I'm sorry, I know you liked Brooke. He told me that she worships you, she kept talking about how smart you are, how interesting... Last year I watched Liberal Arts, which may have been the single worst quote-unquote college movie that I've seen. Lazy, boring, and incoherent. In contrast, Mistress America nails not only being a college movie, but being a New York movie and a farce with specificity, flair, and warmth, and manages to do all of these things within the confines of a 97-minute runtime. No mean feat. I do feel like, for better and for worse, my analysis of the veracity of any of these films boils down to me coming out of the metaphorical theater, thinking, and then nodding my head and being like, "Yep, that's what it was like." And in Mistress America, that's what it was like. I did not have the same experience that Lola Kirke's character did. But the details were so hyper-specific and accurate that I could see so many people I knew like her from my time at William & Mary. What's more, the Greta Gerwig character serves as an equally hyper and honest depiction of that kind of late-twenties driftless coquette without ever being unnecessarily cruel or mean. Much of this is, I think, delivered at the hands of Gerwig's performance and screenwriting. Baumbach, I think, is a director who needs Gerwig more than the other way around. The surrounding cast is all pitch-perfect, too — including the second-act Connecticut set, who once again are drawn with broad comedic brushes without feeling particularly flat or cardboard (another problem with most films in this genre).

Justin Duke 1 month ago

Stop Making Sense

A polite man is driven to murder. He becomes a prophet and screams manifestos on love, war, and the increasingly alarming impact of technology and progress. Driven to insanity by his own insights into the human condition, he travels to a river in an attempt to drown himself but instead is baptized and absolved of sin. He dies, cross-eyed yet painless. This is the definitive fairytale of my generation, and the moral is "watch out, you might get what you're after". Jesus lives, and he's wearing a giant suit. A film that is so flatly and universally beloved by all who watch it, regardless of affiliation with the band itself. And truth be told, I don't really care much for the Talking Heads — not that I dislike them or their music, but to me they are one of many bands whose artistic and aesthetic value I can recognize at an intellectual level more than at a Dionysian level. (And I don't really prefer my listening to be pleasurable on the intellectual level.) What did I think about while watching this excellent film, a masterpiece of its genre? I thought about the greatest concerts of my life: Lost in the Trees playing in the tea house in Charlottesville, an equal number of band members and audience members; Blind Pilot playing in the Crystal Ballroom, an entirely acoustic set and an audience willing enough to go along with it; CHVRCHES at the Paramount in Seattle, sweaty and glowlit. What Demme captures here is that same indelible feel of the best live music, where you feel in the same breath and beat both completely alone and completely surrounded by the only people who matter: building, building, higher, higher. I have half-joked with friends over the past couple years that I'm done with concerts as a medium. The event no longer holds any sort of allure outside of special occasions (once-in-a-lifetimes, family). The highest praise I can give this film is that it made me reconsider that stance.

Justin Duke 1 month ago

21 Bridges

A derivative, predictable, competent crime thriller. If you read that sentence and think "good," then you will like this film, and the opposite is true as well. That sentence's banality points to the banality of everything about this film — it seems to avoid contrivance and missteps and misfires more than it goes out of its way to court success. Boseman is wonderful, but his character is given absolutely nothing to do besides act with competence and rationality. The standout — the one character both written and portrayed with any sense of moral valence — is Taylor Kitsch as a trigger-happy dude who is clearly insane but also cares deeply about his companion. When thinking about this movie I am drawn to a comparison with The Rip, given that I watched it so recently, and I find myself at least grateful for the economy of this film's runtime and its willingness to trust that the viewer is at least spending their time watching the film and not scrolling on their phone.

Grumpy Gamer 1 month ago

Tomorrow Never Came

Here is a movie I made with my friend, Tom, back when I was 15 or so. We were sure we’d be the next George Lucas or Steven Spielberg. Little did I know a few years later I’d be working at Lucasfilm and Steven Spielberg would call me up for hints on Monkey Island. He couldn’t use the 1-900 number like everyone else. The movie has sound, but it was lost when it was transferred to VHS and this goofy music was added. In case the Smithsonian wants to preserve the movie as historically important, here is the link.

Justin Duke 1 month ago

What Happened Was

Two of my absolute favorite films of all time, albeit for very different reasons, are My Dinner with Andre and Before Sunrise. Both of these films, which I highly encourage you to watch more than anything else I talk about if you haven't already done so, are about the enchantment and succor of one single really interesting conversation. The two films diverge pretty heavily from there. My Dinner with Andre is a film about work, fulfillment, and status. And Before Sunrise is a film about youth in love. But the beauty in both comes not just from their simplicity and formless structure, but from the recursive nature of the dialogue, just like in real life, where a pregnant pause or a sidelong glance suddenly carries with it enormous weight after understanding not just the comment but the 75 minutes preceding it. What Happened Was is interested in that last thing too. And in the unraveling of yourself that happens when you spend time being intimate in a literal sense with anyone. But it is more interested in a funhouse-mirror look at the human psyche. And it has perhaps more cynical and caustic things to say about the way people express themselves through others. Our dual protagonists are a paralegal and an executive assistant. Both seem a little off, but not wholly so. And then, over the course of the worst first date in the world, we watch the characters reduce themselves to mania. This is an uncomfortable film to watch. Rather than transposing yourself into Andre and his counterpart, or Jesse and his counterpart, you find yourself just kind of internally screaming on behalf of both characters, who have a Lynchian sense of bizarre behavior. In terms of inspiration, this draws more from Waiting for Godot than Who's Afraid of Virginia Woolf. The dread you feel is less from a place of sadness and understanding and more from a sense of shock and increasing bewilderment. And to that extent, it flatly did not work for me quite as much as I hoped.
But as in all two-handers, the film ends with two monologues, one from each character, where they lay bare the things that at that point are almost nakedly obvious to us, the viewer. And while I can't say either monologue or scene was particularly well written, I will say that both of them will stick with me for a long, long time. (I'm not sure the preceding seventy minutes earned those monologues, but that's beside the point.)

Justin Duke 2 months ago

The Hudsucker Proxy

When does an indulgence become sour? I ask this because indulgence is the word that most immediately comes to mind at the finish of this film. And largely in a negative way. Closer to flippancy than resplendence. And yet the same word can be leveled at the last film I watched, Ocean's Twelve, a movie which I thought, on the merits, was not exactly good, and yet I had a good time with it. It's a weird pair of films to compare. The filmmakers are perhaps less so; it's not unreasonable to consider Soderbergh and the Coens in the same relative stratosphere, both in terms of longevity and breadth of work. The Hudsucker Proxy was made very early on by the Coens; Ocean's Twelve was made perhaps at the peak of Soderbergh's cachet. And for this being such an early film in their canon, the technical achievement is remarkable. This film looks and feels beautiful and striking in a way that I've described as obvious, but not unwelcome. One of the reasons why, deep down, I love the Ocean's films so much is because you get the very strong sense that Soderbergh is cooking up for you the most delicious and expensive meal in the world. It is a project where he is alchemizing his pleasures and giving them to you, letting you get swept along in the exact same way he would be, and that is very much not the goal of The Hudsucker Proxy. Frankly, it's difficult to tell what the goal of the film is beyond technical wonderment and perhaps a skewering of lesser films. One gets the sense that the Coens are making fun rather than having it. The pastiche here runs the gamut from boardroom drama to His Girl Friday, and speedruns the list of clichés, many of which are funny in isolation. The script is, if incoherent, extremely clever, peppered with one-liners and callbacks. But... you leave the movie with a kind of unfulfillment. The Coens can be humanistic, but I simply do not care about any single member of this group. What's worse, I'm not sure I'm supposed to.

Justin Duke 2 months ago

Ocean's Twelve

This might not be the perfect time or whatever to talk about it but I've been doing my homework and I'd really like to play a more central role this time around. I consider this film's prequel as close as you can get to a perfect film. Ocean's Eleven is a movie that knew exactly what its goal was—to be as relentlessly and easily entertaining and pleasurable as possible—and succeeds in doing so more than any other movie with similar ambitions. The sequel to such an endeavor has an inherently impossible task ahead of it. I held off on watching this for a long time. Partially because it seemed unnecessary; why would I watch a sequel when I could just watch the original again? And partially because it is poorly reviewed, in the same way many of Soderbergh's works are. The phrase "self-satisfied and bizarrely sloppy" comes up a lot in reviews of his early-aughts output. I think it's fair to be upset. The heist in this movie is, to a certain extent, on us, the viewers, for sitting down and thinking that we were getting treated to a heist movie when instead what Soderbergh wants to give us is two hours' time in the companionship of people who are effortlessly beautiful and charming. This film is filled with metatext: the Julia Roberts bit, Clooney's age, Matt Damon trying to become a leader. You can accuse some of the smaller bits of being rehash, and I agree with the central complaint that a plot twist which invalidates everything we've seen for the preceding hour is unsatisfying. At the end of the day, I didn't care that much because I enjoyed watching my buddies having fun. It is a lesser film; it still succeeds in its goals, with grace and panache. One more thing: this movie hints at creating a slightly larger mythos, in the same way John Wick eventually created an extended universe unto itself. While part of me would have loved to see six more of these films, I think the key to their enduring charm is that they are a snapshot in wide frame.
The warmest and happiest scenes involve as many people as possible, whereas the formula of John Wick really only requires a single protagonist and an endless barrage of faceless, unnamed fodder.

A Room of My Own 3 months ago

2026-2: Week Notes

This week felt like a slow, slightly awkward return to routine. I worked from home, which I’m grateful for, but with the kids home (summer holidays) and my mum visiting, it took a surprising amount of energy to focus and do anything at all. Not productive necessarily. Just not completely stagnant. I noticed how easily I slip into managing everyone’s time and behavior when I’m physically around. It also made me notice, again, where most of my mental energy actually goes outside of work. One big chunk goes into managing my food and weight (as much as I hate to admit it). The second big energy drain is navigating the kids and electronics. (I am just mentioning it here, but I plan to write about it some more later.) A bright spot was spending time creating my 2026 direction. I realised I don’t really want achievement-style goals right now. I want a way of being. My central theme is “Let myself be happier.” With gentler yoga goals, I managed to do yoga every day this week (15–20 minutes). I can already feel the difference. I went for almost two weeks without it and could feel myself getting stiffer. It doesn’t take long at this age. On the fun side, I’ve been watching Dark Matter and thinking about regret and the paths we don’t take. I’ve always enjoyed Blake Crouch’s work. It’s slightly terrifying and bordering on hard sci-fi. I also discovered (and loved!) Pluribus. If you’ve watched it, do the Others remind you of ChatGPT or other GenAI? (To avoid spoiling it for anyone, I won’t say why.) Family movie nights were dominated by Avatar rewatches and finally seeing the latest one in the cinema last night. It’s three and a half hours long, which honestly felt offensive. I kept thinking, who does James Cameron think he is, taking that much of my life? It was beautiful and fine, but not three-and-a-half-hours good. I would have happily traded that time for three more episodes of Pluribus. That said, the kids loved it, especially my (almost sixteen-year-old) son.
My husband had a terrible cough, so I ended up sleeping on a mattress on the floor in my daughter’s room so everyone (maybe not him) could get some sleep, especially with my mum in the guest room. It reminded me (again) how much I care about furniture being practical and multi-use. I still regret not insisting on couches you can properly sleep on. Where I come from, all couches can become beds. It just makes sense to me. I don’t like furniture that only serves one purpose, no matter how pretty it may be. This also nudged me back toward the idea of doing another round of simplifying at home, not because the house is cluttered, but because less always feels lighter to me (makes me feel lighter, I guess). I will make a plan. Maybe start in February or so. Socially, I’m moving toward my 2026 direction of hosting gatherings and bringing people together. Drinks with a neighbour, lunches with my mum and the kids, and long phone calls with friends overseas. The first gathering of neighbours for 2026 is booked for next Saturday (granted, my husband organised that one, but nevertheless). I’ve been thinking more about how many social catch-ups become pure life recaps and updates rather than shared experiences. Life itself is lived somewhere else, not inside the friendship. That idea has been sitting with me. Because of it, I’d like to experiment with hosting gatherings that have some kind of purpose and create something memorable together, not just conversation. I’m reading The Life Impossible by Matt Haig. I usually enjoy his books. The lessons and themes tend to be obvious, a bit like Paulo Coelho, but that’s part of the appeal and probably why they’re so popular. And also, I have no idea where this book is taking me. It’s also nice to see an older protagonist. The main character is 72.
I also just finished Better Than Happiness: The True Antidote to Discontent by Gregory P. Smith, a memoir I picked up from the library intending to skim, but it fascinated me enough to read the whole thing. There were some really nice insights around acceptance, self-acceptance, anger, and learning how to actually live in the present moment. “In some ways, it’s a paradox. To change something we first have to accept it for what it is. Only through accepting my perceived flaws and limitations could I see that there were pathways to improvement. The same applied when it came to learning to accept one of the biggest conundrums in my life, the man in the mirror. Self-acceptance is the main reason I’m not only here today, but able to look at myself in the mirror.” Overall, the week felt reflective. I’m noticing how hard I still am on myself and trying to soften that. Self-acceptance! If this year really is about letting myself be happier, then noticing these small choices and energy leaks feels like the right place to start. PREVIOUS WEEK: 2026-1: Week Notes

Justin Duke 3 months ago

Eternity

Love isn't just one happy moment, right? It's a million. And it's bickering in the car, and supporting someone when they need it, and it's growing together, and looking after each other. It can't be denied that this movie is really, really funny. Some of the runners, such as the Korean War bit or the pretzel bit, were just great laugh lines from a writing team — and I think the film's steadfast willingness not to engage in the minutiae of the framing device and its mechanics was a very smart choice, because that's really not what the movie is interested in whatsoever. In general, I think this movie was a success, and I attribute that success to the script's unwillingness to take the easy way out. I appreciated that all three vertices in our little love triangle are fairly flawed in different ways: Elizabeth Olsen isn't given much to work with, but the text of her character has a little bit of scumminess, and she sells the pathos strongly enough. Miles Teller's character is, for sure, guilty of everything that his rival accuses him of, in the same way that we all have a little self-interest burrowed deep in our hearts. And Callum Turner's character clearly has some anger problems and a bit of subtextual one-dimensionality — the traits that you'd ignore as a 25-year-old newlywed but that would grate on you after 65 years of marriage. The movie fades in quality in the few instances where it stoops to melodrama - mostly in the middle act, which any viewer is going to know beat by beat, and which therefore goes on entirely too long and with way too few laugh lines. Given the audaciousness of the framing device, the movie did not quite take full advantage of its visual possibilities. The little sequence of Elizabeth Olsen gaping between eternities was legitimately cool, as long as you didn't think about it particularly hard — but the most beautiful and interesting parts of the film were in the junctions themselves, rather than the paradises. (Perhaps that is a deliberate metaphor.) The movie that comes most readily to mind, having watched Eternity, is Palm Springs: also a high-concept rom-com that never takes itself too seriously, has legitimately hilarious moments (and a bit of sloppiness), and, in a different world, probably could have been a massive box office success — if its goal was, at all, to land in the box office.
This is a vehicle largely for Miles Teller and Elizabeth Olsen to be charming: and while they share almost zero chemistry, their individual charisma makes up for it, as does a great collection of complementary performances from their surrounding cast. (The movie also owes a lot to The Good Place, of course, and to Eternal Sunshine of the Spotless Mind, which is perhaps winkingly echoed in the title.) This is not high art — nor is it pablum. I want more of these films!

Justin Duke 3 months ago

Cameraperson

It seems fitting that, to close out the year I finally watched Koyaanisqatsi, I also got to watch Cameraperson — which is in many ways, and none of them dismissive or demeaning to either film, a funhouse mirror of its antecedent. Koyaanisqatsi is a film that's very interested in collage and rapidity, and at times felt like a sensory HIIT workout where you feel the push and the rest and the push and the rest, and the cavalcade of footage washes over you. Cameraperson is its antithesis — also collage, yes, assembled from around a dozen or so vignettes, all of them quiet, both literally and figuratively, but meticulously placed so that you, the viewer, are given the space and time to form the connections yourself. The dialogue in this film is spartan; the visuals are arresting and deliberate. Nothing feels wasteful. Kirsten Johnson is very interested in the relationships between storytelling and memory and between identity and witness. She is interested in the vastness and fragility of human existence. She does not have many answers; she wants you to help her find them. The best movies take you places: sometimes that is into someone's head, sometimes that is into a Nigerian NICU. She takes you there quietly, never flinching, never letting go of your hand. I don't know what else to say. I think it's somewhat disingenuous to call it an entertaining movie, but it's certainly an enchanting one, and I am different and better for having watched it.

Justin Duke 3 months ago

Kiss Kiss Bang Bang

This was a really fun, silly movie—one I probably watched in high school and that very well might have become my entire personality. In a way, we're all better off that it didn't. About halfway through, I found myself thinking how deeply it reminded me of The Nice Guys , and felt quietly pleased with myself for realizing that Shane Black wrote and directed both films. They share the same strengths and the same flaws: a weak third act, where the obvious avenues of pastiche run dry and the movie retreats into generic, gratuitous action-movie spectacle. But at their best, these films are vividly alive. They pull off something that's genuinely difficult for pastiche: sustaining suspense about what's serious and what isn't. The jokes land, the timing holds up, and comedy—when it's done well—ages surprisingly gracefully. There are a few spots where this isn't quite true, and the occasional dip into melodrama cheapens the experience a bit. Still, that's a hard line for any film to walk, and this one does it better than most. Robert Downey Jr., playing a caricature of his pre–Iron Man self, is entertaining if not especially novel. The real standout, though, is Val Kilmer, who threads the needle perfectly, delivering his performance with exactly the right amount of irony. The bit parts remain just that—bit parts—and Michelle Monaghan does solid work, never tipping into manic-pixie-dream-girl territory or pick-me energy. Overall, it's just a really fun time—the kind of movie I'd happily rewatch in six months. My only substantive complaint is the same one I have with most of Black's filmography: the unnecessary thirty minutes of dull, overindulgent action scenes. They add nothing. Every moment not spent letting the three leads spar and riff off one another feels like a wasted opportunity.

Justin Duke 3 months ago

Wake Up Dead Man

There are three ways to evaluate Wake Up Dead Man: as a film; as a whodunit; and as an entry in the Knives Out canon.
As a film: This was a very, very entertaining two and a half hours. I did not care for Glass Onion, its predecessor, and I think this recovers from all of its missteps: the humor and plot are a little tighter, the script is a little less indicative of Rian Johnson spending too much time on Twitter, and the setting, production value, and aesthetic are all immaculate. But, moreover: Josh O'Connor's performance is not just good relative to this extremely accomplished bench of players in the background (Glenn Close! Andrew Scott!), but ascendant in its own right. Father Jud is probably the most interesting character in this entire series outside of Blanc himself, and O'Connor brings him to light with a nuance and warmth missing from even the sympathetic characters in this series. Between this and Challengers last week, I suddenly have a deep and great appreciation for this burgeoning young actor. (Not exactly a hot take, I'm aware.) I think this movie's first goal is to entertain and delight, and its second goal is to seriously engage with its motifs on faith and doubt: where it succeeds in the second, it is on the back of Father Jud. (And in particular, the construction worker scene — you know the one — is Johnson at his finest, zagging from comedy to pathos and avoiding whiplash.)
As a whodunit: The contract between reader and writer of a whodunit is important. The joy in this medium, especially when executed well, is a real sense that you, as the reader or viewer, could have put the pieces together yourself — the joy in consumption is active because while you're enjoying the text of the work, you also get to try to be one or two steps ahead of or behind the creator. This sounds obvious, but one of the reasons why Agatha Christie is, well, Agatha Christie is that she knew how to balance this perfectly. The average Poirot mystery had you entering the final act with a handful of suspects, all of whom you had reasons to believe were guilty, and who were rich enough in their characterization that it wouldn't be completely out of left field if they ended up being the culprit. The best parlor scenes, accordingly, were less about filling in the gaps and more about drawing a through line between disparate clues that you had already picked up but had not connected. And here, even more so than in the previous two entries, Rian Johnson fails. A bellwether of a bad parlor scene is length: it takes us around 20 minutes of flashbacks to go from the reveal of whom to the conclusion of why and how, much of it muddled and incoherent. What's worse is that the Greek chorus of guilty suspects don't get crossed off so much as they simply fade into the background — don't get me wrong, it's fun to see Kerry Washington and Andrew Scott in these bit parts, but they are given tremendously little to do, are essentially miscast, and, just like in Glass Onion, they do not feel like people so much as caricatures of people whom Rian Johnson wants to write a couple jokes around. Christie's novels work because you could see just enough introspection and motivation in all of the characters—not just the obvious one or two—to keep your mind racing. No such luck here: the ensemble never coheres.
As a Knives Out sequel: Much better than Glass Onion; arguably as good as the original film, though that had the relative freshness of its approach to buoy it. There are very few reasons not to watch this movie, and my quibbles are small: I would love to watch a new one of these every three or four years until I die.
A couple other notes: Josh Brolin's character was himself fairly flat and cartoonish, but Brolin delivers the performance with enough glee and menace that I didn't mind. I'm not sure when Brolin started shifting in my mind from a fairly generic actor to someone who knew exactly how to play against himself (maybe it was Hail, Caesar!), but I'm almost never not excited to see him on screen. And once again Johnson resorts to lampshading his influences (this time with an explicit syllabus!).

David Dodda 7 months ago

Laughing in the Face of Fear: How I Accidentally Rewired My Brain Through Movies

"Why do you keep smiling?" My friend's puzzled voice cut through the theater's surround sound as yet another jump scare filled the screen. I hadn't even realized I was doing it. There I was, grinning like an idiot while a demon wreaked havoc on screen - the same kind of creature that would have sent me diving behind couch cushions just a few years ago. "That demon is kind of cute," I whispered back, and immediately wondered where those words had come from. Walking out of that The Conjuring: Last Rites screening, I couldn't shake the question: when did I stop being afraid of horror movies? More importantly, how did it happen without me even noticing? I've always been a scaredy-cat. Horror movies were my kryptonite, the kind of films that left me sleeping with the lights on and checking under beds like a paranoid child. So this newfound ability to chuckle at cinematic terror felt like discovering I could suddenly speak a foreign language. As I reflected on this mysterious transformation, three influences kept surfacing in my memory, all carrying the same powerful message: fear loses its grip when you laugh at it. The first was Stephen King's IT, specifically the scene where the Losers Club finally confronts Pennywise. These kids, terrorized by an ancient cosmic horror, make a crucial discovery: the creature that feeds on fear becomes pathetically small when mocked. They literally bully the bully, turning their terror into ridicule. "You're just a clown!" they shout, and suddenly this omnipotent force becomes just another playground antagonist. The second was from Harry Potter and the Prisoner of Azkaban. Professor Lupin teaches his students to defeat boggarts, creatures that manifest as your worst fear, with the Riddikulus spell. The magic isn't in complex incantations; it's in forcing yourself to imagine your fear as something ridiculous. Snape in your grandmother's dress. A spider wearing roller skates. Fear transformed into comedy.
The third influence was more gradual but perhaps most impactful: discovering Wendigoon's YouTube channel. Here was someone who approached the most unsettling horror content, creepypastas, urban legends, true crime, with genuine curiosity and often infectious humor. Watching him dissect a terrifying story with the enthusiasm of a literature professor made me realize that scary content was just content. When you pull back the curtain and analyze the mechanics of horror, the monsters become fascinating rather than frightening. His approach taught me that you could respect the craft of scary storytelling while refusing to be intimidated by it.

All three sources delivered the same revolutionary idea: laughter is fear's kryptonite. Without realizing it, these influences had planted something in my subconscious. They'd offered me a new framework for processing scary situations, not as threats to flee from, but as puzzles to solve or absurdities to find amusing. The demon in The Conjuring: The Last Rites wasn't a harbinger of nightmares; it was just another creature stumbling through its prescribed horror movie beats, probably worried about hitting its jump-scare quotas.

This shift represents something profound about how we consume media. Stories don't just entertain us; they rewire our neural pathways, teaching us new ways to interpret and respond to the world. Every hero's journey we follow, every coping mechanism we witness, becomes part of our own psychological toolkit.

The beautiful thing about this accidental transformation is how it's changed my relationship with fear beyond just movies. That presentation at work that used to paralyze me? Now I picture the audience in their underwear, not because someone told me to, but because I've learned that fear shrinks under the spotlight of absurdity. The anxiety-inducing news cycle?
Sometimes I can step back and see the cosmic comedy in our collective human drama, the way we all scramble around taking ourselves so seriously on this tiny rock spinning through space. This doesn't mean becoming callous or dismissing real dangers. It means developing the superpower to choose your response to fear, to ask yourself whether this particular monster deserves your terror or your chuckles.

Maybe the most magical thing about both IT and Harry Potter isn't the supernatural elements; it's the reminder that we have more control over our inner landscape than we think. Every day, we're casting spells on ourselves with the stories we tell, the media we consume, the frameworks we adopt for making sense of the world. Sometimes, without even realizing it, we learn to laugh in the face of fear. And once you've done that, you discover something wonderful: most of our demons are just wearing costumes, waiting for someone brave enough to point and giggle. Riddikulus, indeed.

0 views
Justin Duke 9 months ago

Bodies Bodies Bodies

More than anything else, Bodies Bodies Bodies is a perfectly reasonable and delightful way to spend 90 minutes. It is beautiful, well-acted, and consistently funny, with a banger soundtrack and a propulsive pace.

0 views
Max Woolf 9 months ago

Predicting Average IMDb Movie Ratings Using Text Embeddings of Movie Metadata

Months ago, I saw a post titled “Rejected from DS Role with no feedback” on Reddit’s Data Science subreddit, in which a prospective job candidate for a data science position provided a Colab Notebook documenting their submission for a take-home assignment and asked for feedback as to why they were rejected. Per the Reddit user, the assignment was:

Use the publicly available IMDb Datasets to build a model that predicts a movie’s average rating. Please document your approach and present your results in the notebook. Make sure your code is well-organized so that we can follow your modeling process.

IMDb, the Internet Movie Database owned by Amazon, allows users to rate movies on a scale from 1 to 10, and the average rating is then displayed prominently on the movie’s page: The Shawshank Redemption is currently the highest-rated movie on IMDb with an average rating of 9.3 derived from 3.1 million user votes.

In their notebook, the Redditor identifies a few intuitive features for such a model, including the year in which the movie was released, the genre(s) of the movie, and the actors/directors of the movie. However, the model they built is a TensorFlow/Keras-based neural network, with all the bells and whistles such as batch normalization and dropout. The immediate response by other data scientists on /r/datascience was, at its most polite, “why did you use a neural network when it’s a black box that you can’t explain?”

Reading those replies made me nostalgic. Way back in 2017, before my first job as a data scientist, neural networks using frameworks such as TensorFlow and Keras were all the rage for their ability to “solve any problem”, but they were often seen as lazy and unskilled compared to traditional statistical modeling such as ordinary least squares linear regression or even gradient boosted trees.
Although it’s funny to see that this perception of neural networks in the data science community hasn’t changed since, nowadays the black box nature of neural networks can be an acceptable business tradeoff if the prediction results are higher quality and interpretability is not required. Looking back at the assignment description, the objective is only to “predict a movie’s average rating.” For data science interview take-homes, this is unusual: those assignments typically have an extra instruction along the lines of “explain your model and what decisions stakeholders should make as a result of it”, which is a strong hint that you need to use an explainable model like linear regression to obtain feature coefficients, or even a middle ground like gradient boosted trees and their variable importance to quantify relative feature contribution to the model. 1

In the absence of that particular constraint, it’s arguable that anything goes, including neural networks. The quality of neural networks has improved significantly since 2017, even more so due to the massive rise of LLMs. Why not try just feeding an LLM all the raw metadata for a movie, encoding it into a text embedding, and building a statistical model based off of that? Would a neural network do better than a traditional statistical model in that instance? Let’s find out!

The IMDb Non-Commercial Datasets are famous sets of data that have been around for nearly a decade 2 but are still updated daily. Back in 2018, as a budding data scientist, I performed a fun exploratory data analysis using these datasets, although the results aren’t too surprising. The average rating for a movie is around 6 and tends to skew higher: a common trend in internet rating systems. But in truth, these datasets are a terrible idea for companies to use for a take-home assignment.
Although the datasets are released under a non-commercial license, IMDb doesn’t want to give too much information to its competitors, which results in a severely limited set of features that could be used to build a good predictive model. The common movie-performance-related features are present in the title.basics.tsv file. This is a sensible schema for describing a movie, although it lacks some important information that would be very useful for determining movie quality, such as production company, summary blurbs, granular genres/tags, and plot/setting — all of which are available on the IMDb movie page itself and presumably accessible through the paid API. Of note, since the assignment explicitly asks for a movie’s average rating, we need to filter the data to only movie and tvMovie entries, which the original submission failed to do.

The ratings data in title.ratings.tsv is what you’d expect. In order to ensure that the average ratings for modeling are indeed stable and indicative of user sentiment, I will only analyze movies that have at least 30 user votes: as of May 10th, 2025, that’s about 242k movies total. Additionally, I will not use numVotes as a model feature, since that’s a metric based more on extrinsic movie popularity rather than the movie itself. The last major dataset is title.principals.tsv, which has very helpful metadata such as the roles people play in the production of a movie.

Additionally, because the datasets are so popular, it’s not the first time someone has built an IMDb ratings predictor, and prior attempts are easy to Google. Instead of using the official IMDb datasets, though, those analyses are based on the smaller IMDB 5000 Movie Dataset hosted on Kaggle, which adds metadata such as movie rating, budget, and further actor metadata that makes building a model much easier (albeit “number of likes on the lead actor’s Facebook page” is very extrinsic to movie quality). Using the official datasets with much less metadata is building the models on hard mode and will likely result in lower predictive performance.
Although IMDb data is very popular and very well documented, that doesn’t mean it’s easy to work with. Data science take-home assignments are typically 1/2 exploratory data analysis for identifying impactful dataset features, and 1/2 building, iterating, and explaining the model. For real-world datasets, these are all very difficult problems with many possible solutions, and the goal from the employer’s perspective is more to see how these problems are solved than the actual quantitative results.

The initial Reddit post decided to engineer some expected features using pandas, such as checking whether a number other than 1 is present at the end of a movie title (a sequel signal) and one-hot encoding each distinct genre of a movie. These are fine for an initial approach, albeit sequel titles can be idiosyncratic, which suggests that a more NLP-driven approach to identifying sequels and other related media may be useful.

The main trick with this assignment is how to handle the principals. The common data science approach would be to use a sparse binary encoding of the actors/directors/writers, e.g. a vector where actors present in the movie are 1 and every other actor is 0, which leads to a large number of potential approaches to encode this data performantly, such as scikit-learn’s MultiLabelBinarizer. The problem with this approach is that there is a very large number of unique actors / high cardinality — more unique actors than data points themselves — which leads to curse-of-dimensionality issues, and workarounds such as encoding only the top N actors will leave the feature uninformative, since even a generous N will fail to capture the majority of actors. There are actually 624k unique actors in this dataset ( Jupyter Notebook ), the chart just becomes hard to read at that point.
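To make the dimensionality problem concrete, here is scikit-learn’s MultiLabelBinarizer on a toy cast list (the actor ids here are made up):

```python
from sklearn.preprocessing import MultiLabelBinarizer

# Each movie's cast becomes one binary row; each unique actor becomes a column.
casts = [
    ["nm_ford", "nm_fisher"],   # hypothetical movie 1
    ["nm_ford", "nm_hamill"],   # hypothetical movie 2
]
mlb = MultiLabelBinarizer()
X = mlb.fit_transform(casts)
# X is 2 rows x 3 columns here; with 624k unique actors the matrix would be
# 624k columns wide -- far more features than there are movies.
```

The encoding itself is trivial; the width of the resulting matrix is what makes it unusable at this cardinality.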
Additionally, most statistical modeling approaches cannot account for the ordering of actors, as they treat each feature as independent, and since the billing order of actors is generally correlated to their importance in the movie, that’s an omission of relevant information for the problem. These constraints gave me an idea: why not use an LLM to encode all movie data, and build a model using the downstream embedding representation? LLMs have attention mechanisms, which will not only respect the relative ordering of actors (to give higher predictive priority to higher-billed actors, along with actor cooccurrences), but also identify patterns within movie name texts (to identify sequels and related media semantically).

I started by aggregating and denormalizing all the data locally ( Jupyter Notebook ). Each of the IMDb datasets is hundreds of megabytes and hundreds of thousands of rows at minimum: not quite big data, but enough to be more cognizant of tooling, especially since computationally intensive JOINs are required. Therefore, I used the Polars library in Python, which not only loads data super fast, but is also one of the fastest libraries at performing JOINs and other aggregation tasks. Polars’s syntax also allows for some cool tricks: for example, I want to spread out and aggregate the principals (4.1 million rows after prefiltering) for each movie into directors, writers, producers, actors, and all other principals as nested lists, while simultaneously having them sorted by ordering as noted above. This is much easier to do in Polars than in any other data processing library I’ve used, and on millions of rows, this takes less than a second. After some cleanup and field renaming, here’s an example JSON document for Star Wars: Episode IV - A New Hope: I was tempted to claim that I used zero feature engineering, but that wouldn’t be accurate.
The selection and ordering of the JSON fields is itself feature engineering: for example, the fields with wildly varying lengths are intentionally last in this JSON encoding, while the prior fields are more consistent, which should make downstream encodings more comparable and consistent. Now, let’s discuss how to convert these JSON representations of movies into embeddings.

LLMs that are trained to output text embeddings are not much different from LLMs like ChatGPT that just predict the next token in a loop. Models such as BERT and GPT can generate “embeddings” out of the box by skipping the prediction heads of the models and instead taking an encoded value from the last hidden state of the model (e.g. for BERT, the first positional vector of the hidden state, representing the [CLS] token). However, dedicated text embedding models are optimized for the distinctiveness of a given input text document using contrastive learning. These embeddings can be used for many things, from finding similar encoded inputs by computing the similarity between embeddings, to, of course, building a statistical model on top of them.

Text embeddings that leverage LLMs are typically generated using a GPU in batches due to the increased amount of computation needed. Python libraries such as Hugging Face transformers and sentence-transformers can load these embedding models. For this experiment, I used the very new Alibaba-NLP/gte-modernbert-base text embedding model, finetuned from the ModernBERT model specifically for the embedding use case, for two reasons: it uses the ModernBERT architecture, which is optimized for fast inference, and the base ModernBERT model is trained to be more code-aware and should be able to understand JSON-nested input strings more robustly — that’s also why I intentionally left in the indentation for nested JSON arrays, as it’s semantically meaningful and explicitly tokenized.
3 The code ( Jupyter Notebook ) — with extra considerations to avoid running out of memory on either the CPU or GPU 4 — looks something like this: I used a Spot L4 GPU on Google Cloud Platform at a pricing of $0.28/hour, and it took 21 minutes to encode all 242k movie embeddings: about $0.10 total, which is surprisingly efficient.

Each of these embeddings is a set of 768 numbers (768D). If the embeddings are unit normalized (the normalization step above), then calculating the dot product between embeddings returns the cosine similarity of those movies, which can then be used to identify the most similar movies. But “similar” is open-ended, as there are many dimensions along which a movie could be considered similar. Let’s try a few movie similarity test cases where I calculate the cosine similarity between one query movie and all movies, then sort by cosine similarity to find the most similar ( Jupyter Notebook ).

How about Peter Jackson’s Lord of the Rings: The Fellowship of the Ring? Ideally, not only would it surface the two other movies of the original trilogy, but also its prequel Hobbit trilogy. Indeed, it worked and surfaced both trilogies! The other movies listed are about the original work, so their high similarity is fair. Compare these results to the “More like this” section on the IMDb page for the movie itself, which has the two sequels to the original Lord of the Rings and two other suggestions that I am not entirely sure are actually related.

What about more elaborate franchises, such as the Marvel Cinematic Universe? If you asked for movies similar to Avengers: Endgame, would other MCU films be the most similar? The answer is yes, which isn’t a surprise since those movies share many principals. However, there are instances of other movies named “Endgame” and “The Avengers” which are completely unrelated to Marvel, which implies that the similarities may be fixated on the names.
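The similarity lookup described above reduces to a single matrix product once the embeddings are unit-normalized; a toy NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
emb = rng.normal(size=(5, 8)).astype(np.float32)   # stand-in for 242k x 768 embeddings
emb /= np.linalg.norm(emb, axis=1, keepdims=True)  # unit-normalize each row

query = emb[0]               # e.g. the query movie's embedding
sims = emb @ query           # dot product == cosine similarity for unit vectors
ranked = np.argsort(-sims)   # indices of the most similar movies first
```

The query movie itself always ranks first with similarity 1.0, so in practice the top hit is dropped before presenting results.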
What about movies of a smaller franchise in a specific domain, such as Disney’s Frozen, which only has one sequel? Would it surface other 3D animated movies by Walt Disney Animation Studios, or something else? …okay, it’s definitely fixating on the name. Let’s try a different approach to see if we can find more meaningful patterns in these embeddings.

In order to visualize the embeddings, we can project them to a lower dimensionality with a dimensionality reduction algorithm such as PCA or UMAP: UMAP is preferred as it can simultaneously reorganize the data into more meaningful clusters. UMAP’s construction of a neighborhood graph, in theory, can allow the reduction to refine the similarities by leveraging many possible connections and hopefully avoid fixating on the movie name. However, with this amount of input data and the relatively high initial 768D vector size, the computation cost of UMAP is a concern, as each factor causes the UMAP training time to scale exponentially. Fortunately, NVIDIA’s cuML library was recently updated so that you can run UMAP with very high amounts of data on a GPU at a very high number of epochs to ensure the reduction fully converges, so I did just that ( Jupyter Notebook ).

What patterns can we find? Let’s try plotting the reduced points, colored by their user rating. So there are a few things going on here. Indeed, most of the points are high-rating green, as evident in the source data. But the points and ratings aren’t random, and there are trends. In the center giga cluster, there are soft subclusters of movies at high ratings and low ratings. Smaller discrete clusters did indeed form, but what is the deal with that extremely isolated cluster at the top? After investigation, that cluster only has movies released in 2008, which is another feature I should have considered when defining movie similarity.
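The projection step itself can be sketched on CPU with scikit-learn’s PCA standing in for cuML’s GPU UMAP (cuML exposes a similar fit_transform-style API; the data here is synthetic):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
emb = rng.normal(size=(500, 768)).astype(np.float32)  # stand-in embeddings

# Project 768D -> 2D for plotting; swap in UMAP for cluster-aware layouts.
coords = PCA(n_components=2).fit_transform(emb)
```

PCA is linear and will not form the discrete clusters UMAP does, which is why the post reaches for UMAP despite the extra compute.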
As a sanity check, I faceted out the points by movie release year to better visualize where these clusters are forming. This shows that even within the clusters, the movies’ ratings are spread out, but I unintentionally visualized how embedding drift changes over time. 2024 is also a bizarrely-clustered year: I have no idea why those two years specifically are weird in movies. The UMAP approach is more for fun, since it’s better for the downstream model building to use the raw 768D vectors and have the model learn the features from them. At the least, there’s some semantic signal preserved in these embeddings, which makes me optimistic that these embeddings alone can be used to train a viable movie rating predictor.

So, we now have hundreds of thousands of 768D embeddings. How do we get them to predict movie ratings? What many don’t know is that all methods of traditional statistical modeling also work with embeddings — assumptions such as feature independence are invalid so the results aren’t explainable, but you can still get a valid predictive model. First, we will shuffle and split the dataset into a training set and a test set: for the test set, I chose 20,000 movies (roughly 10% of the data), which is more than enough for stable results. To decide the best model, we will use the model that minimizes the mean squared error (MSE) of the test set, which is a standard approach to solving regression problems that predict a single numeric value. Here are three approaches for using LLMs to solve non-next-token-prediction tasks.

You can still fit a linear regression on top of the embeddings, even if the feature coefficients are completely useless, and it serves as a decent baseline ( Jupyter Notebook ). The absolute laziest “model”, where we just use the mean of the training set for every prediction, results in a test MSE of 1.637, but performing a simple linear regression on top of the 768D embeddings instead results in a more reasonable test MSE of 1.187.
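The two baselines can be sketched with synthetic data (in the real pipeline, X would be the 768D embeddings and y the IMDb average ratings):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)
X_train = rng.normal(size=(1000, 32)).astype(np.float32)
X_test = rng.normal(size=(200, 32)).astype(np.float32)
w = rng.normal(size=32)
y_train = X_train @ w + rng.normal(scale=0.5, size=1000)
y_test = X_test @ w + rng.normal(scale=0.5, size=200)

# Baseline 1: predict the training-set mean for every movie.
mean_mse = mean_squared_error(y_test, np.full(len(y_test), y_train.mean()))

# Baseline 2: ordinary linear regression directly on the embedding vectors.
lr = LinearRegression().fit(X_train, y_train)
lr_mse = mean_squared_error(y_test, lr.predict(X_test))
```

The mean predictor's MSE is simply the variance of the target around the training mean, which is why any model with real signal should beat it.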
We should be able to beat that handily with a more advanced model. Data scientists familiar with scikit-learn know there’s a rabbit hole of model options, but most of them are CPU-bound and single-threaded and would take a considerable amount of time on a dataset of this size. That’s where cuML—the same library I used to create the UMAP projection—comes in, as cuML has GPU-native implementations of most popular scikit-learn models with a similar API. This notably includes support vector machines, which play especially nice with embeddings. And because we have the extra compute, we can also perform a brute-force hyperparameter grid search to find the best parameters for fitting each model.

Here are the results of MSE on the test dataset for a few of these new model types, with the hyperparameter combination for each model type that best minimizes MSE: The winner is the Support Vector Machine, with a test MSE of 1.087! This is a good start for a simple approach that handily beats the linear regression baseline, and it also beats the model trained in the Redditor’s original notebook, which had a test MSE of 1.096 5 . In all cases, the train set MSE was close to the test set MSE, which means the models did not overfit either.

Since we’re already dealing with AI models and already have PyTorch installed to generate the embeddings, we might as well try the traditional approach of training a multilayer perceptron (MLP) neural network on top of the embeddings ( Jupyter Notebook ). This workflow sounds much more complicated than just fitting a traditional model as above, but PyTorch makes MLP construction straightforward, and Hugging Face’s Trainer class incorporates best model-training practices by default, although its compute_loss function has to be tweaked to minimize MSE specifically. The PyTorch model, using a loop to set up the MLP blocks, looks something like this: This MLP is 529k parameters total: large for an MLP, but given the 222k-row input dataset, it’s not egregiously so.
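A sketch of such an MLP regression head, building the hidden blocks in a loop as described; the layer sizes, activation, and dropout value here are assumptions, not the post’s exact 529k-parameter configuration:

```python
import torch
import torch.nn as nn

class EmbeddingMLP(nn.Module):
    def __init__(self, input_dim=768, hidden_dims=(512, 256, 128), dropout=0.5):
        super().__init__()
        blocks, prev = [], input_dim
        for dim in hidden_dims:  # build each hidden block in a loop
            blocks += [nn.Linear(prev, dim), nn.GELU(), nn.Dropout(dropout)]
            prev = dim
        blocks.append(nn.Linear(prev, 1))  # single predicted rating
        self.net = nn.Sequential(*blocks)

    def forward(self, x):
        return self.net(x).squeeze(-1)

model = EmbeddingMLP()
```

With these assumed sizes the model lands in the same sub-1M-parameter regime the post describes; training it under Hugging Face's Trainer only requires wrapping the forward pass so the loss returned is MSE.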
The real difficulty with this MLP approach is that it’s too effective: even with less than 1 million parameters, the model will extremely overfit and converge to 0.00 train MSE quickly, while the test set MSE explodes. That’s why dropout is set to an atypically high probability. Fortunately, MLPs are fast to train: training for 600 epochs (total passes through the full training dataset) took about 17 minutes on the GPU. Here are the training results: The lowest logged test MSE was 1.074, a slight improvement over the Support Vector Machine approach.

There is a possibility that using a pretrained embedding model that was trained on the entire internet could intrinsically contain relevant signal about popular movies—such as movies winning awards, which would imply a high IMDb rating—and that knowledge could leak into the test set and provide misleading results. This may not be a significant issue in practice, since the embedding model is small enough that it is unlikely to have memorized exact information. For the sake of comparison, let’s try training an LLM from scratch on top of the raw movie JSON representations to see if we can get better results without the possibility of leakage ( Jupyter Notebook ). I was specifically avoiding this approach because the compute required to train an LLM is much, much higher than for an SVM or MLP model, and leveraging a pretrained model generally gives better results. In this case, since we don’t need an LLM that has all the knowledge of human existence, we can train a much smaller model that only knows how to work with the movie JSON representations and can figure out relationships between actors and whether titles are sequels by itself.
Hugging Face transformers makes this workflow surprisingly straightforward by not only having functionality to train your own custom tokenizer (in this case, from a 50k vocab down to a 5k vocab) that encodes the data more efficiently, but also allowing the construction of a ModernBERT model with any number of layers and units. I opted for a 5M parameter LLM (SLM?), albeit with less dropout, since high dropout causes learning issues for LLMs specifically. The actual PyTorch model code is surprisingly more concise than the MLP approach: Essentially, the model trains its own “text embedding”, although in this case, instead of an embedding optimized for textual similarity, the embedding is just a representation that can easily be translated into a numeric rating.

Because the computation needed for training an LLM from scratch is much higher, I only trained the model for 10 epochs, which was still twice as slow as the 600 epochs for the MLP approach. Given that, the results are surprising: The LLM approach did much better than my previous attempts, with a new lowest test MSE of 1.026 after only 4 passes through the data! And then it definitely overfit. I tried other, smaller configurations for the LLM to avoid the overfitting, but none of them ever hit a test MSE that low.

Let’s look at the model comparison again, this time adding the results from training an MLP and training an LLM from scratch: Coming into this post, I genuinely thought that training the MLP on top of embeddings would be the winner, given the base embedding model’s knowledge of everything, but maybe there’s something to just YOLOing and feeding raw JSON input data to a completely new LLM. More research and development is needed. The differences in model performance across these approaches aren’t dramatic, but the iteration was indeed interesting, and it was a long shot anyway given the scarce amount of metadata.
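The custom-tokenizer step can be sketched with the Hugging Face tokenizers library; the training corpus here is a tiny stand-in for the real movie JSON strings, and the exact trainer settings are assumptions:

```python
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

# Train a small BPE vocabulary (the post targets a ~5k vocab) on movie JSON.
tokenizer = Tokenizer(models.BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel()
trainer = trainers.BpeTrainer(
    vocab_size=5000,
    special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"],
)
corpus = ['{"title": "Star Wars", "genres": ["Action", "Sci-Fi"]}'] * 100  # stand-in
tokenizer.train_from_iterator(corpus, trainer=trainer)

encoded = tokenizer.encode('{"title": "Frozen", "genres": ["Animation"]}')
```

A domain-specific vocabulary like this encodes the repetitive JSON scaffolding in very few tokens, which is part of why a tiny from-scratch model can afford to process the full documents.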
The fact that building a model off of text embeddings alone didn’t result in a perfect model doesn’t mean this approach was a waste of time. The embedding and modeling pipelines I constructed in the process of trying to solve this problem have already paid significant dividends on easier problems, such as identifying the efficiency of storing embeddings in Parquet and manipulating them with Polars. It’s impossible and pointless to pinpoint the exact reason the original Reddit poster got rejected: it could have been the neural network approach, or even something out of their control, such as the company actually stopping hiring and being too disorganized to tell the candidate. To be clear, if I myself were to apply for a data science role, I wouldn’t use the techniques in this blog post (that UMAP data visualization would get me instantly rejected!) and would instead do more traditional EDA and non-neural-network modeling to showcase my data science knowledge to the hiring manager. But for my professional work, I will definitely try starting any modeling exploration with an embeddings-based approach wherever possible: at the absolute worst, it’s a very strong baseline that will be hard to beat.

All of the Jupyter Notebooks and data visualization code for this blog post are available open-source in this GitHub repository.

I am not a fan of using GBT variable importance as a decision-making metric: variable importance does not tell you the magnitude or direction of the feature in the real world, but it does help identify which features can be pruned for model development iteration.  ↩︎ To get a sense of how old they are, they are only available as TSV files, a data format so old and prone to errors that many data libraries have dropped explicit support for it. Amazon, please release the datasets as CSV or Parquet files instead!  
↩︎ Two other useful features of gte-modernbert-base, not strictly relevant to these movie embeddings, are a) it is a cased model, so it can identify meaning from upper-case text, and b) it does not require a task prefix such as search_document or search_query, as nomic-embed-text-v1.5 does to guide its results, which is an annoying requirement for those models.  ↩︎ The trick here is the function called on the computed embeddings; otherwise the GPU doesn’t free up the memory once they are moved back to the CPU. I may or may not have discovered that the hard way.  ↩︎ As noted earlier, minimizing MSE isn’t a competition, but the comparison on roughly the same dataset is good for a sanity check.  ↩︎

title.basics.tsv:
- tconst: unique identifier of the title
- titleType: the type/format of the title (e.g. movie, tvMovie, short, tvSeries, etc.)
- primaryTitle: the more popular title / the title used by the filmmakers on promotional materials at the point of release
- isAdult: 0: non-adult title; 1: adult title
- startYear: represents the release year of a title
- runtimeMinutes: primary runtime of the title, in minutes
- genres: includes up to three genres associated with the title

title.ratings.tsv:
- tconst: unique identifier of the title (which can therefore be mapped to movie metadata using a JOIN)
- averageRating: average of all the individual user ratings
- numVotes: number of votes the title has received

title.principals.tsv:
- tconst: unique identifier of the title (which can be mapped to movie data using a JOIN)
- nconst: unique identifier of the principal (this is mapped to name.basics.tsv to get the principal's primaryName, but nothing else useful)
- category: the role the principal served in the title, such as actor, actress, director, writer, etc.
- ordering: the ordering of the principals within the title, which correlates to the order the principals appear on IMDb's movie cast pages

0 views
maxdeviant.com 1 year ago

Crouching Tiger, Hidden Dragon

I've seen bits and pieces of this movie over my life, but had yet to sit down and watch the entire thing. I remember when I was a kid my parents had some Chinese friends over to watch it. I was watching along with them, captivated by the action sequences. But I had done something to one of my siblings earlier that day that resulted in my being banished to the 阳台 (balcony) and was unable to finish the rest of the movie. Then in high school I also got to watch the first bit of the movie in Chinese class, but once again never got to finish it.

When I saw it listed on Max, I was worried that it might be an English dub, but thankfully it had the original Mandarin audio with English subtitles. Amusingly, I had to turn off the second set of subtitles in the app so that they wouldn't double up with the in-movie subtitles.

Watching it as an adult, I was able to actually follow and get invested in the plot. The action sequences—especially those towards the beginning of the movie—were just as I remember them: tight martial arts combat and fanciful acrobatics. The camera work during these fight scenes is phenomenal and really lends to the action. In the downtime between fights I found myself getting immersed in the world, aided by the elaborate set pieces and shots of beautiful landscapes. You really feel transported back to ancient China. I think the movie has held up very well for being 24 years old at this point. The wire fu might seem a bit cheesy to someone unfamiliar with 武俠 (wuxia) media, but it took me right back to the glimpses of 武俠 TV shows that I've seen over the years.

0 views