Posts in Android (11 found)
Ivan Sagalaev 1 week ago

Pet project restart

So what happened was, I had developed my shopping list to the point where it got useful to me, after which I lost interest in working on it. You know, the usual story… It was, however, causing me enough annoyances to still want to get back to it eventually. So a few weeks ago, after not having done any programming for a year, I finally broke through the dread of launching my IDE again and started slowly fixing the accumulated bitrot. And over the last several days I've been having a blast implementing some really useful stuff and feeling the familiar thrill of being in the flow.

Since I was mostly focused on making the app useful, I didn't pay a lot of attention to the UI, so most of the annoyances were caused purely by my not wanting to spend much time fighting Android APIs. Here's one of those. The app keeps several shopping lists in a swipeable pager, and at the same time swiping is how you remove items from the list while going through the store. The problem was that swiping individual items was really sensitive to a precise finger movement, so it would often be intercepted by the pager instead, which would switch to the next list. That's fixed now (with an ugly hack).

But the biggest deficiency of the app was that it didn't let me get away from one particular grocery store that I had started to rather dislike. You might find it weird that some app could exert such control over my actions, but let me explain. It all comes down to three missing features…

The central feature of my app is remembering the order in which I buy grocery items. This means I need a separate list for every store, as each one has a different physical layout. By the time I was thinking of switching to another store, I already had an idea for a new evolution of the order training algorithm in the app, and a new store would be a great dogfooding use case for it. So I've got a sort of mental block: I didn't want to switch stores before I implemented this new algorithm.
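The post doesn't describe the order training algorithm itself, so here is only a hypothetical sketch of the general idea: remembering a per-store item order from observed shopping trips via pairwise precedence counts. The `OrderModel` name and everything in it are my invention, not the app's actual code:

```python
from collections import defaultdict
from functools import cmp_to_key

class OrderModel:
    """Learns a per-store item order from completed shopping trips."""

    def __init__(self):
        # before[(a, b)] = how often item a was picked up before item b
        self.before = defaultdict(int)

    def observe(self, trip):
        # Record pairwise precedence for one trip (items in pickup order).
        for i, a in enumerate(trip):
            for b in trip[i + 1:]:
                self.before[(a, b)] += 1

    def sort(self, items):
        # Order a new list by net precedence votes; unseen pairs compare equal.
        def cmp(a, b):
            return self.before[(b, a)] - self.before[(a, b)]
        return sorted(items, key=cmp_to_key(cmp))
```

After observing a couple of trips through the same store, sorting a fresh list puts items in roughly the order you'd walk past them. A real implementation would need to handle conflicting observations and drifting store layouts, which is presumably where the "new evolution" of the algorithm comes in.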
Over some years of using the app with a single store, I've been manually associating grocery categories with products ("dairy", "produce", etc.). They are color coded, which makes the list easier to scan visually. But starting a new list for another store meant that I would either need to do it all again for every single item, or accept looking at a dull, unhelpful gray list. What I really needed was some smart automatic prediction, but I didn't have it.

I usually collect items in a list over a week for an upcoming visit to the store, and sometimes I realize that I need something that it simply doesn't carry, or that my other errands would make it easier to go to another store. At this point I'd like to select all the items in a filled-up list and move them to another, which the app also couldn't do. See, it all makes sense!

Now, of course it wasn't literally impossible for me to go to other stores, and on occasion I did; it just wasn't very convenient. But these are all pretty major deficiencies, and I'm not ready to offer the app to other people without them sorted out.

Anyway… Over the course of three weeks I implemented two of those big features: category guessing and cross-store moves. And I convinced myself that I can live with the old ordering algorithm for a while. So now I can finally wean myself off of the QFC on Redmond Way (which keeps getting worse, by the way) and start going to another QFC (a completely different experience).

All the categories (item colors) you see in the screencaps above were guessed automatically. My prediction model works pretty well on my catalog of 400+ grocery items: the data comes from me tagging them manually while doing my own shopping these past 4 years. And this also means, of course, that it's biased towards what I tend to buy. It doesn't know much about alcohol or frozen ready-to-eat foods, for example. I'm planning to put up a little web app to let other people help me train it further. I'll keep y'all posted!
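The actual prediction trick stays under wraps until the web app launches, so this is only a minimal sketch of how name-based category guessing can work with no ML at all: each word of an item's name votes for the categories it was manually tagged with before. `CategoryGuesser` and all names here are assumptions of mine, not the author's method:

```python
from collections import Counter, defaultdict

class CategoryGuesser:
    """Guesses a grocery category from an item's name by token votes."""

    def __init__(self):
        # token -> Counter of categories that token has appeared under
        self.votes = defaultdict(Counter)

    def learn(self, name, category):
        # Every word of a manually tagged item votes for its category.
        for token in name.lower().split():
            self.votes[token][category] += 1

    def guess(self, name, default="uncategorized"):
        # Sum the votes of all known tokens; pick the winning category.
        tally = Counter()
        for token in name.lower().split():
            tally.update(self.votes[token])
        return tally.most_common(1)[0][0] if tally else default
```

With a few hundred tagged items, this kind of lookup already covers most new entries ("oat milk" inherits "dairy" from other milks), which matches the post's point that no weight calibration is needed.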
One important note though… No, it's not a frigging LLM! It's technically not even ML, as there is no automatic calibration of weights in a matrix or anything. Instead it's built on a funny little trick I learned at Shutterstock while working on a search suggest widget. I'll tell you more when I launch the web app.

When I started developing the app, I used the official UI toolkit documented on developer.android.com. It's a bunch of APIs with the feel of a traditional desktop GUI paradigm (made insanely complicated by Google "gurus"). Then the reactive UI revolution happened, and if you wanted something native for Android, it was represented by Flutter. Now they're recommending Compose. I'm sure both are much better than the legacy APIs, but I'm kind of happy I wasn't looking in this space for a few years and wasn't tempted to rewrite half the code. Working in the industry made me very averse to constant framework churn.

I'm not making any promises, but as the app is taking shape rather nicely, I'm again entertaining the idea of actually… uhm… finishing it. Which would mean beta testing, commissioning professional artwork and finally selling the final product.

Danny McClelland 5 months ago

2025 Privacy Reboot: Six Month Check-In

Six months ago, I wrote about my privacy reboot — a gradual shift toward tools that take both privacy and security seriously. It was never about perfection or digital purity, but about intentionality. About understanding which tools serve me, rather than the other way around. Here’s how it’s actually gone.

The Wins

Ente continues to impress. The family photo migration is complete, and the service has been rock solid. The facial recognition quirks I mentioned on Android have largely sorted themselves out, and the peace of mind knowing our family memories aren’t feeding Google’s advertising machine feels worth the subscription cost.

Kaushik Gopal 7 months ago

Taildrop - transfer files between Android and macOS

I use the Pixel 9 Pro as my daily driver and love it. But one notable feature I’ve always missed from the Apple ecosystem is AirDrop. I typically have WhatsApp for Mac open and dump things there for quick access between devices. Clunky, but it worked. Until I realized Tailscale — a VPN [1] that I love and use — has a feature called Taildrop. The name suggests it’s meant as an AirDrop competitor, and I’ve been super happy with it. I can now send images uncompressed [2] to any of my devices. I’m not restricted by Bluetooth, proximity, or other limitations. Just need Tailscale running on the device. Probably the highlight here is that Taildrop works on any device that supports Tailscale. So between a Pixel, iPhone, MacBook, Windows machine — no walls here. There’s no filesize limit. In all fairness, AirDrop doesn’t have one either. Another alpha feature they’re working on, similar to Taildrop, is Taildrive. It’s effectively a shared folder on the internet. If you’re on the same network (Tailnet), you can download files from this folder from anywhere. This basically allows any folder on your device to become a mini NAS or file server. Bonkers! It’s not all sunshine and rainbows though. There are some constraints and issues with Taildrop. Even if you add another user to your Tailscale network, you can still only send files to yourself. I imagine this is still an early restriction, but the good news is that Tailscale is thinking about relaxing this over time (maybe). This is clearly an area where AirDrop is superior, since it can more conveniently allow you to send files to anyone nearby. Note this doesn’t apply to Taildrive, just to Taildrop. With Taildrive, anyone on your Tailnet can access the shared files. AirDrop handles sharing URLs or links elegantly. Share a URL from your phone to computer, and it automatically opens in your default browser. You can’t share links with Taildrop yet.
When Taildropping from Android, I’ve noticed it occasionally caches the previous files that were sent. If you’re not carefully watching the preview, you can end up sending the wrong file. In all fairness, Tailscale has labeled it as an alpha, so they get a pass on this. Their solutions are typically rock solid, so I fully trust that when they graduate it from alpha, it’ll be production ready. Despite these limitations, Taildrop has become an essential tool in my cross-device workflow. For Android users longing for AirDrop-like functionality, it’s worth setting up Tailscale for this feature alone. And if you’re already using Tailscale for other purposes, enabling Taildrop is a no-brainer.

[1] VPN might conjure up a different picture here. Think of it simply as a secure tunnel to any of your other devices. ↩︎
[2] One of the notable problems with my lazy WhatsApp strategy. ↩︎

Kaushik Gopal 9 months ago

Age of the AI phone

In Vinay’s latest newsletter, he asks a few of us #AndroidDev to predict what the future of Android development is going to look like. Yours truly had this as one of the predictions: AI everything 🙄 … On the product side, we’ll see more on-device AI, with smaller models like Gemini Flash/o3-mini running locally to provide operator-like intelligence directly on phones, and this will probably be what most folks are geared towards doing for mobile development. Looking at this YouTube video that’s now doing the rounds, I’m starting to feel ever so slightly validated. Google (or Samsung) truly has a shot at gaining significant mobile market share again, especially given how much Apple has been floundering in the AI space.

Luke Hsiao 9 months ago

How to connect MTP to a kids profile on a Fire tablet

This is mostly a note-to-self. I’m a father with a kid that is starting to get old enough to appreciate some screen time. To that end, we got a cheap Amazon Fire tablet (sidenote: it seems Amazon has a corner on this market). It was nice; we set up a kid’s profile, disabled the store, disabled calling, disabled the web browser, and shared VLC (for personal media) and AnkiDroid (for educational flashcards). But then I couldn’t figure out for the life of me how to get media onto the internal storage in a way that was visible to that kid profile! Here’s the trick (step 3 in particular was not obvious to me):

1. Log in to the kid profile.
2. Bring down the settings menu by dragging down from the top.
3. Press and hold the Bluetooth icon until a PIN prompt appears.
4. Enter your PIN.
5. The “Connected devices” menu will open, where you can change the USB setting to File transfer.

Now the internal storage reflects this kid profile specifically.


Airplane Mode for Glass

I've built my first little piece of software for Google Glass. I flew home from SF yesterday and realized that there was no way (short of installing a very crashy Settings.apk) to enable Airplane Mode on my Glass. That seemed like a reasonable enough "small" starter project. This is really, really only for folks who are already comfortable running third-party apps on their Glass. If you don't know how to sideload apps with adb, please don't install this. You can grab the initial build at: https://www.dropbox.com/s/rtbt7vc3bz67j3c/GlassAirplane-0.1.apk Source lives at https://github.com/obra/GlassAirplane Patches welcome!


Recon MOD Live - a hackable $300 Android wearable (sort of)

I got a MOD Live HUD from Recon Instruments today. It is, indeed, running Gingerbread. As it turns out, if you can get it to install an update.zip, it won't check the signatures. Which means it's pretty easy to root :) Cracking it open, the display is a Kopin, though I don't yet know which model. It's designed as a look-around display. The prism has a mirrored backing and is wrapped in black plastic. The mirror coating on the prism comes off quite easily with a bit of rubbing alcohol. So yeah, rootable Android (2.3) wearable computer. $300. More details as the story develops.


Google I/O 2013 & Google Glass

Growing up and hanging out near Cambridge, MA, I was always fascinated by the "mediaborgs" - the folks around the Media Lab who were building and using wearable computers. I spent a lot of time trying to figure out how I could get myself a rig. At the time, the $1000+ for a heads-up display was more than I could pull off. I played around with sticking the tiniest laptop I could find (and even a bit of PC104 kit) in a bag and using a Twiddler and Emacs with T.V. Raman's emacspeak to have a walking-around computing environment with an audio interface. It was pretty neat, but incredibly clunky. I never really got the hang of it.

Over the years, I made a bunch of half-hearted attempts to get my hands on a head-mounted display that was functional enough to use and small enough to actually wear. I'd occasionally look around to see if anyone was selling something that seemed workable. Occasionally, I'd poke at http://tekgear.com/hmd.html to see if there was anything that looked reasonable. Generally, though, the cheap options cost around $1000 and are really intended for immersive video or gaming experiences. (Or they're upwards of $10,000 and intended for defense and industrial applications.)

Needless to say, Google Glass somewhat piqued my interest. Google aren't yet making Glass available to folks like me who played the #ifihadglass game. It sounds like they're just getting a handle on the initial production run for folks who were at Google I/O last year. I got to try Glass pretty early in the conference. The friend demoing it for me was pretty happy with his, but the functionality he was able to show me was... very basic. To a first approximation, all you can do with the current "Mirror" API is push snippets of text or HTML+CSS to be displayed in the upper-right corner of the wearer's vision. At one of the early Glass talks at I/O, the speaker mentioned that a "GDK" to allow native development was coming soon. Glass is, indeed, Android under the hood.
(4.0.x for now.) Suddenly, this was looking a little more interesting. A couple weeks ago, +Jay Freeman (@saurik) made news by finding an exploit that allowed him to gain root on his Glass. Since then, there's been a bit of a hacker scene growing up around Glass. At dinner on Thursday, I saw a demo of a patched version of Glass Home running on an Android phone. I've heard reports of a homebrew Glass lock-screen app with an improved guest mode, too.

The one session at I/O that I was not going to miss was +Hyunyoung Song & +P.Y. Laligand's talk on "Voiding your warranty: Hacking Glass" (linked below, since I can't figure out how to inline it in G+). Having been shut out of a few over-full sessions earlier in the conference, I went and sat second-row center at the previous session in the same room -- and learned a bunch of useful stuff about what's coming in Google Analytics. (During the GA session, I was seated next to a guy who looked to be trying to get his new Chromebook Pixel into developer mode. I... tried to be helpful. I was a little bit embarrassed to realize that he was none other than +Liam McLoughlin (@hexxeh), who, uh, knows a little bit about ChromeOS.) Getting there early was a good call. The session was packed. Really packed.

H.Y. and P.Y. demoed how to use adb to push a launcher app and a settings app to your Glass and how to pair a Bluetooth HID device (which just works), and talked a little bit about what one can do by treating Glass as just a "regular" Android device. Porting +K-9 Mail looks incredibly plausible. I'm really glad we never gave up on QVGA support. Then they got into the good stuff. How to unlock and root your Glass. It's... really easy. And exactly how you'd assume you'd do it. https://plus.google.com/118132270929426815661/posts/4MyjGZhN575 is a cameraphone shot of the slide from their deck. To explain just how far one could go, H.Y. and P.Y.
demoed that one could use one of the Linux installers on the Play Store to install an Ubuntu chroot on Glass. They said that they'd gotten the idea for the demo from +Greg Priest-Dorman, who "does his development in Emacs on Glass." The world of computing is a very small place. I remember corresponding with Greg when he was at Vassar in the late '90s. If I recall correctly, my friend +Dave Barker mentioned to Greg that I had a Twiddler I hadn't fallen in love with. He was hoping to get to try it out, and I was a flaky Wesleyan undergrad, though I'm pretty sure we met and he showed me his wearable when I finally got up to visit friends at Vassar. Chatting with a few other Googlers, it sounds like there's a fair contingent of Glass developers who use Emacs (and possibly emacspeak) on Glass.

So yeah, after the Hacking Glass session, I... really, really want to get back to wearables stuff. As soon as I can get my hands on a Glass, I will. I think I've found something to tide me over. On more than one occasion, Artur Bergman has told me how amazingly amazing his ski goggles with a heads-up display are. They have a bunch of skiing-related sensors. I just sort of assumed that they had some little microcontroller and a custom OLED superimposed on the faceplate. I was wrong. The folks who make the goggles, http://reconinstruments.com, were exhibiting at I/O. Their next-gen product, "Jet", is a Glass-esque setup with a (not-see-through) HMD, an HD camera, bluetooth, wifi, and a gigahertz ARM chip running what they say will be a fully unlocked build of Jellybean capable of running regular Android apps. It's going to ship "later in 2013" for "less than a thousand dollars." So, that's pretty cool. But I can't have one today. As I talked to them a bit more about their existing product, I found out that it was... not quite what I expected. It's a QVGA (320x240) display that they say looks like a 14" screen 5 feet away.
It's powered by... a device running (the slightly dated) Android Gingerbread. (In a previous version of this post, I accidentally said it was running Froyo.)

Me: "So, I could buy a set of your ski goggles for $449 and rip them apart and get a wearable computer running Android with a heads-up display."
Guy from Recon: "Well, you could. The HUD is designed to be taken out of one set of ski goggles and put in new goggles when you upgrade. But, that's kind of a pain in the neck. It'd be easier and cheaper just to buy the HUD from our webshop as a standalone unit. It's $300."
Me: "..."
Me: "..."
Me: "And this is shipping? I can order it today and you already have them in stock?"
Recon: "Oh yeah, I mean this is the old model. It's been out for a while. We've actually discounted it from $400 to $300. It's running Froyo. The new one is much nicer and will be out later this year."
Me: "Please take my money"

So yeah. $300 wearable Android device. Has been shipping for quite a while. You can buy one today. I ordered mine before blogging about it. I'll report back once I've gotten to play with it. To answer the obvious question: Yes, I will be building a version of K-9 Mail for heads-up displays. To answer the other obvious question: Yes, I'm going to be playing with building a Bluetooth input device or two for Android wearables.
