FrameQuery Figma plugin: Like CSS container queries in your Figma components and frames

I currently lead an enterprise digital Product Design team, and I'm also design co-lead (together with my dev co-lead) for our Design System. We already have multiple UI (code) components that use container queries. This is gonna be soooooooooo helpful on the design model side, because with this plugin, our designers don't have to remember when to manually swap layouts on our models.

First iteration took about 10-12 hours total, and about 100 credits on my personal Pro account. Fixing the bugs took another 200 or so credits. Learned quite a bit of stuff along the way though. Adding support for components imported into libraries (without which the plugin is basically pointless) took another 300 or so credits.

Still need to get it cleaned up; it's a mess, but at least the core bugs are fixed.

This is a working copy, and here's how to use it, if you're curious. :)

Load & use FrameQuery Figma Plugin

  1. In the Figma desktop app ONLY, open a Figma file with the components that you want frame queries on.
  2. Right-click on empty space in Figma canvas: Plugins > Development > Import Plugin from Manifest.
  3. In FrameQuery 1.0.30 > new-plugin, select manifest.json.
  4. Click on "Nugget's Frame Queries" and the plugin should load.
  5. In the component that you want to have frame queries on, add a new property called FQ-size. It needs to be exactly this string, including capitalisation. You can name the variants anything you like, for a given value of "anything": spaces are supported, but some other characters might not be.
  6. FrameQuery should dynamically pull your variants from anything you set in FQ-size.
  7. With the component selected, turn FrameQuery on.
  8. Set your breakpoints. A max is needed even for your biggest breakpoint; just make it something silly like 9999px. (There's a rough sketch of how the width matching works just after this list.)
  9. Pop a component instance onto the canvas, stick it in a frame, and resize the frame. The current version only cares about width, but I might add height support later.
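
The plugin itself runs as JavaScript inside Figma, so this isn't its actual code, but the breakpoint matching it needs is simple enough to sketch in a few lines of Python. The variant names and pixel values below are made up for illustration: given the width of the containing frame, pick the first FQ-size variant whose max the width fits under.

    # Rough sketch of width-based breakpoint matching - NOT the plugin's
    # actual code. Variant names and maxima are examples only.
    BREAKPOINTS = [
        ("small", 320),    # FQ-size variant "small" applies up to 320px wide
        ("medium", 768),   # "medium" applies from there up to 768px
        ("large", 9999),   # the biggest breakpoint still needs a max, hence 9999
    ]

    def pick_variant(frame_width: float) -> str:
        """Return the FQ-size variant to apply for a containing frame's width."""
        for variant, max_width in BREAKPOINTS:
            if frame_width <= max_width:
                return variant
        return BREAKPOINTS[-1][0]   # wider than every max: keep the last variant

    print(pick_variant(500))   # -> "medium"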

FrameQuery appends 🤖 to component names that have FQ enabled, and prepends 🤖 to frames that contain component instances with FQ enabled, so we can keep track of 'em without messing up how our prototypes look in demos.

FrameQuery also works with components imported from libraries (1.0.30)

  1. Follow steps 1-8 from above in your library file, and publish the library.
  2. Close the library file (you don't need to have it open).
  3. In your target file, load FQ.
  4. Pop in the component instance from your library file, just like you normally would. This is the component with 🤖 appended to its name.
  5. Pop the component instance in a frame.
  6. 🤖 is prepended to the frame name, and the frame is now responsive.

BDO Barter Planner

Proliferate little web apps, proliferate! Doesn't make up for the crap that's polluting the interwebs in terms of content, but I guess at least I can have my own little web apps now.

I got sick of writing the same stuff in Notepad over and over, and realised, "Hey, now I can get an LLM (via Windsurf) to write this really simple thing for me!"

Behold, the Black Desert Online (BDO) Barter Planner!

Bartering is basically a trading (mini) game. You sail around trading stuff (...bartering it). BDO has a really really big map, and it's almost all non-instanced, so you can (literally) sail around for hours if (a) you want to for some reason, or (b) you're bartering or hunting sea monsters.

In the "Item" column, I have the Item (survival kit) that I need to bring to a particular Location (Arakil island) to barter. It's grade "g" for green, and what I'll get in return for the barter is "box". The number of the item I need to bring is in the Quantity column.
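
Just to make the columns concrete, here's that example row written out as a Python dict. The field names and the quantity are invented for illustration; the actual planner is a plain HTML table with these columns.

    # One planner row, spelled out for illustration. Field names and the
    # quantity are made up; the real app is just an HTML table.
    row = {
        "location": "Arakil Island",
        "location_code": "A1",     # arbitrary code I assigned (hypothetical here)
        "item": "survival kit",    # what I bring to the barter
        "grade": "g",              # "g" for green
        "gets": "box",             # what I get in return
        "quantity": 3,             # example number only
    }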

This is a far-from-optimised setup. I don't even try to optimise distance and time, except to my own lackadaisical playstyle of "are those things in the same general vicinity". It doesn't have Margoria or Valencia nodes because I've <.<; memorised all of those. Plus the "Crow Coin" option in the Barter UI in-game renders it unnecessary to track those nodes for trading. At least for me.

Features

  • Easily track the trade goods you need for your non-Margoria barters, grouped by proximity.
  • Search and filter by location names and codes.
  • View location codes (arbitrarily assigned by me) by clicking on the "Location code" column header.
  • Clear rows when you've completed the barter.
  • Sort rows to the top as you fill them in, so your to-do barters are always visible.
  • Data is saved in localStorage, so you can close and reopen the file without losing anything. Data is removed when you clear it.
  • Runs purely locally on your machine.
  • No installation needed. Just unzip the file and open it in a web browser.

Download


Droplet, aka ChatGPT (via Windsurf) wrote me a knock-off AirDrop/Snapdrop! :D

First iteration / MVP

Windsurf told me how to install Python, and wrote the base HTML and JS, plus the PY file needed to run the Droplet server locally. My original idea was to use the web browser's localStorage, but that didn't work out, not least because the amount of data I could store that way is puny. The first iteration was very ugly and unfriendly, as the text/instructions were written in a way that made sense only to me.
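
Windsurf wrote the actual server and I'm not going to pretend to reproduce it, but for a sense of how little is involved, a bare-bones local upload server in Python can look something like this. I'm using Flask here purely as an example; the folder name, port, and form field are all made up.

    # Bare-bones local file-upload server, sketched with Flask (a third-party
    # package). This is NOT Droplet's actual code - just an illustration.
    import os
    from flask import Flask, request
    from werkzeug.utils import secure_filename

    UPLOAD_DIR = "uploads"                    # hypothetical folder name
    os.makedirs(UPLOAD_DIR, exist_ok=True)

    app = Flask(__name__)

    @app.route("/upload", methods=["POST"])
    def upload():
        f = request.files["file"]             # file posted by the browser form
        f.save(os.path.join(UPLOAD_DIR, secure_filename(f.filename)))
        return "ok"

    if __name__ == "__main__":
        # 0.0.0.0 so other devices on the LAN (like a phone) can reach it
        app.run(host="0.0.0.0", port=8000)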

After testing with my partner helping to upload files (it worked, yay MVP), it was refinement time.

Later iterations

  • Add determinate loading bar during upload.
  • Make it prettier.
    Hm! I've heard about classless styling/boilerplate HTML frameworks, maybe I can use one of those!
    I ended up using the very lovely Water.css
  • Add understandable-to-humans instructions.
  • Support folder uploads.
  • Delete uploaded files.
    From the web app, instead of me deleting directly from Droplet's "upload" folder.
  • Display a human-readable network device name for the server.
  • Improve instructions (round 2).
  • Add QR code for easier mobile device access. (There's a rough sketch of this and the device-name bit just after this list.)
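
I doubt the real implementation looks quite like this, but two of the bullets above (the device name and the QR code) boil down to something like the following in Python. The port, filename, and lookup approach are assumptions on my part.

    # Show a human-readable name for the serving machine and generate a QR code
    # pointing at it. NOT Droplet's actual code - a sketch of the idea.
    # Requires the third-party "qrcode" package (pip install qrcode[pil]).
    import socket
    import qrcode

    hostname = socket.gethostname()              # e.g. "my-laptop"
    ip = socket.gethostbyname(hostname)          # LAN address (may need tweaking on some setups)
    url = f"http://{ip}:8000"                    # port is an example value

    print(f"Droplet is running on {hostname} at {url}")
    qrcode.make(url).save("droplet-qr.png")      # scan this from a phone
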
This was great fun, and I did manage to pick up a little bit about server-side code, and improve my (very poor) JS knowledge a little to boot. And of course, now I have a Snapdrop replacement. ;)

Get a Droplet of your very own. :D

LLMs: The best software development tutor ever - with big caveats

I've recently started learning Python for fun, and I've manually copy-typed my way to my very first Streamlit app - CatGPT Nekomancer!

In the process, I've discovered some fascinating things about Large Language Models (LLMs) like ChatGPT, and how they fit into learning a new programming language.

I'm particularly tickled by how I (mostly) implemented CatGPT Nekomancer by blindly following instructions, and then used it to understand what I'd done. There's just something magical about making something, and then having it teach you how you made it.

LLMs are great at explaining what a piece of code does

This is because they're functioning purely as "translators". The translation task plays to the strengths of LLMs - statistical pattern matching. Good judgement is not needed, because we're not concerned with "how" or "should".

Here's an example of CatGPT explaining its source code, in response to the prompt "explain what this code does", followed by the code.

This explanation was a little too high-level for me. I wanted to really understand what each line of code was doing. This was easily fixed with a different prompt: "explain this code line by line to a novice programmer". This gave me exactly the level of detail I needed.

I've worked with some pretty great software developers from all over the world who aren't native English speakers. My partner and I are both bilingual, and we'd previously experimented a little with using LLMs as translators for recipes, where we found that, as long as we could steer around the LLM's tendency to interpolate (aka "hallucinate" aka "lie"), it did a much better job of translating than Google Translate.

So when my partner asked, "What if we get it to explain in Croatian? This would have been HUGE for me when I was learning," it was a no-brainer to give it a try with this prompt: "explain this code line by line to a novice programmer, in Croatian." For Croatian, at least, my partner verified that CatGPT's translation and explanation were "brilliant".


I can also ask the LLM to explain specific parts of the code that I don't understand, and learn new concepts that way. For example, I wasn't familiar with the concept of f-strings in Python, which I encountered when working on a different Python experiment. Thanks to CatGPT, I was very quickly able to understand that f-strings are strings that can hold expressions - nifty! I particularly love how the LLM fits so seamlessly into my personal learning "flow". Instead of having to go off to the greater interwebs to trawl through answers about f-strings in Python to figure it out from there, I have my own personal tutor.
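
In case f-strings are new to you too, they really are just string literals that evaluate the expressions inside their curly braces:

    # f-strings: string literals that can embed expressions inside {braces}
    name = "CatGPT"
    lives = 9
    print(f"{name} has {lives} lives")          # -> CatGPT has 9 lives
    print(f"{name} has {lives * 2} whiskers")   # arbitrary expressions work too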

LLMs increase the value that good software developers bring to the table

It's generally agreed - at least among software development managers and similar roles - that a good developer can pick up a new language pretty quickly and competently. That's because the core of what makes a good developer isn't the knowledge of a particular language. Rather, it's their grasp of transferable concepts, frameworks, and understanding of best practices as principles. It's not about rote memorisation - it's about good judgement powered by understanding and experience.

Before LLMs exploded on the scene, a good developer could have everything I listed above, but when picking up a new language, or working with one they're rusty at, there'd still be a lag while they picked up the basics of the language (syntax and so on). With LLMs, that lag is much smaller, allowing the developer to bring their strengths to bear much faster.

Caveat 1: LLMs are bad at advising on how a feature or function should be implemented

LLMs are based on statistical pattern matching, which makes them great at translation. It's also what makes them bad at anything that requires judgement calls based on a larger and often ambiguous context. They're not always wrong about "shoulds"; they're just right far less often than the average human developer.

I believe that this is also what makes LLMs very weak at software development stuff that's presentational, or isn't primarily about logic. HTML, CSS, and web accessibility all fall into the bucket of not being about logic, as well as operating in a large and ambiguous context. It also probably doesn't help that LLMs have ingested the interwebs, and even today, there are probably loads more sites styling button text with <span> tags than sites using the correct approach. It's not like the LLM can tell which approach is better. After all, it's not thinking - it's pattern matching based on statistics.

Caveat 2: LLMs can't coach or mentor

Real "personalisation" is needed for coaching and mentoring. Both of these require human judgement and experience about the subject matter, as well as the individual receiving the coaching or mentoring. They also require (arguably to a lesser extent) a wish on the part of the coach or mentor for the person they're working with to learn and succeed. The simulated thing that we've come to call "personalisation" (e.g. a script grabbing your name from a database) does not and cannot work in this context.

The key to using LLMs effectively when learning software development: Know what you need

If you need explanations on what a piece of code does, LLMs are a great and reliable help. Even more so if you're not a native English speaker - LLMs can function as your own personal translator for both language and code.

If you need good advice on what you should implement, then LLMs aren't going to help much. Quite the opposite. Since they lack judgement (indeed, they do NOT judge), what they come up with is likely to be misguided at best.

It all comes down to the age-old "common sense" wisdom of using the right tools for the job. :)