tag:nugget.posthaven.com,2013:/posts Nuggetty Goodness 2025-07-10T09:17:53Z

tag:nugget.posthaven.com,2013:Post/2205505 2025-06-22T05:39:55Z 2025-07-10T09:17:53Z FrameQuery Figma plugin: Like CSS container queries in your Figma components and frames

I currently lead an enterprise digital Product Design team, and I'm also design co-lead (together with my dev co-lead) for our Design System. We already have multiple UI (code) components that use container queries. This is gonna be soooooooooo helpful on the design model side, because with this plugin, our designers don't have to remember when to manually swap layouts on our models.

First iteration took about 10-12 hours total, and about 100~ credits on my personal Pro account. Fixing the bugs took another 200 or so credits. Learned quite a bit of stuff along the way though. Adding support for components imported into libraries (without which the plugin is basically pointless) took another 300~ credits.

Still need to get it cleaned up, it's a mess, but at least the core bugs are fixed.

This is a working copy, and here's how to use it, if you're curious. :)

Load & use FrameQuery Figma Plugin

  1. In Figma desktop app ONLY, open a Figma file with components that you want frame queries on.
  2. Right-click on empty space in Figma canvas: Plugins > Development > Import Plugin from Manifest.
  3. In FrameQuery 1.0.30 > new-plugin > select manifest.json.
  4. Click on "Nugget's Frame Queries" and the plugin should load.
  5. In the component that you want to have frame queries on, add a new Property named FQ-size. It needs to be exactly this string, including capitalisation. You can name the variants anything you like, for a given value of "anything". Spaces are supported, but some other characters might not be.
  6. FrameQuery should dynamically pull your variants from anything you set in FQ-size.
  7. With the component selected, turn FrameQuery on.
  8. Set your breakpoints. A max is needed for your biggest breakpoint, just make it something silly like 9999px.
  9. Pop a component instance onto the canvas, stick it in a frame, and resize the frame. The current version only cares about width, but I might enhance with height later.

FrameQuery appends 🤖 to component names that have FQ enabled, and prepends 🤖 to frames that contain component instances with FQ enabled, so we can keep track of 'em without messing up how our prototypes look in demos.
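
If you're wondering what's going on under the hood, here's a rough TypeScript sketch of the core idea - not the actual plugin source. It matches the wrapping frame's width against the breakpoints you set, then switches the instance's FQ-size variant to whichever range fits. The breakpoint values and variant names below are made up for illustration; the real plugin reads them from its UI.

  type Breakpoint = { variant: string; min: number; max: number };

  // Illustrative breakpoints only - in the plugin these come from the UI you set up in step 8.
  const breakpoints: Breakpoint[] = [
    { variant: "small", min: 0, max: 479 },
    { variant: "medium", min: 480, max: 959 },
    { variant: "large", min: 960, max: 9999 }, // "something silly" as the biggest max
  ];

  // Assumes the Figma plugin typings (FrameNode, InstanceNode, etc.) are available.
  function applyFrameQuery(frame: FrameNode): void {
    const match = breakpoints.find((bp) => frame.width >= bp.min && frame.width <= bp.max);
    if (!match) return;
    for (const child of frame.children) {
      if (child.type === "INSTANCE") {
        // Swap the variant by setting the FQ-size component property on the instance.
        child.setProperties({ "FQ-size": match.variant });
      }
    }
  }

The real plugin also has to watch for frame resizes and deal with nesting, but the width-to-variant mapping above is the heart of it.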

FrameQuery also works with components imported from libraries (1.0.30)

  1. Follow steps 1-8 from above in your library file, and publish the library.
  2. Close the library file (you don't need to have it open).
  3. In your target file, load FQ.
  4. Pop in the component instance from your library file, just like you normally would. This is the component with 🤖 appended to its name.
  5. Pop the component instance in a frame.
  6. 🤖 is prepended to the frame name, and the frame is now responsive.







tag:nugget.posthaven.com,2013:Post/2204033 2025-06-16T11:38:27Z 2025-07-10T09:17:33Z BDO Barter Planner

Proliferate little web apps, proliferate! Doesn't make up for the crap that's polluting the interwebs in terms of content, but I guess at least I can have my own little web apps now.

I got sick of writing the same stuff in Notepad over and over, and realised that, "hey, now I can get an LLM (via Windsurf) to write this really simple thing for me!"

Behold, the Black Desert Online (BDO) Barter Planner!

Bartering is basically a trading (mini) game. You sail around trading stuff (...bartering it). BDO has a really really big map, and it's almost all non-instanced, so you can (literally) sail around for hours if (a) you want to for some reason, or (b) you are bartering or hunting sea monsters.

In the "Item" column, I have the Item (survival kit) that I need to bring to a particular Location (Arakil island) to barter. It's grade "g" for green, and what I'll get in return for the barter is "box". The number of the item I need to bring is in the Quantity column.

This is a far from optimised setup. I don't even try to optimise distance and time, except to my own lackadaisical playstyle of "are those things in the same general vicinity". It doesn't have Margoria or Valencia nodes because I've <.<; memorised all of those. Plus the "Crow Coin" option in the Barter UI in-game renders it unnecessary to track those nodes for trading. At least for me.

Features

  • Easily track the trade goods you need for your non-Margoria barters, grouped by proximity.
  • Search and filter by location names and codes.
  • View location codes (arbitrarily assigned by me) by clicking on "Location code" column header.
  • Clear rows when you've completed the barter.
  • Sort rows to the top as you fill them in, so your to-do barters are always visible.
  • Data is saved in LocalStorage, so you can open/close the file without worrying. Data is removed when you clear it. (A small sketch of how this works follows this list.)
  • Runs purely locally on your machine.
  • No installation needed. Just unzip the file and open it in a web browser.
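
As promised above, here's a minimal TypeScript-flavoured sketch of the LocalStorage part - not the planner's actual code. The row shape and storage key are made-up names for illustration.

  type BarterRow = { locationCode: string; item: string; grade: string; receive: string; quantity: number };

  const STORAGE_KEY = "bdo-barter-planner"; // hypothetical key name

  function saveRows(rows: BarterRow[]): void {
    // Persist the table so closing and reopening the file loses nothing.
    localStorage.setItem(STORAGE_KEY, JSON.stringify(rows));
  }

  function loadRows(): BarterRow[] {
    const raw = localStorage.getItem(STORAGE_KEY);
    return raw ? (JSON.parse(raw) as BarterRow[]) : [];
  }

  function clearRow(rows: BarterRow[], index: number): BarterRow[] {
    // Clearing a completed barter removes its data, as noted in the feature list.
    const next = rows.filter((_, i) => i !== index);
    saveRows(next);
    return next;
  }

Calling saveRows() after every edit is plenty for a single-page, single-user tool like this.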

Download



tag:nugget.posthaven.com,2013:Post/2202368 2025-06-08T02:51:00Z 2025-06-17T02:17:32Z Droplet aka ChatGPT (via Windsurf) wrote me a knock-off Airdrop/Snapdrop! :D

First iteration / MVP

Windsurf told me how to install Python, and wrote the base HTML and JS, plus the PY file needed to run the Droplet server locally. My original idea was to use the web browser's localstorage, but that didn't work out, not least because the amount of data I could store that way is puny. The first iteration was very ugly and unfriendly, as the text/instructions were written in a way that made sense only to me.

After testing, with my partner helping to upload files (it worked, yay MVP), it was refinement time.

Later iterations

  • Add determinate loading bar during upload (a short sketch of the idea appears below).
  • Make it prettier.
    Hm! I've heard about classless styling/boilerplate HTML frameworks, maybe I can use one of those!
    I ended up using the very lovely Water.css
  • Add understandable-to-humans instructions.
  • Support folder uploads.
  • Delete uploaded files.
    From the web app, instead of me deleting directly from Droplet's "upload" folder.
  • Display human-readable network device name of server.
  • Improved instructions (round 2).
  • Add QR code for easier mobile device access.
This was great fun, and I did manage to pick up a little bit about server side code, and improve my (very poor) JS knowledge a little to boot. And of course, now I have a Snapdrop replacement. ;)
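
For the curious, here's a hedged sketch of how a determinate upload bar can work in the browser. It isn't Droplet's actual code - the /upload endpoint and the element ID are assumptions for illustration. XMLHttpRequest is used because its upload object exposes progress events.

  function uploadWithProgress(file: File): void {
    const bar = document.querySelector<HTMLProgressElement>("#upload-progress"); // hypothetical element
    const form = new FormData();
    form.append("file", file);

    const xhr = new XMLHttpRequest();
    xhr.upload.onprogress = (e) => {
      // lengthComputable means the total size is known, which is what
      // makes the bar determinate instead of an endless spinner.
      if (e.lengthComputable && bar) {
        bar.max = e.total;
        bar.value = e.loaded;
      }
    };
    xhr.open("POST", "/upload"); // assumed endpoint on the local Droplet server
    xhr.send(form);
  }

On the server side, this assumes an endpoint that accepts multipart form uploads, which is the job of the Python file.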

Get a Droplet of your very own. :D


tag:nugget.posthaven.com,2013:Post/2039456 2023-10-23T07:56:44Z 2025-06-13T09:36:19Z How to get an accessible PDF out of Google Docs (with free tools)

Today, I learned that Google Docs doesn't save accessible PDFs, even if you conscientiously wrote the doc accessibly. I.e. With the correct heading structure, lists that are actual lists, tables that are tables, figures, captions, alt text, oh my!

Instead, when exporting to PDF, Google Docs strips all accessibility-related information, resulting in an untagged PDF.

This was rather annoying to me, since I absolutely needed this particular document (a VPAT) to be accessible while in PDF form.

Poking around the interwebs, I came to the conclusion that most PDF accessibility remediation tools are one of the following:

  • Paid and expensive
    Adobe Acrobat Pro, I'm looking at you. To be fair, it isn't just Adobe that charges rather a lot.
  • Free, but don't work at all, or don't work very well
    Pave-PDF was an example that didn't work for me at all... even when it finally loaded my document.
  • Free, but insert watermarks, and possibly don't work
    PDFix allowed me to tag my PDF, but I couldn't quite trust that it was working. Especially since PDFix finds "no bugs" with a PDF that... has no tagged content. To be clear, a PDF with no tagged content is not accessible. Plus, it inserts watermarks.
    We'll get back to PDFix in a moment though - it does come in handy.
  • Free, very possibly good, but Windows only
    My work machine is a Mac.

Enough with the complaining - tell me how to get that accessible PDF!

It's really simple, but I didn't find anyone else laying out the exact steps, so here they are. Every article or answer I found assumed access to specific paid tools, which I don't have (MS Word, Acrobat Pro).

  1. Ensure your base Google Doc has been authored accessibly.
    If it isn't, make it so. I.e. use the correct heading structure, add captions to your images, etc. This is 95% of the work, "pre-done", almost. And if you need to fix stuff, it's easiest to fix it in your base Google Doc, rather than attempt it with any free tools.
  2. Export your base Google Doc as a MS Word .doc. Yup.
    Because interestingly enough, when you export a Google Doc as an MS Word file, it preserves all of that tasty tasty accessibility information that you've included.
  3. If you have access to MS Word, open the file, and THEN export it as a PDF.
    According to the interwebs, this should give you a nice, clean, accessible PDF.
  4. If you're like me, and don't have access to MS Word...
    Go to ilovepdf.com - it's free.
  5. On the ilovepdf.com site, choose the "Word to PDF" option, and select your exported MS Word file.
  6. Wait for your file to convert, then download it.
  7. Congratulations! You now have a nice, accessible PDF!

    BUT WAIT... How can you be sure it worked?

  8. On a Mac, right-click your nice (and hopefully) accessible PDF, and "Get info".
  9. Look for "Tagged PDF" - if it says "Yes", it worked!
    (Note that depending on your Mac OS version, "Get info" may no longer return this data.)
    That's okay, you can check another way, that also lets you look at the tags, to make sure they're all legit, which is...
  10. Download and install PDFix.
    You're not going to use this to fix your PDF (because that'll make watermarks, among other issues), you're going to use it to check your PDF.
  11. Open your file in PDFix, and click on the "Tag" icon.
    This pretty much works exactly the same way as Adobe Acrobat Pro, except that it's free. ;)
  12. Check out the tags in your (hopefully) lovely, accessible PDF. :D
    Woohoo!



tag:nugget.posthaven.com,2013:Post/2002749 2023-07-22T03:17:15Z 2025-06-26T02:30:06Z LLMs: The best software development tutor ever - with big caveats

I've recently started learning Python for fun, and I've manually copy-typed my way to my very first Streamlit app - CatGPT Nekomancer!

In the process, I've discovered some fascinating things about Large Language Models (LLMs) like ChatGPT, and how they fit into learning a new programming language.

I'm particularly tickled by how I (mostly) implemented CatGPT Nekomancer by blindly following instructions, and then used it to understand what I'd done. There's just something magical about making something, and then having it teach you how you made it.

LLMs are great at explaining what a piece of code does

This is because they're functioning purely as "translators". The translation task plays to the strengths of LLMs - statistical pattern matching. Good judgement is not needed, because we're not concerned with "how" or "should".

Here's an example of CatGPT explaining its source code, in response to the prompt "explain what this code does", followed by the code.

This explanation was a little too high-level for me. I wanted to really understand what each line of code was doing. This was easily fixed with a different prompt: "explain this code line by line to a novice programmer". This gave me exactly the level of detail I needed.

I've worked with some pretty great software developers from all over the world who aren't native English speakers. My partner and I are both bilingual, and we'd previously experimented a little with using LLMs as translators for recipes, where we found that if we were able to avoid the LLM's tendency to interpolate (aka "hallucinate" aka "lie"), they do much better at translating than Google Translate.

So when my partner asked, "What if we get it to explain in Croatian? This would have been HUGE for me when I was learning," it was a no-brainer to give it a try with this prompt: "explain this code line by line to a novice programmer, in Croatian." For Croatian, at least, my partner verified that CatGPT's translation and explanation was "brilliant".


I can also ask the LLM to explain specific parts of the code that I don't understand, and learn new concepts that way. For example, I wasn't familiar with the concept of f-strings in Python, which I encountered when working on a different Python experiment. Thanks to CatGPT, I was very quickly able to understand that f-strings are strings that can hold expressions - nifty! I particularly love how the LLM fits so seamlessly into my personal learning "flow". Instead of having to go off to the greater interwebs to trawl through answers about f-strings in Python to figure it out from there, I have my own personal tutor.

LLMs increase the value that good software developers bring to the table

It's generally agreed - at least among software development managers and similar roles - that a good developer can pick up a new language pretty quickly and competently. That's because the core of what makes a good developer isn't the knowledge of a particular language. Rather, it's their grasp of transferrable concepts, frameworks, and understanding of best practices as principles. It's not about rote memorisation - it's about good judgement powered by understanding and experience.

Before LLMs exploded on the scene, a good developer could have everything I listed above, but when picking up a new language, or working with one they're rusty at, there'd still be a lag due to needing to learn the basics of the language (syntax, etc). With LLMs that lag is much smaller, allowing the developer to bring their strengths to bear much faster.

Caveat 1: LLMs are bad at advising on how a feature or function should be implemented

LLMs are based on statistical pattern matching, which makes them great at translation. It's also what makes them bad at anything that requires judgement calls based on a larger and often ambiguous context. They're not always wrong about "shoulds", they're just right far less often than the average human developer.

I believe that this is also what makes LLMs very weak at software development stuff that's presentational, or isn't primarily about logic. HTML, CSS, and web accessibility all fall into the bucket of not being about logic, as well as operating in a large and ambiguous context. It also probably doesn't help that LLMs have ingested the interwebs, and even today, there's probably loads more sites styling button text with <span> tags than sites using the correct approach. It's not like the LLM can tell which approach is better. After all, it's not thinking - it's pattern matching based on statistics.

Caveat 2: LLMs can't coach or mentor

Real "personalisation" is needed for coaching and mentoring. Both of these require human judgement and experience about the subject matter, as well as the individual receiving the coaching or mentoring. They also require (arguably to a lesser extent) a wish on the part of the coach or mentor for the person they're working with to learn and succeed. The simulated thing that we've come to call "personalisation" (e.g. a script grabbing your name from a database) does not and cannot work in this context.

The key to using LLMs effectively when learning software development: Know what you need

If you need explanations on what a piece of code does, LLMs are a great and reliable help. Even more so if you're not a native English speaker - LLMs can function as your own personal translator for both language and code.

If you need good advice on what you should implement, then LLMs aren't going to help much. Quite the opposite. Since they lack judgement (indeed, they do NOT judge), what they come up with is likely to be misguided at best.

It all comes down to the age-old "common sense" wisdom of using the right tools for the job. :)
tag:nugget.posthaven.com,2013:Post/1972273 2023-05-04T07:45:26Z 2025-06-16T14:31:08Z Peoplecraft

I'm proud to say that today, I came up with a new term for what I usually call "pplshit"! :D

Now, I call it "pplshit" because it doesn't come naturally to me, and it's not something I love doing. I've gotten decent at it over the years, within certain boundaries - enough so that I can provide some degree of training/coaching in that area.

Which has led me to the realisation that when I talk to someone who likes doing that stuff, or I'm trying to train or coach someone to be better at it, then the name "pplshit" is probably not the most inspiring one.

And so I will henceforth call "pplshit"... "peoplecraft"!

Peoplecraft
The art of working within organisational and personal environments and dynamics to nudge and bind disparate teams and stakeholders into effectively collaborating on shared goals.

tag:nugget.posthaven.com,2013:Post/1865205 2022-08-07T03:14:23Z 2025-04-24T06:06:50Z Snow on the sahara

"Ma'am, I think we have a problem. This is a still from some footage captured by Midge Ourney. You know that Nat Geo photographer who's on shooting on location in Jebil park for the next three weeks."

"Is this some kind of joke?"

"No Ma'am. We've got multiple reliable corroborating witnesses. Quite a few of them are park rangers."

"But that's impossible! Even with climate change. It's got to be a hoax."

"Yes, that's what I thought too. But last night, one of our best people sent me this. They spotted this woman in Toual el-Hadhali and managed to snap a photo. Don't worry, they weren't spotted

"And you're sure this isn't a coincidence? Maybe some kids were having one of those dress-up party things, what do they call it, co-playing?"

"Afraid not."

"All right, thank you. Looks like we have a situation here. Just when things were finally starting to calm down too."



tag:nugget.posthaven.com,2013:Post/1855394 2022-07-15T08:25:28Z 2025-06-16T14:34:06Z Vegan cheese foam

But Nugget, aren't you a carnivore who loves cream?

Yes, I is! The nugget wasn't setting out to create a vegan cheese foam, it just sort of happened based on other requirements.

The foam is really stable, pretty fuss-free to put together, AND it tastes like cheese foam! All the cheese foam recipes I found either required planning ahead (softened cream cheese), or wouldn't taste like cheese at all. I like coconut milk, and I like maple syrup. I am completely unconvinced that the combination of the two tastes like any sort of cheese. :P

Requirements

  • Must not require planning (pft, wait for cream cheese to soften, pft).
  • Should ideally use ingredients with a long shelf life.
  • Must be brainlessly easy to assemble. Also fast.

Ingredients

  • 50 ml soy milk (I recommend Bonsoy, or Vitasoy manufactured in Taiwan)
  • 2 tsp white sugar (don't use a syrup, white sugar gives this its structure)
  • 1/4 tsp salt (or to taste)
  • 1/2 tsp nutritional yeast <-- this is what makes it taste like cheese

Make it!

  1. Put all ingredients in a microwave safe container, tall enough to whip the stuff in (you want a cheese foam after all).
  2. Stir until combined.
  3. Warm in microwave in 15s intervals (you want it warm, or hot, not boiling).
  4. Whip / froth with milk frother (I recommend Aerolatte) about 20-30s, you should have a stable foam.
  5. Pour on top of drink!

Notes on ingredients

  • Bonsoy, and the Vitasoy made in Taiwan do not contain oil. Soy milk w/o oil is what I grew up with, and I intensely dislike the mouthfeel of plant "milks" that have added oil.
  • You can probably substitute another plant milk for soy milk, but ideally choose one w/o a strong flavour.
  • If possible, choose a plant milk w/o the added oil (this could just be prejudice from my childhood imprinting speaking, haha).
  • Nutritional yeast can be found at most health food stores. It makes things taste like cheese w/o cheese being added. It's shelf stable, and doesn't need to be refrigerated.
  • Don't combine cow milk AND soy milk foam. The cow milk in the drink will kill the foam structure of the soy milk foam really fast.
  • You can substitute cold heavy cream, whipped to soft peaks, for the plant milk.
    When using heavy cream, don't heat it in the microwave, or it won't whip happily. Just stir the ingredients in, then whip. You will need to carefully spoon and ladle the foam onto your drink, and it may not stay stable for long if your drink is hot. Use dairy in your drink too (see previous point about cow milk killing soy milk foam).
  • Aerolatte milk frothers are awesome. I've had mine for almost 10 years now. I bought another one from a diff brand to use in the office a few years ago, and it's terrible. It doesn't make nice froths! 
tag:nugget.posthaven.com,2013:Post/1852417 2022-07-08T09:01:25Z 2024-07-11T13:58:07Z It's a bit depressing that the co-bot-art that's mostly bot is better than I ever was...

...but I was never that good anyway. Oh wells!

Final composite

Manual retouching and merging by the nugget.


Midjourney

Prompt: dark skinned magpie woman wearing intricate silver jewelry, trending on artstation, uplight


Real-ESRGAN Inference Demo

This is actually GFPGAN - for some reason the Colab page seems to be titled differently. GFPGAN is for face restoration.



tag:nugget.posthaven.com,2013:Post/1851404 2022-07-05T14:46:40Z 2024-07-11T13:59:22Z Midjourney is amazeballs. Prompt was "crow girl, trending on artstation".

Midjourney + light retouching

Midjourney isn't great at noses, so that was where the retouching was needed. Very simple job of masking the original nose with the retouched nose.

Midjourney original

Retouched version

Derived from running the original through an image restoration generative adversarial network - Real-ESRGAN Inference Demo. (This is actually GFPGAN - for some reason the Colab page seems to be titled differently. GFPGAN is for face restoration.)

 ^If you want to use this, you need to log into your Google account, and make a copy of it. Not 100% sure you have to make a copy, but you do need to be logged into your Google account.

Print quality is just another AI away

Cupscale to the rescue! Haven't added the output here, for obvious reasons. But after running the retouched version through Cupscale, I ended up with a 60MB PNG file that's super sharp even at 100%. No artifacts.


tag:nugget.posthaven.com,2013:Post/1819784 2022-04-17T08:29:10Z 2024-07-11T14:00:08Z Our very first (and only) home-grown blueberry.

tag:nugget.posthaven.com,2013:Post/1818998 2022-04-15T07:10:21Z 2025-07-08T15:02:35Z Bear and Nugget

I drew these years ago as part of our submission to Immigration for the bear's partner sponsorship visa. In Australia, Immigration requires you to write essays about each other, and "your life together". I figured essays must get kinda boring, so I added cartoons too.

...and then after a loooooooooooooooong pause (laziness, the sponsored visa was approved ages ago), here's a new one! I really like pruning and weeding. <.<; The observant will notice that in 6 years, we both grew 2 extra fingers...

tag:nugget.posthaven.com,2013:Post/1815478 2022-04-06T07:18:53Z 2025-05-08T07:22:19Z What's something that effective compliments and CSS have in common?

Specificity is the most !important part of the declaration.

<scamper scamper>


tag:nugget.posthaven.com,2013:Post/1791378 2022-02-04T10:10:09Z 2024-07-11T14:02:40Z Everything is better with eyes.

Even if it does kinda remind me of Marvel comics' Inferno arc from ages ago.

I would send this to Bellroy, but not sure it's the kind of "customer action shot" they'd like.

tag:nugget.posthaven.com,2013:Post/1779739 2022-01-05T23:29:13Z 2024-07-11T14:03:22Z How to make squeaky clean SVGs for use in applications

Procedure

  1. Open the SVG in Adobe Illustrator.
  2. Where possible, unite shapes with the same fill into compound paths.
  3. Where possible, outline strokes. This will make cleanup easier when we dig into the code.
  4. Remove any additional paths, bounding boxes, etc, that are not a visible part of the SVG.
  5. Scale the SVG to intended rendering size (e.g. if it’s a 24px icon, scale the SVG to be 24px).
    If you want the SVG to be in a 24px "bounding box", then make sure the canvas size is 24px, and your svg is your desired size within the canvas. The canvas functions as your "bounding box".
  6. Clean up sub-pixel alignments where feasible.
  7. Save SVG.

    SVGX
  8. Open the SVG that was just saved by Adobe Illustrator in SVGX.
  9. Select “Optimized” tab.
  10. Hit [Copy].

    VScode
  11. Open SVG that was just saved by Adobe Illustrator in Visual Studio Code (VScode).
    At this point, it’s possible that no image displays in SVGX’s preview, even if there’s code. That’s fine. Copy the code from the “Optimized” tab in SVGX.
  12. Paste the SVG code from SVGX below Adobe’s original SVG code.
    We’ll be using the code from SVGX, with a few tweaks based on the Adobe original.
  13. (Optional) Turn on word wrap.

    VScode - SVGX-pasted code
  14. Ensure the svg opening tag contains only xmlns and viewBox attributes. (This cleanup is also sketched as a small script after the procedure.)
  15. Add title tag and string directly below (outside of) the opening svg tag.
    Do describe what the image is. Don't describe what the image can be used to represent.
    <title>Speedometer</title> - Do
    <title>Dashboard</title> - Don’t
    What the image represents will be handled separately with aria-label or alt-text, depending on the implementation.

    Single-colour SVGs meant to be used as icons
  16. Do NOT declare the colour of any path(s) using the fill attribute.
    E.g. DON’T <path fill="#000" d="M22 11.8h-3.6V13h3l5.5 8.3h1.4z"/>
    This makes the SVGs harder to colour dynamically.

    Single-colour SVGs meant to be used as images
  17. Declare the colour of any path(s) using the fill attribute.
    E.g. <path fill="#000" d="M22 11.8h-3.6V13h3l5.5 8.3h1.4z"/>
    This is for SVGs used as images, where there’s no intent to dynamically change the colour.

    Multi-colour SVGs
  18. Declare the correct corresponding colour of each path using the fill attribute.
    SVGX-pasted code will often strip the first colour class, and fail to apply it to the first path associated with it. We’ll need to eyeball the path coordinates, and make an educated guess to match the colour in the class to the correct path.
    E.g. <path fill="#41B59D" d="M22 11.8h-3.6V13h3l5.5 8.3h1.4z"/>
  19. Remove style tags, and anything in them.
  20. Remove classes, and their values.
    We are replacing style and class with fill and the colour in the corresponding class.
  21. Delete the original code from Adobe Illustrator.
    We only want the code we pasted from SVGX, and then cleaned up.
  22. Save the SVG in VScode.
  23. Open the VScode edited SVG in SVGX.
  24. If the SVG looks as desired / expected - congrats! It’s clean! We can now add it to Iconset.
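
If you'd rather script some of the cleanup than hand-edit, here's a hedged TypeScript sketch of roughly steps 14-16 and 19-20 for a single-colour icon, using the browser's DOM APIs. It isn't part of the original workflow - just an illustration of the same edits - and the function name and title value are made up.

  function cleanIconSvg(svgSource: string, title: string): string {
    const doc = new DOMParser().parseFromString(svgSource, "image/svg+xml");
    const svg = doc.documentElement;

    // Step 14: keep only xmlns and viewBox on the opening svg tag.
    for (const attr of Array.from(svg.attributes)) {
      if (attr.name !== "xmlns" && attr.name !== "viewBox") svg.removeAttribute(attr.name);
    }

    // Step 15: add a title describing what the image is.
    const titleEl = doc.createElementNS("http://www.w3.org/2000/svg", "title");
    titleEl.textContent = title;
    svg.insertBefore(titleEl, svg.firstChild);

    // Steps 16, 19, 20: for single-colour icons, drop style tags, classes, and fills
    // so the icon can be recoloured dynamically.
    doc.querySelectorAll("style").forEach((el) => el.remove());
    doc.querySelectorAll("[class]").forEach((el) => el.removeAttribute("class"));
    doc.querySelectorAll("[fill]").forEach((el) => el.removeAttribute("fill"));

    return new XMLSerializer().serializeToString(doc);
  }

Checking the result in SVGX (step 23) still applies - the script just saves some of the repetitive editing.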

Troubleshooting

When I open my VScode edited SVG in SVGX, some paths don’t have the right colours.
It can be tricky to assign the right fills to paths, especially if there are a few of them. Try outlining strokes, and/or combining shapes with the same colours into compound paths in Adobe Illustrator before working with SVGX and VScode.

SVGX is showing a blank preview when I open my VScode edited SVG.
Test the paths by declaring a fill, like so.
<path fill="#41B59D" d="M22 11.8h-3.6V13h3l5.5 8.3h1.4z"/>
If you’re making an SVG that needs to be dynamically recoloured, remember to remove the fills after the preview looks good.

SVGX looks fine, but when I import into Iconset, it just shows a black square/circle/shape.
There may be an additional invisible bounding box / path from the original SVG, which is now being filled, and resulting in a black square/circle/shape. Either locate the invisible path in the SVG code and delete it (this can be hard), or delete the invisible path in Adobe Illustrator, and try again.



tag:nugget.posthaven.com,2013:Post/1761527 2021-11-20T03:51:39Z 2024-07-11T14:04:05Z Every Guild Wars I build I've ever written (all 350 of them)!

...on the off chance that someone, somewhere, somewhen will find them useful. Might even be future-nugget, though that's unlikely.

These builds are literally everything I ever found interesting enough to save, so there are no guarantees that any of them are good.

However, I've linked to my guides for the good ones in the spreadsheet.

350 guild wars I nuggetbuilds

tinyurl.com/nugguildwars1


tag:nugget.posthaven.com,2013:Post/1731310 2021-09-03T03:29:39Z 2024-11-12T03:49:07Z Strawberry goop shortbread cookie-tart aka tapioca flour is magical!

We've been thickening savoury sauces with tapioca starch for a while now. We like it better than cornstarch, because it doesn't muddy the flavour of things the way cornstarch does.

At some point, we decided to thicken a pie filling with tapioca starch. <.< There's no going back. Tapioca starch is magical in fruit filling type goops. It makes everything so wonderfully blobby and clear and pretty, without being sticky and tacky. And it even re-bakes nicely, if you want to stuff it in a puff pastry and bake it.

Strawberry yuzu goop

Makes about 500g of goop. Don't worry about measuring exactly. :P I don't really measure stuff, and this is all conjecture anyway haha. If you use too much tapioca starch, you'll just end up with a more solid and bouncy goop.

Ingredients

  • 500g strawberries, chopped
  • 30g~ of honey citron tea - brand doesn't matter, they're all nice (yuzu is wonderful with strawberries)
  • 30g~ tapioca starch
  • 30ml water
  • white sugar to taste (depends on how sweet your berries are)
  • ground cardamom to taste

Steps to reproduce

  1. Chop the strawberries into fingernail-sized bits. It's okay if your fingernails are giant or midget. All fingernail sizes are welcome.
  2. Dump chopped strawberries, sugar, honey citron tea, and ground cardamom in a small pot of your choice (needs to be big enough to hold all your strawberries, obviously).
  3. In a separate bowl, add water to the tapioca starch and swirl it around till it forms a slurry. Don't skip this step! If you just dump the tapioca flour into the pot with the rest, you'll end up with tapioca lumps.
  4. Add tapioca flour slurry to the rest of the stuff in the pot.
  5. Cook at low to medium heat for about 10 minutes, stirring pretty much all the time. Yeah, it sucks. :( I hate stirring.
    The tapioca slurry will look white at first, but once it's done, it'll turn clear.
  6. When the strawberries are squishy enough for your taste, and everything is goopy and clear, it's done.

The goop is great both hot and cold, and it reheats and bakes well. So once you have the goop, go on and GOOP ALL THE THINGS!

tag:nugget.posthaven.com,2013:Post/1711165 2021-07-07T06:10:53Z 2021-07-07T06:11:50Z Properly-built design system components are awesome.

Unfortunately, most UI-kits are not awesome, and so I end up having to roll my own - like this text input component.

Glad to be back to using Adobe XD after just about 2 years of Sketch-Hell.

tag:nugget.posthaven.com,2013:Post/1701017 2021-06-09T07:48:18Z 2021-06-09T07:48:19Z So for months now, I've been wondering...

...why does my microwave have an icon of a farting cat with a clock for a face?

Is it because cats lick their bowls really clean? Oh well, I like cats.


Today, it dawns on me what the "proper" interpretation of the icon is.


tag:nugget.posthaven.com,2013:Post/1609334 2020-10-28T10:18:38Z 2020-10-28T22:15:58Z Free COVID-19 customer logbook for small businesses

I made a very very very basic Airtable template for a COVID-19 customer logbook for small businesses.

Like many Victorians, I watch our Victorian Premier's (Dan Andrews) press conferences nearly every day.

At one of the press conferences a couple of days ago, one of the reporters kept talking about "QR codes" for small businesses, as if QR codes are magical things that will somehow record everything when a customer scans em.

After that press conference, I was complaining to my partner, "Does the reporter even know what a QR code does? If it doesn't redirect to a database, with a form, etc., what's the point? How will a small shop set this up?"

Then I realised, "Hey, I happen to know this no-code tool..." (Airtable)... and this kinda happened.

The bulk of the work was writing the instructions in a way that normal people can understand and follow.

https://airtable.com/universe/expzohzqb7PE07lhl/covid-19-logbook



tag:nugget.posthaven.com,2013:Post/1604893 2020-10-16T06:16:41Z 2025-01-23T03:02:49Z "Click the link we sent in your email to log-in" turns every log-in into "reset your password". :| ]]> tag:nugget.posthaven.com,2013:Post/1603892 2020-10-13T05:25:34Z 2024-02-04T15:31:47Z That old chestnut again: Should designers code? No... and yes. ;)

Some ponderings as I learn the wonders of CSS-grid, fluid typography, and all the shiny new toys kids these days have.

Gosh, CSS has gotten so much nicer since the days when we had to haul water to the top of the hill both ways barefoot in the snow.

No, designers shouldn't code
I don't think designers need to be able to write production quality code. It stands to reason that I have a vested interest in this "no", as I haven't committed production code in over a decade. Plus, production quality code, especially at an enterprise-level, is a completely different beast from building a small static website. When it comes to enterprise code, scalability, maintainability, extensibility are all very important - and I prefer to leave them to the experts (my developers).

Yes, designers should code
Ideally, designers should have some familiarity with, and understanding of the basic "materials" used to build the digital products they design. Additionally, the "materials" will vary, even across digital products. Just because I can write js and css certainly does not mean I know the "materials" for native Windows, Mac, Android, or Linux.

With that as the caveat - being able to code just enough to know my materials is a very big plus. I did a basic Vue course fairly recently. Nothing fancy, just a single page app. However, what I learnt from that course gave me a much better idea of how Vue (and React, and Angular) work at a very high level, and how that can translate into implementation. It also made collaborating with front-end web developers easier, as we had some degree of shared knowledge.

I've also been experimenting with the "new" (not so new, I know) CSS toys all the kids have these days. What's really cool about this is that unlike the Vue course, what I'm learning about CSS is changing the way I think and design - and think about design. These learnings change the bounds of what I know to be possible.

For example, I have been reading about fluid typography on the web for a couple of years now - and before I started poking around the code, it's been a very abstract sort of interest. E.g. "Nice and interesting abstract concept, I should try to design for that when I have the opportunity". Now that I've poked around the code, and gotten a basic understanding of how things work, this has changed to a much more real and practical, "ZOMG now that I actually know how that bit of code holds together, I can actually set a typographic scale that way, and see it work. And I can see how I could make it work in so many places. Waoohh!"
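
For anyone who hasn't poked at it yet, here's a small TypeScript sketch of the usual fluid-type maths - interpolating between a minimum and maximum size across a viewport range - plus the CSS clamp() string that same maths produces. The pixel values are placeholders, not the ones from my pen.

  // Linear interpolation of font size between two viewport widths.
  function fluidFontSize(vwPx: number, minVw = 320, maxVw = 1280, minPx = 16, maxPx = 24): number {
    const t = Math.min(Math.max((vwPx - minVw) / (maxVw - minVw), 0), 1);
    return minPx + (maxPx - minPx) * t;
  }

  // The same idea expressed as a CSS clamp() string you could drop into a custom property.
  function fluidClamp(minVw = 320, maxVw = 1280, minPx = 16, maxPx = 24): string {
    const slope = (maxPx - minPx) / (maxVw - minVw);
    const interceptPx = minPx - slope * minVw;
    return `clamp(${minPx}px, ${interceptPx.toFixed(2)}px + ${(slope * 100).toFixed(3)}vw, ${maxPx}px)`;
  }

With the defaults above, fluidClamp() comes out to clamp(16px, 13.33px + 0.833vw, 24px).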

Here's my supernoob code-pen, which I'm modifying on the fly as I learn more about css-grid and fluid typography.
All the noob inline comments, every noob inline comments!

See the Pen Flying Red Horse - CSS-Grid Experiments by JC (@nuggettyone) on CodePen.

tag:nugget.posthaven.com,2013:Post/1600306 2020-10-04T00:14:57Z 2020-10-04T00:14:57Z Litmus test for this question: Does this company value customer research?

Do the employees who do customer research use qualitative analysis tools that are paid for by the company?

tag:nugget.posthaven.com,2013:Post/1598869 2020-09-30T03:48:58Z 2020-09-30T07:27:33Z Design systems, systems thinking, and the "curse of the gifted"

A friend of mine calls it "the curse of the gifted" -- a tendency to lean on your native ability too much, because you've always been rewarded for doing that and self-discipline would take actual work.

You are a brilliant implementor, more able than me and possibly (I say this after consideration, and in all seriousness) the best one in the Unix tradition since Ken Thompson himself.  As a consequence, you suffer the curse of the gifted programmer -- you lean on your ability so much that you've never learned to value certain kinds of coding self-discipline and design craftsmanship that lesser mortals *must* develop in order to handle the kind of problem complexity you eat for breakfast.

But you make some of your more senior colleagues nervous.  See, we've seen the curse of the gifted before.  Some of us were those kids in college.  We learned the hard way that the bill always comes due -- the scale of the problems always increases to a point where your native talent alone doesn't cut it any more.  The smarter you are, the longer it takes to hit that crunch point -- and the harder the adjustment when you finally do.  And we can see that *you*, poor damn genius that you are, are cruising for a serious bruising.

As Linux grows, there will come a time when your raw talent is not enough.  What happens then will depend on how much discipline about coding and release practices and fastidiousness about clean design you developed *before* you needed it, back when your talent was sufficient to let you get away without.

http://lwn.net/2000/0824/a/esr-sharing.php3

How Linus Torvalds works (as written in this post) is how I cook. ;) I don't measure, I eyeball everything. I do things until they "look right" and then I stop.

I would never ever ever do that fast, loose, play-it-by-eye-and-natural-talent with a design system, because I have learned that it just doesn't scale. In fact, I design... design systems specifically to avoid or mitigate the "curse of the gifted".

At the end of the day, a design system that isn't easy enough for everyone involved in the SDLC to use - so easy that it becomes the default pit of success that teams fall into together - is a failed design system. Or one that is currently failing, at any rate. Failures can be remedied, so there is that.

I see a disturbing tendency to approach design systems (especially for enterprise) with far too much reliance on 'the curse of the gifted'. This is especially evident when every piece of a design system is designed on an ad-hoc basis, with no regard to how it fits into everything else. If the designer's eye is good enough, they can skate by on the curse of the gifted - until they cannot, anymore. And their teams? God help their teams. 

When it comes to design systems - I want the opposite of skating by on talent. I want to help build design systems that enable delivery teams to fall into the pit of success together. The kinds of systems that improve collaboration for everyone, because of a clear, shared understanding of what it is we want to do, and what we have on hand to do it. Design systems where the default is usually the right choice, where guesswork is kept to a minimum... and yet where necessary changes can be made on the fly with the minimum of cost, or drama. And while I'm describing my ideal pony, I want to be able to stay and see such a system grow and evolve over time.

Right now, I'm still looking for that pony. Maybe I'll get lucky. ;)
tag:nugget.posthaven.com,2013:Post/1446280 2019-08-19T04:31:20Z 2022-01-28T03:18:08Z Things to consider when defining default animation timings and easings for user interfaces
I meant to rewrite this nicely a long time ago, never got down to re-writing it. And yet... it seems like it could help people. So here's the original braindump version.




Default timing for animations == 200ms. Default easing == Ease In-Out


This is the simplest way to achieve what we want, in terms of animation.

What we are doing here can be considered "semantic" animation, as it is the animations that "explain" to the user why and how the map is being displayed to them. The equivalent is a book opening, or a page turning.

If possible, make the timings and the animation types customisable, with defaults.


Why 200ms?
Because when it comes to things we interact with mostly visually, 200ms is the amount of time it takes the human brain to register that something has come into view, or changed. At 200ms, UI animation feels almost instant.

Now, we could get pickier, and make it 300-400ms for XS, SM, and 200ms for MD, L, XL, because the distance travelled by the animated objects also influences how fast they feel. However, this lies in the realm of "fancy extras". They are nice, but if we start doing that, then to be consistent, we need to do it EVERYWHERE.

So 200ms. ;) For everything.

Please note that this is a visual thing. If you are writing with a stylus (or you're gaming! ;) ), a lag of 200ms is way way way too slow. When writing with a digital stylus, or drawing with a mouse, the latency has to be as close to 0 as possible. Even 100ms feels laggy when you draw with a stylus or a mouse.


Why Ease In-Out?
Easing makes animations look more natural, by visually mimicking the laws of physics.

We're using Ease In-Out as the default, as it mimics physics, AND is less confusing for most people who aren't professional animators.

But whyyyyy! ;) Ok here's why...

If we wanted to be picky, when something animates INTO view, we should use Ease-Out. Ease-Out is when something moves fast at first, and then slows down. This mimics physics in the real world, where stuff loses momentum as it moves, due to friction. So it starts fast (because as it's animating into our view, it's already moving), and as it runs out of energy, it slows down.

Likewise, when something animates OUT OF view, we should use Ease-In. Ease-In is when something moves slowly at first, and then speeds up. Again, this mimics physics in the real world, where it takes time for stuff to build up momentum, and then it goes faster as it collects enough energy to overcome friction. So it starts slow (because it's charging up, while still being in our view), and then the animation speeds up as the item leaves our view.

^And yes, for stuff that comes INTO view, it's better to use Ease-OUT, for stuff that LEAVES our view, it's better to use Ease-IN. Yes. It's horribly confusing.

Ease In-Out to the rescue! ;) Our :X compromised best of both worlds. It's not really... but it's the least confusing to remember.

Ease In-Out is when something moves slowly at first, gets quicker... then slows back down. In UI terms, this is a nice compromise if we don't wanna be fancy, because as humans, we're lazy about how we perceive things.

When we use Ease In-Out for something that animates INTO view, the human-goldfish mind has most likely already stopped paying attention before the last "slow" in the slow-fast-slow sequence. So to the human-goldfish, most of the time it'll look like an Ease-Out.

When we use Ease-In-Out for something that animates OUT of view, the human-goldfish mind (you see where this is going) takes a while to notice what's going on. ;) So it likely doesn't notice the first "slow" in the slow-fast-slow sequence.*

*Updated on 28 Jan 2022 to change ease in-out to accurately reflect a slow-fast-slow sequence. I got the sequence of what happens with ease in-out wrong originally, but using it still works, because the human-goldfish principle still applies.
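
If you want to see the defaults above wired up in code, here's a small hedged TypeScript sketch using the Web Animations API. The keyframes are just an example slide-in; the point is the 200ms / ease-in-out defaults, kept customisable.

  const DEFAULT_DURATION_MS = 200; // feels near-instant for things we watch rather than drag
  const DEFAULT_EASING = "ease-in-out";

  function slideIntoView(
    el: HTMLElement,
    duration: number = DEFAULT_DURATION_MS,
    easing: string = DEFAULT_EASING
  ): Animation {
    // Customisable timing and easing, with the defaults above.
    return el.animate(
      [
        { transform: "translateY(16px)", opacity: 0 },
        { transform: "translateY(0)", opacity: 1 },
      ],
      { duration, easing, fill: "both" }
    );
  }

If you do want the fancier per-direction behaviour described above, swap in "ease-out" for things entering the view and "ease-in" for things leaving it.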

Some samples, using 200ms and (naughty naughty, ALL ease-in).
Each click as shown is triggering a new 200ms animation to the next keyframe.

tag:nugget.posthaven.com,2013:Post/1408799 2019-05-14T01:39:31Z 2019-05-14T01:39:45Z The whole UX vs UI thing is like a crazy argument about whether the sugar in a cake is *really* part of the cake, or not. ]]> tag:nugget.posthaven.com,2013:Post/1364795 2019-01-19T12:35:59Z 2019-05-13T23:21:15Z Black Desert Online - Enhancing %

Green ship gear

+1 attempt failstacks

  • 0 == 66.67%

+2 attempt failstacks

  • 0 == 44.44%
  • 2 == 53.33%

+3 attempt failstacks

  • 0 == 29.63%
  • 2 == 35.56%
tag:nugget.posthaven.com,2013:Post/1298063 2018-06-29T10:41:49Z 2024-09-19T10:39:21Z Black Desert Online - Amity sequences

These are what I used to get to 1000 amity with each of these NPCs. Not saying it's the best, just what I used. ;)

Best used in conjunction with this list of amity knowledge NPCs. Note that some of the ones I list are traders, and not amity knowledge NPCs.

Also related - knowledge locator.

tag:nugget.posthaven.com,2013:Post/1269154 2018-04-05T12:31:00Z 2024-06-19T03:54:17Z I blame Marvel Heroes 2017 *sniff* for my BDO toons...

tag:nugget.posthaven.com,2013:Post/1203998 2017-11-08T08:05:05Z 2017-11-08T08:05:05Z Figma review in one sentence!

I don't know how people can make tools for interaction designers, and not allow them to do any interactions except clicks!


