• 0 Posts
  • 24 Comments
Joined 1 year ago
Cake day: August 10th, 2023


  • The point of using a cache is to have data in memory and not on disk. From what I can tell, Postgres unlogged tables are still written to (and read from) disk; it is just that the writes are done in an unsafe way.

    The article explains how one would go about benchmarking performance but forgets to actually include performance metrics. Luckily, they link to another write-up that does. Using an unlogged table instead of a regular table reduces write times by about 45% and gives you about 3 times as many transactions per second. That is not nothing, but it is probably not worth the added code complexity compared to writing directly to a persistent table (rough timing sketch below).

    Even the “no persistence” behavior of a cache is not strictly true: an unlogged table is only truncated if Postgres shuts down unexpectedly (from a kill -9 of the process or the VM being killed). If you shut down the process in a controlled manner, the unlogged table is properly persisted and still has its data when Postgres starts back up.
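
    For reference, here is a rough sketch of that timing comparison (assuming the PostgreSQL JDBC driver is on the classpath; the connection URL, table names, and row count are placeholders):

    ```kotlin
    import java.sql.DriverManager

    fun main() {
        // Placeholder connection URL; adjust host, database, and credentials as needed.
        val conn = DriverManager.getConnection("jdbc:postgresql://localhost/test")
        conn.autoCommit = false
        conn.createStatement().use { st ->
            st.execute("CREATE UNLOGGED TABLE IF NOT EXISTS cache_unlogged (k int, v text)")
            st.execute("CREATE TABLE IF NOT EXISTS cache_logged (k int, v text)")
        }
        conn.commit()

        // Insert the same batch into each table and compare wall-clock times.
        for (table in listOf("cache_unlogged", "cache_logged")) {
            val start = System.nanoTime()
            conn.prepareStatement("INSERT INTO $table VALUES (?, ?)").use { ps ->
                repeat(10_000) { i ->
                    ps.setInt(1, i)
                    ps.setString(2, "value")
                    ps.addBatch()
                }
                ps.executeBatch()
            }
            conn.commit()
            println("$table: ${(System.nanoTime() - start) / 1_000_000} ms")
        }
        conn.close()
    }
    ```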


  • I have used Kotlin a bit for a hobby project and it felt like they were 95% done with a 1.0 version. I love the promise of a single code base that can run on the JVM and in the browser, but it is not all there. Until recently, the API was not guaranteed to be stable. Every once in a while, I hit a feature that is JVM-only or does not work right in JavaScript. The JS compiler will “helpfully” remove uncalled public functions unless you explicitly mark them with @JsExport (example at the end of this comment).

    Also, from what I can tell, IntelliJ is the only supported IDE (which makes sense, since they are the language developers). There is an official Eclipse plugin, but the last time I tried it, it did not work and nearly took the entire IDE down with it.

    Having said that, it was very close to complete and I have not worked on that project for a few months, so it could all be perfect now.
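
    For example, the @JsExport issue looks roughly like this (a sketch; on older compiler versions the annotation may also require opting in to ExperimentalJsExport):

    ```kotlin
    // Without @JsExport, the Kotlin/JS compiler's dead-code elimination can strip this
    // public function because no Kotlin code calls it, even if hand-written JavaScript does.
    // (May need @OptIn(ExperimentalJsExport::class) on older Kotlin versions.)
    @JsExport
    fun greet(name: String): String = "Hello, $name"
    ```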



  • As someone who did web development back in the mid-2000s (and not more recently), an HTML-first approach speaks to me. I am still of the belief that your content should be in the HTML and not pulled in via JavaScript.

    The article is a bit self-contradictory. It encourages specifying style and behavior inline rather than using external styles and scripts, but it also discourages using a website build pipeline or dynamically generated HTML. So how are you supposed to maintain a consistent look and feel between pages? Copy and paste?


  • If you are creating an alternative implementation and leaving the old one in place, you are not fixing the problem, you are just creating a new one (and a third, because now you have duplicated logic).

    Either refactor the old function so that it transparently calls the new logic, or delete the old function and replace all existing usages with the new one. It does not need to happen in a single commit: you can check in the new function, tell everyone to use it, and clean up usages of the old one over time. If anyone tries to use the old implementation, call them out in code review.

    If removing or replacing the old implementation is not possible, at least mark it as deprecated so that anyone using it gets a warning.
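
    A minimal sketch of both options (the function names here are hypothetical): the old entry point becomes a thin wrapper over the new logic and is marked deprecated, so every remaining caller gets a warning:

    ```kotlin
    // New implementation: all of the real logic lives here.
    fun calculateTotal(items: List<Int>): Int = items.sum()

    // Old entry point: kept temporarily as a thin wrapper and marked deprecated so that
    // remaining callers get a compiler warning plus an automatic ReplaceWith quick fix.
    @Deprecated("Use calculateTotal instead", ReplaceWith("calculateTotal(items)"))
    fun computeTotal(items: List<Int>): Int = calculateTotal(items)
    ```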







  • I have not done much Go development, but I am working on automating some dependency updates for our Kubernetes operator. The language may be good, but the ecosystem still feels immature.

    Too many key libraries are on version 0.x with an unstable API. Yes, semantic versioning does say that you can have breaking changes in minor (and patch) releases as long as the major version is still 0, but that should be reserved for pre-release libraries, not libraries meant for production use.


  • We tried asking ChatGPT our interview question. After some manual syntax fixes, it performed about as well as a mediocre junior developer, i.e. it wrote multithreaded code without any synchronization (toy example below).

    Don’t misunderstand, it is an amazing technical achievement that it could output (mostly) correct code to solve a problem, but it is nowhere near good enough for me to use. I would have to carefully analyze any generated code for errors, rewrite bits to improve readability (rename variables to match our terminology, add comments, etc.), and who knows what else. I am not sure it would save me much time, and I am sure it would not be as good as my own code. I could see using an AI to generate sophisticated boilerplate code (code that is long, but logically trivial).
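
    The kind of bug I mean, as a toy sketch: incrementing a shared counter from several threads without synchronization silently loses updates, while an AtomicInteger (or a lock) does not:

    ```kotlin
    import java.util.concurrent.atomic.AtomicInteger
    import kotlin.concurrent.thread

    fun main() {
        var unsafe = 0               // plain shared counter, no synchronization
        val safe = AtomicInteger(0)  // synchronized alternative

        val threads = List(4) {
            thread {
                repeat(100_000) {
                    unsafe++                // data race: read-modify-write is not atomic
                    safe.incrementAndGet()  // atomic update
                }
            }
        }
        threads.forEach { it.join() }

        // "unsafe" usually prints something less than 400000; "safe" is always 400000.
        println("unsafe=$unsafe safe=${safe.get()}")
    }
    ```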



  • The immediate use for this that jumps out at me is batch processing: you take n inputs and return n outputs, where output[i] is the result of processing input[i]. You cannot just throw, because you still have to process all of the valid inputs (sketch below).

    This style also works for an actor model: loosely coupled operations that take an input message and emit an output message for the next actor in the chain. If you want to be able to throw an exception or terminate prematurely, you would have to configure an error sink shared by all of the actors, and to get the result of an operation you would have to watch for messages from both the final actor and the error sink.
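
    The batch shape I have in mind, as a sketch (parsing strings to ints is just a stand-in for real per-item work): failures are captured per item instead of thrown, so one bad input does not stop the rest of the batch and result[i] still lines up with input[i]:

    ```kotlin
    // Each outcome is either a value or the exception for that particular input.
    fun processBatch(inputs: List<String>): List<Result<Int>> =
        inputs.map { runCatching { it.trim().toInt() } }

    fun main() {
        val outcomes = processBatch(listOf("1", "two", "3"))
        outcomes.forEachIndexed { i, result ->
            result.fold(
                onSuccess = { println("input[$i] -> $it") },
                onFailure = { println("input[$i] failed: ${it.message}") }
            )
        }
    }
    ```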


  • I had known basic CLI commands (such as cd and ls) for a while, but had not learned much more. A few things have helped me grow my skills:

    • Necessity: Sometimes I need to do something on a VM or container that does not have a graphical interface installed. Some utilities only have a command-line interface and no graphical client. My only option is to Google how to do it. The more I do it, the less I have to Google and the more focused my searches become (instead of searching for “How to do x”, I search for “How to do x in utility”).
    • Learning from others: For many tasks, I follow internal or external guides, which typically use CLI commands. Often I look at how my coworkers accomplish tasks and pay attention to what commands they use. Then, when I have time, I look up any new commands I saw and decide if they will be useful for me too. Lately, I have been doing code reviews that involve shell scripts. Those are especially nice, because I can take my time, going line by line, and understand what each command does.
    • Keep notes: Every time I find a command that I think I will need again, I copy it into a text file (and I have many such text files). It also makes it easier when I need to run a command with slightly different arguments (a different commit ID or something): I can just edit the command in my editor (with search and undo) and paste it into my terminal with all the flags and arguments correct.



  • My favorite YOLO-Driven Development practice (from a former employer) was Customers as QA. We would write code, build the code, and ship it to the customer; the customer would run it, file bugs for whatever broke, and we would have a new build ready the next week.

    It provides many benefits:

    • No need to hire QA engineers.
    • Focuses developer debugging time on features actually used by customers instead of corner cases that no customer is hitting.
    • Developers deliver features faster instead of wasting time writing automated tests.
    • Builds are faster because the “test” stages are no-ops.

    One time a developer was caught writing automated tests (he was not sure of the correctness of his own code, a sign of a poor developer). Our manager took 15 minutes out of his busy day to yell at him about wasting company resources and putting release timelines in jeopardy.


  • I have not read the PEP itself or the PEPs that it claims to simplify, but this feels like a very bad idea that only really benefits Meta and a few other mega-scale server operators. It enables a micro-optimization that is only usable in a niche use case (forking long-running processes with shared memory). Unfortunately, it makes all other Python applications, on average, 2% slower. That is a minor regression, but it hurts everyone and benefits almost no one.