• 0 Posts
  • 35 Comments
Joined 1 year ago
Cake day: July 5th, 2023



  • Assuming C/C++, dare we even ask what this teacher uses instead of switch statements? Or are her switch statements unreadable rat’s nests of extra conditions?

    This is a good life lesson. We’re all idiots about certain things. Your teacher, me, and even you. It’s even possible to be a recognized expert in a field yet still be an idiot about some particular thing in that field.

    Just because some people use a screwdriver as a hammer and risk injuring themselves and damaging their work, that’s not a good reason to insist that no-one should ever use a screwdriver under any circumstances, is it?

    Use break statements when they’re appropriate. Don’t use them when they’re not. Learn the difference from code that many other people recommend, like popular open-source libraries and tutorials. If there’s a preponderance of break statements in your code, you may be using a suboptimal approach.

    But unfortunately, for this course, your best bet is to nod, smile, and not use any break statements. Look at it as a personal learning experience; by forcing yourself to sit down and reason out how you can do something without using break statements, you might find some situations where they weren’t actually the best solution. And when you can honestly look back and say that the solution with break statements is objectively better, you’ll be able to use that approach with greater confidence in the future.
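
    To make the trade-off concrete, here’s a minimal C sketch (invented names, not from any course material): a linear search reads naturally with a break, while the break-free version has to smuggle the early exit into the loop condition:

    #include <stddef.h>

    /* With break: stop scanning as soon as the target is found. */
    ptrdiff_t find_with_break(const int *a, size_t n, int target) {
        ptrdiff_t found = -1;
        for (size_t i = 0; i < n; i++) {
            if (a[i] == target) {
                found = (ptrdiff_t)i;
                break;              /* nothing left to do in this loop */
            }
        }
        return found;
    }

    /* Without break: the loop condition has to carry the extra state. */
    ptrdiff_t find_no_break(const int *a, size_t n, int target) {
        ptrdiff_t found = -1;
        for (size_t i = 0; i < n && found == -1; i++) {
            if (a[i] == target)
                found = (ptrdiff_t)i;
        }
        return found;
    }

    Neither version is wrong; the point is just that banning break outright doesn’t automatically make the code clearer.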


  • I completely agree. And the video didn’t discuss how any of that actually happens, except to say that they send the update over radio, and to give a brief description of how the storage system on Voyager works (physically, not logically). That’s what I meant by “really nothing here”, “here” meaning “in the video”, not “in how the Voyager probe works and updates are carried out”.

    That next line, “It turns out that they update the software by sending the update by radio,” was meant to be a bit sarcastic, but I know that isn’t obvious in text, so I’ve added a signifier.


  • This is a short, interesting video, but there’s really nothing here for any competent programmer, even a fresh graduate. It turns out that they update the software by sending the update by radio (/s). The video hardly goes any deeper than that, and also makes a couple of very minor layman-level flubs.

    There is a preservation effort for the old NASA computing hardware from the missions in the 50s and 60s, and you can find videos about it on YouTube. They go into much more detail without requiring much prior knowledge about specific technologies from the period. Here’s one I watched recently about the ROM and RAM used in some Apollo missions: https://youtu.be/hckwxq8rnr0?si=EKiLO-ZpQnJa-TQn

    One thing that struck me about the video was how the writers expressed surprise that it was still working and also so adaptable. And my thought was, “Well, yeah, it was designed by people who knew what they were doing, with a good budget, led by managers whose goal was to make excellent equipment, rather than maximize short-term profits.”


  • Some of the things you mentioned seem to belong more properly in the development environment (e.g. code editor), and there are plenty of those that offer all kinds of customization and extensibility. Some other things are kind of core to the language, and you’d really be better off switching languages than trying to shoehorn something in where it doesn’t fit.

    As for the rest, GCC (and most C/C++ compilers) generates intermediate files at each of the steps that you mentioned. You can also have it perform those steps one at a time. So, if you wanted to perform some extra processing at any point, you could create your own program to do so by working with those intermediate files, and automate the whole thing with a makefile.
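
    For example, here’s a rough sketch of the stages (my-asm-tool is an invented stand-in for whatever extra processing you have in mind; each step drops straight into a makefile rule):

    gcc -E main.c -o main.i    # 1. preprocess only
    gcc -S main.i -o main.s    # 2. compile to assembly
    ./my-asm-tool main.s       # (invented) extra processing on the assembly
    gcc -c main.s -o main.o    # 3. assemble into an object file
    gcc main.o -o main         # 4. link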

    You could be on to something here, but few people seem to take advantage of the possibilities that already exist. Combine that with the fact that most newer languages/compilers deliberately remove these intermediate steps, and it suggests to me that whatever problems this situation causes may have other, existing solutions.

    I don’t know much about them myself, but have you read about the LLVM toolchain or compiler-compilers like yacc? If you haven’t, they might answer some of your questions.


  • Drawing on Japanese, which is the only non-English language I have significant experience with, object.method(parameter) would feel more natural as object.(parameter)method, possibly even replacing the period separator with a Japanese grammatical construct (with no equivalent in English) that really suits this use case. Even the alternative function(self, parameter, ...) would mesh better with natural Japanese grammar as (self、parameter、〜)function. English and many other languages order their sentences Subject-Verb-Object, but plenty of languages, Japanese among them, run Subject-Object-Verb.

    I gave an example of an alternative for...in loop in another comment here, so I won’t rehash it. But following the general flow of Japanese grammar, that for at the beginning of the statement would feel much more natural as a で (or “with”) at the end of the statement, since particles (somewhat similar to prepositions in English) go after the noun that they indicate, rather than before. And since semicolons don’t exist in Japanese either, even they might be replaced with a particle.

    There aren’t any big problems here, just a plethora of little things that can slowly add up.


  • I’m no linguist, but I have some Japanese language ability, and Japanese seems to be pretty different, grammatically, from English, so I’ll draw on it for examples. I also had a quick look at some Japanese-centric programming languages created by native speakers and found that they were even more different than I’d imagined.

    Here’s a first example, from an actual language, “Nadeshiko”. In pseudo-code, many of us would be used to a statement like the following:

    print "Hello"
    

    Here’s a similar statement in Nadeshiko, taken from an official tutorial:

    「こんにちは」と表示
    

    A naive translation of the individual words (taking some liberties with English) might be:

    "Hello" of displayment
    

    I know, I know, “displayment” isn’t a real English word, but I wanted to make it clear that the function call here isn’t even dressed up as a verb, but a noun (of a type which is often used in verb phrases… it’s all very different from English, which is my point). And with a more English-like word order, it would actually be:

    displayment of "Hello"
    

    Here’s another code sample from the same tutorial:

    「音が出ます!!」と表示。
    1秒待つ。
    「プログラミングは面白い」と話す。
    

    And another naive translation:

    "Sound comes out!!" of displayment.
    1 second wait.
    "Programming is interesting" of speak.
    

    And finally, in a more English-like grammar:

    displayment of "Sound comes out!!".
    wait 1 second.
    speak of "Programming is interesting".
    

    And here’s a for…in loop, this time from my own imagination:

    for foo in bar {  }
    

    Becomes:

    バーのフーで {  }
    

    Naively:

    Bar's Foo with {  }
    

    More English-y:

    with foo of bar {  }
    

    You may have noticed that in all of these examples, the “Japanese” code has little whitespace. Natural written Japanese language doesn’t use spaces, and it makes sense that a coding grammar devised by native speakers wouldn’t need any either.

    Now, do these differences affect the computer’s ability to compile/interpret and run the code? No, not at all. Is the imposition of English-like grammar onto popular programming languages an insurmountable barrier to entry for people who aren’t native English speakers? Obviously not, as plenty of people around the world already use these languages. But I think that it’s an interesting point, worth considering, in a community where people engage in holy wars over the superiority or inferiority of programming languages that have more in common with each other than many widely-spoken natural languages do.


  • it shouldn’t matter that much what language the keywords are in

    Another problem is that the grammars of many well-supported programming languages also mirror English/Romance language grammars. Unfortunately, dealing with that is more than just a matter of swapping out keywords.

    EDIT: I may have been unclear; I wasn’t trying to imply that this problem is greater than or even equal to the lack of documentation, tutorials, libraries, etc. Just that it’s another issue, aside from the individual words themselves, which is often overlooked by monolingual people.


  • There are several reasons that people may prefer physical games, but I want people to stop propagating the false relationship of “physical copy = keep forever, digital copy = can be taken away by a publisher’s whim”. Most modern physical copies of games are glorified digital download keys. Sometimes, the games can’t even run without downloading and installing suspiciously large day 0 “patches”. When (not if) those services are shut down, you will no longer be able to play your “physical” game.

    Meanwhile GOG, itch, even Steam (to an extent), and other services have shown that you can offer a successful, fully digital download experience without locking the customer into DRM.

    I keep local copies of my DRM-free game purchases, just in case something happens to the cloud. As long as they don’t get damaged, those copies will continue to install and run on any compatible computer until the heat death of the universe, Internet connection or no, just like an old PS1 game disc. So it is possible to have the convenience of digital downloads paired with the permanence that physical copies used to provide. It’s not an either-or choice at all, and I’m sick of hearing people saying that it is.



  • I think that a game has to be “purchasable” for $0 to have a “claim” option. If it’s just “free”, it won’t have a “claim” option. Some authors seem to switch this setting on purpose to stop people from keeping free download access to their game after the free period has ended. Sure, we could just keep the files backed up ourselves, but a) how many people really do that, and b) the author could release updates and DLC later to encourage a purchase.



  • It really depends on your expectations. Once you clarified that you meant parity with current consoles, I understood why you wrote what you did.

    I’m almost the exact opposite of the PC princesses who can say with a straight face that running a new AAA release at anything less than high settings at 4K/120fps is “unplayable”. I stopped watching/reading a lot of PC gaming content online because it kept making me feel bad about my system even though I’m very happy with its performance.

    Like a lot of patient gamers, I’m also an older gamer, and I grew up with NES, C64, and ancient DOS games. I’m satisfied with medium settings at 1080/60fps, and anything more is gravy to me. I don’t even own a 4K display. I’m happy to play on low settings at 720/30fps if the actual game is good. The parts in my system range from 5 to 13 years old, much of it bought secondhand.

    The advantage of this compared to a console is that I can still try to run any PC game on my system, and I might be satisfied with the result; no-one can play a PS5 game on a PS3.

    Starfield is the first release that (judging by online performance videos) I consider probably not worth trying to play on my setup. It’ll run, but the performance will be miserable. If I were really keen to play it I might try to put up with it, but fortunately I’m not.

    You could build a system similar to mine from secondhand parts for dirt cheap (under US$300, possibly even under US$200), although these days the price/performance sweet spot would be a few years newer.


  • I think that it’s because a) the abstraction does solve a problem, and b) the idealized solutions aren’t actually all that simple.

    But I still agree with the article because I also think that a) the problem solved by the added abstraction isn’t practical, but emotional, and b) the idealized solutions aren’t all that complex, either.

    It seems to me that many devs reach immediately for a tool or library, rather than looking into how to create their own solution, due more to fear of the unknown than a real drive for efficiency. And while learning the actual nuts and bolts of the task is rarely going to be the faster or easier option, it’s frequently (IMO) not going to be much slower or more difficult than learning how to integrate someone else’s solution. But at the end of it you’ll have learned a lot more than you would’ve by using a tool or library.

    Another problem in the commercial world is accountability to management.

    Many decades ago there used to be a saying in tech: “No-one ever got fired for buying IBM.” What that meant was that even if IBM’s solution was completely beaten by something offered by one of their competitors, you personally may still be better off overall going with IBM. The reason being, if you went with the competitor, and everything worked out, the less tech-savvy managers were just as likely to pat you on the back as to assert that the IBM solution would’ve been even better. If the competitor’s solution didn’t meet expectations, you’d be hauled over the coals for going with some cowboy outfit instead of good old reliable IBM. Conversely, if you went with IBM and everything worked, everyone would be happy. But if you chose IBM and the project failed, it’d be, “Well, it’s not your fault. Who could’ve predicted that IBM wouldn’t come through?”

    In the modern era, replace “IBM” with the current tool-of-the-month, and your manager will be demanding to know why you’re wasting time reinventing the wheel on the company’s dime.


  • I think a part of it is how we look for information in the first place. If you search/ask “How do I do (task) in (environment)?”, you’re going to find out about various libraries/frameworks/whatever that abstract everything away for you. But if you instead look for information on “How do I do (task)?”, you’ll probably get more generalized information that you can take and use to write your own stuff from scratch. Try only to look for help related to your specific environment/language when you have a specific implementation issue, like how to access a file or get user input.

    We also need a willingness to learn how things actually work. I see quite a few folks who seem so worried that they’ll never be able to understand some task that they unwittingly spend almost as much time and effort, or even more, learning all the ins and outs of someone else’s codebase as a way to avoid what they see as the scarier unknown.

    Fortunately, I’ve seen an increase in the last year or two of people deliberately giving answers or writing tutorials that are “no-/low-library”, for people who want to know what’s actually going on in their programs.

    I would never say to avoid all libraries or frameworks, because many of them are well-written (small, modular, stable) and can save us a lot of boilerplate coding. But there are at least as many libraries which suffer from “kitchen-sinkism”, where the authors want so much for their library to become the pre-eminent choice that it becomes a bloated tangle, trying to be all things to all people. This can be compounded by less-experienced coders including multiple huge libraries in one program, using only a fraction of each library’s features without realizing that there’s almost complete overlap. The cherry on top is when the end developer uses one of these libraries to do just one or two small tasks that could’ve been done in less than a dozen lines of standard code, if only someone had told them how, instead of sending them off to install yet another library.
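
    For a concrete illustration of that last kind of task, here’s a hypothetical C sketch (names invented): left-padding a string, famously the subject of an entire npm package, is only a handful of lines of standard code:

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical example: pad a string with a fill character up to a
     * minimum width, using nothing but the standard library. */
    void left_pad(char *dst, size_t dstsize, const char *src,
                  size_t width, char fill) {
        if (dstsize == 0) return;
        size_t len = strlen(src);
        size_t pad = (width > len) ? width - len : 0;
        if (pad >= dstsize) pad = dstsize - 1;   /* leave room for the text */
        memset(dst, fill, pad);
        snprintf(dst + pad, dstsize - pad, "%s", src);
    }

    int main(void) {
        char buf[32];
        left_pad(buf, sizeof buf, "42", 5, '0');
        printf("%s\n", buf);   /* prints 00042 */
        return 0;
    }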