• 5 Posts
  • 50 Comments
Joined 2 years ago
Cake day: July 13th, 2023

  • Yeah, I’m thinking this one may be special-cased; perhaps they wrote a generator of river-crossing puzzles with a corresponding conversion to “is_valid_state” or some such. I should see if I can get it to write something really ridiculous into “is_valid_state”.

    The other thing is that in real life it’s like “I need to move 12 golf carts, one has a low battery, I probably can’t tow more than 3 uphill, I can ask Bob to help but he will be grumpy…”, just a tremendous amount of information (most of it irrelevant) with a tremendous number of possible moves (most of them possible to eliminate by actual thinking).
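    For reference, the validity check for the classic wolf/goat/cabbage version of such a puzzle is tiny; this is a hypothetical sketch of the kind of “is_valid_state” a generator might emit (names invented, not anything an actual benchmark is known to use):

```python
# Hypothetical sketch of an "is_valid_state" for the classic
# wolf/goat/cabbage river-crossing puzzle. A state maps each
# participant to a bank, "L" or "R"; all names here are invented.

def is_valid_state(state):
    # Invalid if the wolf is alone with the goat, or the goat alone
    # with the cabbage, without the farmer on the same bank.
    if state["wolf"] == state["goat"] != state["farmer"]:
        return False
    if state["goat"] == state["cabbage"] != state["farmer"]:
        return False
    return True

# The farmer rows off and leaves the wolf with the goat:
print(is_valid_state({"farmer": "L", "wolf": "R", "goat": "R", "cabbage": "L"}))  # False
```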



  • Pre-LLM, I had to sit through one or two annual videos to the effect of “don’t cut and paste from open source; better yet, don’t even look at GPL’d code you aren’t working on”, and had to do a click test with questions like “is it OK if you rename all the variables, yes/no”. Oh, and I had to run a scanning tool as part of the release process.

    I don’t think it’s the FSD they would worry about, but the GPL, especially v3. Nobody gives a shit if it steals some leetcode snippet, or cuts and pastes some calls to a stupid API.

    But if you have a “coding agent” just replicating GPL code wholesale, thousands and thousands of lines, it would be very obvious. And not all companies ship shitcode. Apple is a premium product, and ages-old, long-patched CVEs from open source cropping up in there wouldn’t exactly be premium.





  • Other funny thing: it only became a fully automatic plagiarism machine when it claimed that it wrote the code (referring to itself by name, which is a dead giveaway that the system prompt makes it do that).

    I wonder if code is where they will ultimately get nailed to the wall for willful copyright infringement. Code is too brittle for their standard approach of “we sort of blurred a lot of works together so it’s ours now, transformative use, fuck you, prove that you don’t just blur other people’s work together, huh?”.

    But also, for a piece of code you can very easily test whether two pieces of code have the same “meaning” - you can implement a parser that converts code to an expression graph, and then compare the graphs. Which also makes it far easier to output code that is functionally identical to the code being plagiarized but looks very different.
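    As a toy illustration of the idea (using Python’s ast module rather than a full expression graph - my substitution, not anyone’s actual detection pipeline): canonicalize identifier names, then compare the trees. Two functions that differ only in naming come out identical:

```python
# Toy sketch: canonicalize identifier names in Python ASTs, then compare
# the dumps. Renaming variables no longer hides that two snippets match.
import ast

class Canonicalize(ast.NodeTransformer):
    def __init__(self):
        self.names = {}

    def _canon(self, name):
        # First identifier seen becomes v0, the next v1, and so on.
        return self.names.setdefault(name, f"v{len(self.names)}")

    def visit_FunctionDef(self, node):
        node.name = self._canon(node.name)
        self.generic_visit(node)
        return node

    def visit_arg(self, node):
        node.arg = self._canon(node.arg)
        return node

    def visit_Name(self, node):
        node.id = self._canon(node.id)
        return node

def fingerprint(src):
    return ast.dump(Canonicalize().visit(ast.parse(src)))

a = "def area(w, h):\n    return w * h + w"
b = "def size(x, y):\n    return x * y + x"
print(fingerprint(a) == fingerprint(b))  # True
```

    A real comparison would also have to normalize expression reordering, inlining, and so on, which is exactly why an expression-graph representation is the stronger version of this.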

    And I estimate approximately 0% probability that the assholes working on that don’t have banter among themselves about copyright laundering.

    edit: Another thing is that since it can have no conception of its own of what “correct” behavior is for a piece of code being plagiarized, it will also plagiarize all the security exploits.

    This hasn’t been a big problem for the industry before, because only short snippets were being cut and pasted (how to make some stupid API call, etc.), but with generative AI, whole implementations are going to get plagiarized wholesale.

    Unlike any other kind of work, code comes with its own built-in, essentially irremovable “watermark” in the form of security exploits. In several thousand lines of code, there would be enough “watermark” for identification.



  • Having worked in computer graphics myself, I can confirm it is spot on that this shit is uncontrollable.

    I think the reason is fundamental - if you could control it more tightly, you would push the output too far from any of the training samples.

    That being said, video enhancement along the lines of applying this as a filter to 3D-rendered CGI or to another video could (to some extent) work. I think the perception of realism will fade as it gets more familiar - it is pretty bad at lighting, but in a new way.



  • Still seems terminally AI-pilled to me, even an iteration or two later. “5-digit multiplication is borderline” - how is that useful?

    I think it’s a combination of this being the pinnacle of billions and billions of dollars, and probably of their firing people for the slightest signs of AI skepticism. There’s another data point: “reasoning math & code” was released as stable by Google without anyone checking if it can do any kind of math.

    edit: imagine that a calculator manufacturer in the 1970s is so excited about microprocessors that they release an advanced scientific calculator that can’t multiply two 6-digit numbers (while their earlier discrete-component model could). Outside the crypto sphere, that sort of insanity is new.






  • There was a directive that if it were asked a math question “that you can’t do in your brain” (or some very similar language), it should forward it to the calculator module.

    The craziest thing about leaked prompts is that they reveal the developers of these tools to be complete AI-pilled morons. How in the fuck would it know whether it can or can’t do it “in its brain”, lol.

    edit: and of course, their equally idiotic fanboys go “how stupid of you to expect it to use a calculating tool just because it said it used a calculating tool” any time you have some concrete demonstration of it sucking ass, while the same kind of people laud the genius of system prompts, half of which are asking it to meta-reason.


  • Thing is, it has tool integration. Half of the time it uses Python to do the calculation. Using a tool means it writes a string that isn’t shown to the user, which triggers the tool, and the tool’s results are appended to the stream.
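    A minimal sketch of that loop, for concreteness (the [run_tool]/[tool_result] markers and the toy “model” are invented for illustration; this is not Gemini’s actual format):

```python
# Minimal sketch of a tool-call loop. The model emits a hidden marker
# string; the runtime executes the tool and appends the result to the
# stream the model continues from. All marker names are invented.

def run_tool(expr):
    # Stand-in "python" tool: evaluate a bare arithmetic expression.
    return str(eval(expr, {"__builtins__": {}}))

def model(transcript):
    # Stand-in for the LLM: call the tool once, then answer from its result.
    if "[tool_result]" not in transcript:
        return "[run_tool]847563 * 912834[/run_tool]"
    return "The answer is " + transcript.rsplit("[tool_result]", 1)[1]

def chat(user_msg):
    transcript = user_msg
    reply = model(transcript)
    while "[run_tool]" in reply:
        expr = reply.split("[run_tool]")[1].split("[/run_tool]")[0]
        # The tool call and its result are appended, hidden from the user.
        transcript += "[tool_result]" + run_tool(expr)
        reply = model(transcript)
    return reply

print(chat("What is 847563 * 912834 exactly?"))
```

    The point is that the tool result only enters the stream if the [run_tool] tokens were actually emitted first - which is what makes “claims tool use without the tokens” so strange.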

    What is curious is that instead of a request for precision causing it to use the tool (or just any request to do math), and then the presence of the tool tokens causing it to claim that a tool was used, the request for precision causes it to claim that a tool was used, directly.

    Also, all of it is highly unnatural text, so it is either coming from fine-tuning or from training-data contamination.


  • “misinterpreted as deliberate lying by ai doomers”

    I actually disagree. I think they correctly interpret it as deliberate lying, but they misattribute the intent to the LLM rather than to the company making it (and its employees).

    edit: it’s like when you are watching TV and ads come on, and you say that a very, very flat demon who lives in the TV is lying, because the bargain with the demon is that you get to watch entertaining content in exchange for listening to its lies. That’s fundamentally correct about the lying, just not about the very flat demon.


  • Hmm, fair point, it could be training data contamination / model collapse.

    It’s curious that it is a lot better at converting free-form requests for accuracy into assurances that it used a tool than into actually using a tool.

    And when it uses a tool, it leaves a bunch of fixed-form tokens in the log. It’s a much more difficult language-processing task to assure me that it used a tool conditional on my free-form, indirect implication that the result needs to be accurate, than to assure me it used a tool conditional on actual tool use.

    The human equivalent of this is “pathological lying”, not “bullshitting”. I think a good term for it is “lying sack of shit”, with the “sack of shit” part specifying that “lying” makes no claim about any internal motivations or the like.

    edit: also, testing it on 2.5 Flash is quite curious: https://g.co/gemini/share/ea3f8b67370d . I did that sort of query several times and it follows the same pattern: it doesn’t use a calculator and assures me the result is accurate; if asked again, it uses a calculator; if asked whether the numbers are equal, it says they are not; if asked which one is correct, it picks the last one and argues that the last one actually used a calculator. I haven’t ever managed to get it to output a correct result and then follow up with an incorrect result.

    edit: If I use the wording “use an external calculator”, it gives a correct result, and then I can’t get it to produce an incorrect result to see whether or not it just picks the last result as correct.

    I think this is lying without scare quotes, because it is a product of Google putting a lot more effort into trying to exploit the Eliza effect to convince you that it is intelligent than into actually making a useful tool. It, of course, doesn’t have any intent, but Google and its employees do.



  • The other interesting thing is that if you try it a bunch of times, sometimes it uses the calculator and sometimes it does not. It, however, always claims that it used the calculator, unless it didn’t and you tell it that the answer is wrong.

    I think something very fishy is going on, along the lines of them having done empirical research and found that fucking up the numbers and lying about it makes people more likely to believe that Gemini is sentient. It is a lot weirder (and a lot more dangerous, if someone used it to calculate things) than “it doesn’t have a calculator” or “poor LLMs can’t do math”. It gets a lot of digits correct somehow.

    Frankly, this is ridiculous. They have a calculator integrated into Google Search. That they don’t have one in their AIs feels deliberate, particularly given that there are plenty of LLMs that actually run a calculator almost all of the time.

    edit: lying that it used a calculator is rather strange, too. Humans don’t say “code interpreter” or “direct calculator” when asked to multiply two numbers. What the fuck is a “direct calculator”? Why is it talking about a “code interpreter” and a “direct calculator” conditional on there being digits (I never saw it say that it used a “code interpreter” when the problem wasn’t mathematical), rather than conditional on a [run tool] token having been output earlier?

    The whole thing is utterly ridiculous. Clearly, for it to say that it used a “code interpreter” or a “direct calculator” (whatever that is), it had to be fine-tuned to say that - in response to a bunch of numbers, rather than in response to the [run tool] token it uses to run a tool.

    edit: basically, congratulations, Google, you have halfway convinced me that an “artificial lying sack of shit” is possible after all. I don’t believe that tortured phrases like “code interpreter” and “direct calculator” actually came from the internet.

    These assurances - coming from an “AI” - seem like they would make the person asking the question less likely to double-check the answer (and perhaps less likely to click the downvote button). In my book, this qualifies them as lies, even if I consider the LLM to be no more sentient than a sack of shit.