I have studied various Christian religions and have liked the teachings of the Mormons (they currently prefer to be called “members of the restored Church of Jesus Christ”).

I generally try to abide by 3 Ne 11:29-30. I think my favorite scripture is 1 Ne 11:17 as it answers substantially all questions with faith and humility until you have time to properly study it out.

I am prone to talk about what I believe in a manner that I think gives respect all around, on topics like the Epicurean paradox, the Nicene Creed, polygamy and Judaism, etc.

I feel like I have a few strengths that I would love to share with those curious: my method of praying as a two-way conversation, my affinity for administration, and the “hiding in plain sight” cheats for staying in control during persecution, dreams, and restrictive behavioral loops.

  • 2 Posts
  • 64 Comments
Joined 2 years ago
Cake day: December 13th, 2023

  • You are right, and I have seen some people try some clumsy solutions:

    Have the LLM summarize the chat context (this loses information, but can make the LLM appear to have a longer memory).

    Have the LLM repeat and update a to-do list at the end of every prompt (this keeps it on task, as it always has the last response in memory, BUT it can try to do 10 things, fail on step 1, and not realize it).

    Have an LLM trained on really high-quality data, then have it judge the randomness of the internet. This is metacognition by humans using the LLM as a tool for itself. It definitely can’t do it by itself without becoming schizophrenic, but it can make some smart models from inconsistent and crappy/dirty datasets.

    Again, you are right, and I hate using the sycophantic Clockwork Orange LLMs with no self-awareness. I have some hope that they will get better.
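    The “summarize the chat context” workaround above can be sketched roughly like this. This is a toy illustration, not anyone’s real implementation: `summarize` is a stand-in for an actual LLM call, and the character-count budget stands in for a real token budget.

    ```python
    def summarize(turns):
        # Placeholder: a real system would ask the model to compress these turns.
        return "SUMMARY(" + "; ".join(t[:20] for t in turns) + ")"

    def trim_context(history, budget=200):
        """Keep recent turns verbatim; fold older ones into a lossy summary."""
        if sum(len(t) for t in history) <= budget:
            return history
        # Keep the most recent turns that fit in half the budget.
        kept, size = [], 0
        for turn in reversed(history):
            if size + len(turn) > budget // 2:
                break
            kept.append(turn)
            size += len(turn)
        kept.reverse()
        old = history[: len(history) - len(kept)]
        return [summarize(old)] + kept
    ```

    The lossiness is visible right in the sketch: everything folded into `summarize` can never be recovered, which is exactly why the model only *appears* to have a longer memory.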


  • I’m now curious what you mean by a self-updating model. For a model to be made, it needs billions of data points to make the first one. A second can be made with the first judging the quality of the input into the second; some models already do this sifting in preparation for the next model’s creation.

    I think of it like humans: we have billions of sensory signals each day, and we judge what is important based on genetics, culture, and our chosen interpretation of morality (e.g. hedonism weighs effort/discomfort). If an LLM had a billion sensory signals each day and application-specific hardware like our genetics, would the hardware finally allow you to call it intelligent?

    I am turning into a philosopher in this comment thread! Soo… when is a chair a chair and not a stool?


  • When you say “And there is also a cost in the brain atrophy that these are causing in people who use them regularly. LLMs will make a huge segment of the population mentally and emotionally stunted.” I have difficulty believing it.

    An MIT professor did a study where she asked students to use LLMs to write a paper in one prompt, then copy, paste, and submit; she checked the next week and found that none of them remembered their essays, while everyone who actually wrote their essay remembered it and had good brain scans. But this is weak evidence, because there are counterexamples: people trying to learn a language learn faster and articulate better with an LLM as a tailoring tutor, and people feeling isolated at work use LLMs as a “rubber duck and/or inexperienced intern from a foreign country,” where the daily collaborator increases the human’s creativity, productivity, and mental health. As such, I don’t see conclusive evidence either way.

    Generally I find that using LLMs for brainstorming has led me to think outside of my tunnel vision. I have found that using an LLM for vibe coding (I am not a programmer) is a lot of frustration: they are too verbose for their own “time savings.”

    I think social media and short-form videos like TikTok do far more harm to attention spans than the rounding error of an LLM. I think attention spans are more closely linked with regularly handling one’s own boredom.

    Thanks for your engagement; I look forward to conversations like this. Also, if you have a source for your claim of brain atrophy, please share it!




  • Well, a local model responding to a prompt on less than 20 GB of VRAM (a gaming computer) costs less power than booting up any recent AAA high-FPS game. The primary power cost is R&D: training the next model to be “the best” is an arms race, and 90% of power consumption is trying to train the next model in 100 different ways. China was able to build off of ChatGPT-era tech and produce a model with similar abilities and smartness for only about $5 million. I think I won’t update my local model until the next one actually offers more capability.


  • Talking about rubber-duck intelligence, there is a two-step “think, then respond” process that recent iterations of LLMs have started using. It is literally a rubber duck during the thinking phase. I downloaded a local LLM with this feature and ran it, and the CLI did not hide the “thinking” once done. The end product was better quality than if it had tried to spit out an answer immediately. (I toggled thinking off and it was definitely dumber, so I think you are right for the generations of LLMs before “thinking.”)
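    The control flow of that two-step pattern can be sketched in a few lines. This is a hedged toy sketch, not any vendor’s actual pipeline: `model` is a stand-in for two real calls to a local LLM, and the prompt wording is invented for illustration.

    ```python
    def think_then_respond(model, question):
        """Two-phase generation: draft reasoning first, then answer with it in view."""
        # Phase 1: the model "talks to the rubber duck" about the problem.
        reasoning = model(f"Think step by step: {question}")
        # Phase 2: the model answers with its own draft reasoning in context.
        answer = model(
            f"Question: {question}\n"
            f"Draft reasoning: {reasoning}\n"
            f"Now give only the final answer."
        )
        return reasoning, answer
    ```

    The point of the pattern is visible in phase 2: the final answer is conditioned on the model’s own scratch work, which is why hiding or disabling the thinking changes output quality.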


  • If you have an LLM, then you have something to have a conversation with.

    If you give the LLM memory, then it gets much “smarter” (a context window and an encoded local knowledge base).

    If you give the LLM the ability to offload math problems to an orchestrator, then it can give hard numbers based on real math.

    If you give an LLM the ability to search the internet, then it has a way to update its knowledge base before it answers (it seems smarter).

    If you give an LLM an orchestrator that can use a credit card on the internet, it can deliver stuff via DoorDash or whatever (I don’t trust it).

    Definitely not general intelligence, but good at general conversation and brainstorming, and able to be extended modularly while keeping the conversational interface.
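    The math-offloading step can be sketched as a tiny router. This is a minimal illustration under my own assumptions, not a real orchestrator: plain arithmetic gets parsed and computed exactly, and everything else falls through to the model (here a stand-in function).

    ```python
    import ast
    import operator

    # Only plain binary arithmetic is handled; anything else is rejected.
    OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv}

    def eval_arith(node):
        """Recursively evaluate a whitelisted arithmetic AST; refuse anything else."""
        if isinstance(node, ast.Expression):
            return eval_arith(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](eval_arith(node.left), eval_arith(node.right))
        raise ValueError("not plain arithmetic")

    def route(query, llm):
        """Send arithmetic to the exact evaluator, everything else to the LLM."""
        try:
            return eval_arith(ast.parse(query, mode="eval"))
        except (ValueError, SyntaxError):
            return llm(query)
    ```

    The design point is the whitelist: the orchestrator never trusts the model with arithmetic it can compute itself, which is where the “hard numbers based on real math” come from.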



  • ProbablyBaysean@lemmy.ca to Science Memes@mander.xyz · Insulin · 6 days ago

    If a transaction occurs for no consideration, e.g. a gift, then there is always a chance for a progenitor to sue and claim rights because the transaction “never happened.” This comes up when a company acquires another and tries to strip benefits: the acquiring company fires and rehires all the employees so that the rehire is the consideration. I have personally reviewed hundreds of land sales for $10 in Texas, done so that there is legally binding consideration exchanged. Functionally it is a legally bulletproof gift.






  • Trying to see a different perspective: a professor fed the contents of his course to an AI (textbook, lesson plans, and recordings of lectures), then had that AI take the CPA exam, and it passed with flying colors. If the same professor is “on call” during the lesson but doing research in the other room, and he periodically posts a news article with a few of his knee-jerk responses about how it may affect the profession (which adds to the AI’s local knowledge base and emphasizes it as it happens), I am not sure how much is lost. This may give great outcomes with a huge reduction in redundant costs (the same lectures with minor tweaks).

    Edit: as this community is “fuck ai”, I thought it allowed discussions about impacts that were more than attacking people. I believe the scenario I mentioned would have severe costs to quality of education, because some students need someone successful to mimic or they are lost (and you can’t mimic an AI). From the other perspective, though, it may not be all bad for certain professions.



  • According to Maslow’s hierarchy of needs, fun is achieved when (1) perceived basic needs (e.g. food, shelter, and meds) and (2) perceived safety and belonging are met; then (3) self-esteem, e.g. fun, can be had.

    Like any model that covers something as diverse and nuanced as human experience and motivation, this is nothing more than an observed starting point.

    TL;DR: you ain’t having fun because the basics are hard, for either external or internal reasons.




  • I would like a way to track my use of FOSS, but I want to retain my privacy, so I would be interested in this app. I would also like a different way to allocate funds, so that apps that increase my efficiency (i.e. save me a long time troubleshooting) get the bigger slice. Perhaps an optional “impact” survey with varying degrees of granularity: thumbs up/down only, 5 stars only, or a 1–100 scale. Honestly, this would be really cool if adoption got so high that it became the “Patreon” of Linux apps (i.e. having a “like” at the bottom that would remind you of high-impact apps).
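    The impact-weighted allocation idea can be sketched in one small function. This is a hypothetical sketch of my own suggestion, not the app’s design: a fixed donation is split among apps in proportion to whatever impact scores the survey produces (thumbs, stars, and 1–100 all normalize the same way).

    ```python
    def allocate(donation, impact_scores):
        """Split `donation` among apps in proportion to their impact scores."""
        total = sum(impact_scores.values())
        if total == 0:
            # No feedback yet: fall back to an even split.
            return {app: donation / len(impact_scores) for app in impact_scores}
        return {app: donation * score / total
                for app, score in impact_scores.items()}
    ```

    Because only the ratios matter, the survey granularity (thumbs vs. stars vs. 1–100) changes the resolution of the split but not the mechanism.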