I have a db with a lot of data that all needs precise summarisation; I'd do it myself if it weren't 20 thousand fields long.

It is about 300k tokens, and Gemini 2.5 struggles with it, missing points and making up facts.

Separating the data into smaller sections is not an option, because even when separated the sections can take up 30k tokens, and the info that needs summarisation may span 100k-token ranges.

I've learnt that fine-tuning may give better results than general-purpose models, and now I'm wondering if there is anything with a high token count that's suited to summarisation.

Any help would be appreciated, even if it's just to suggest another general-purpose model with better coherence.

  • hendrik@palaver.p3x.de
    4 days ago

    From my personal experience, I'd say generative AI isn't the best tool for summarization. It also frequently misses the point when I try, or makes up additional facts that weren't in the input text. (Or it starts going off on (wrong) tangents despite the task being to keep it short and concise.) And I'd say all(?) models do that, even the ones that are supposed to be big and clever.

    Edit: Lots of people use ChatGPT etc. for summarization, though, so I really don't know who's right here. Maybe my standards are too high, but the output I've seen from models small and large, including ChatGPT, wasn't great.

    There are other approaches in NLP. For example extractive summarization, which pulls sentences straight out of the input text, so it's precise and can't make facts up. Some Lemmy bot uses LsaSummarizer for that, but I don't really know how it works in detail. There are also smaller dedicated summarization models like BART from Facebook. Or maybe you can re-think what you're trying to do and use RAG instead of summarization.
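
    If you want to try the extractive route, here's a minimal sketch using the Python sumy package (which is where LsaSummarizer lives, assuming that's the one the bot uses; you'd also need the NLTK "punkt" tokenizer data):

        # pip install sumy nltk   (then download the "punkt" data once via nltk)
        from sumy.parsers.plaintext import PlaintextParser
        from sumy.nlp.tokenizers import Tokenizer
        from sumy.summarizers.lsa import LsaSummarizer

        text = open("record.txt", encoding="utf-8").read()  # one record/section of your data
        parser = PlaintextParser.from_string(text, Tokenizer("english"))
        summarizer = LsaSummarizer()

        # Extractive: every output sentence is copied verbatim from the input
        for sentence in summarizer(parser.document, sentences_count=5):
            print(sentence)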

  • SmokeyDope@lemmy.worldM
    4 days ago

    As the other commenter said, your workflow requires more than what LLMs are currently capable of.

    Summarization capability in an LLM comes down to how well it can stay coherent as the context grows, combined with how well it can navigate the internal structure of the text and distill its concepts and context into discrete points.

    That technical jargon boils down to this: an LLM's summarization capability depends on its parameter count and on having enough VRAM for long context lengths. Higher-parameter, less-quantized models maintain more coherence over long conversations/datasets.

    While enterprise LLMs can get up to 128k tokens while maintaining some level of coherence, local models at medium quantization can handle 16-32k reliably. Theoretically a 70B might manage around 64k tokens, but even that's stretching it.

    Then comes the problem of transformer attention. You can't just put a whole book's worth of text into an LLM's input and expect it to inspect every part in real detail. For best results you have to chunk it section by section, chapter by chapter.
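
    Chunking can be as simple as splitting on token counts. Rough sketch, using tiktoken as a stand-in tokenizer (your model's actual tokenizer will count slightly differently, and the chunk size is just an example):

        # pip install tiktoken
        import tiktoken

        def chunk_text(text: str, max_tokens: int = 16000, overlap: int = 500) -> list[str]:
            """Split text into token-bounded chunks with a little overlap between them."""
            enc = tiktoken.get_encoding("cl100k_base")
            tokens = enc.encode(text)
            chunks, start = [], 0
            while start < len(tokens):
                window = tokens[start:start + max_tokens]
                chunks.append(enc.decode(window))
                start += max_tokens - overlap  # overlap preserves info that straddles chunk borders
            return chunks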

    So local LLMs may not be what you're looking for. If you're willing to go enterprise, then Claude Sonnet and DeepSeek R1 might be good, especially if you set up an API interface.
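
    The API side is only a few lines. For example with Claude, a sketch using the anthropic Python SDK (the model name below is a placeholder, check their docs for current ones):

        # pip install anthropic
        import anthropic

        client = anthropic.Anthropic(api_key="YOUR_KEY")

        def summarize(chunk: str) -> str:
            msg = client.messages.create(
                model="claude-sonnet-latest",  # placeholder, use a real model id from the docs
                max_tokens=1024,
                messages=[{"role": "user", "content": f"Summarize precisely, add nothing:\n\n{chunk}"}],
            )
            return msg.content[0].text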

    • Omega@discuss.onlineOP
      4 days ago

      I have attempted those solutions. R1 was best, but even then I would have to chunk it. It may be possible to feed it an extensive summary of the previous information to get better summaries (maybe).

      Gemini is good up to 200k. Scout is good up to 100k. R1 was always good, right up to its context limit.

      • SmokeyDope@lemmy.worldM
        4 days ago

        You can try using VSCode + Roo to chunk it intelligently and autonomously. Get an API key from your LLM provider of choice, put your data into a text file, then edit the Roo agent personalities (set to coding by default): add and select a custom summarizer persona for Roo to use, then tell it to summarize the text file.

      • pepperfree@sh.itjust.works
        4 days ago

        So something like

        Previously the text talked about [last summary]
        [The instruction prompt]...
        [Current chunk/paragraphs]
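
        In code, that rolling-summary pattern is just a loop over the chunks, carrying the last summary forward. The sketch below assumes an OpenAI-compatible endpoint via the openai package; the base URL and model name are placeholders:

            # pip install openai  (the client works with any OpenAI-compatible endpoint)
            from openai import OpenAI

            client = OpenAI(api_key="YOUR_KEY", base_url="https://api.example.com/v1")  # placeholder endpoint

            def rolling_summarize(chunks: list[str], instruction: str, model: str = "your-model") -> str:
                summary = ""
                for chunk in chunks:
                    prompt = (
                        f"Previously the text talked about: {summary}\n\n"
                        f"{instruction}\n\n"
                        f"{chunk}"
                    )
                    resp = client.chat.completions.create(
                        model=model,  # placeholder, e.g. whichever R1 variant your provider offers
                        messages=[{"role": "user", "content": prompt}],
                    )
                    summary = resp.choices[0].message.content  # carry the running summary forward
                return summary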