• Truscape@lemmy.blahaj.zone · 14 points · 3 days ago (edited)

    I remember those days - using my 3080 Ti to test the latest Pygmalion model and reporting feedback to the developers on how understandable the outputs were…

    I didn’t use my self-hosted setup for horny though - I tried being a DM for a D&D session with fictional characters :)

    • Landless2029@lemmy.world · 5 points · 2 days ago

      This is one reason I wanted to get into LLM D&D stuff. I’ve tried virtual DMs and they suck due to hallucinations.

      Never tried the opposite. Would be interesting to DM and have the LLM be 4 players.

      • Truscape@lemmy.blahaj.zone · 2 points · 2 days ago

        One thing I learned is that you can’t rely on an inventory system (there’s no persistence), so you’re basically always running a one-shot campaign no matter what. After the novelty wore off, I found that quite boring and just played D&D on Tabletop Simulator or Roll20 with Discord friends instead.

        • Landless2029@lemmy.world · 1 point · 2 days ago

          I was wondering if I could set up an external YAML file for the pipeline to take notes in and reload, to help with the persistence issues.
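
          Something like this is what I had in mind - just a rough sketch, with the file name, schema, and hook points all made up, and only PyYAML assumed: the pipeline loads the YAML before each turn, injects a short summary into the prompt, and writes any updates back afterwards.

            # Hypothetical helper the pipeline could call before/after each turn
            # to keep party state outside the context window. Schema is made up.
            import yaml
            from pathlib import Path

            STATE_FILE = Path("party_state.yml")  # hypothetical location

            def load_state() -> dict:
                """Return the saved party state, or an empty default on first run."""
                if STATE_FILE.exists():
                    return yaml.safe_load(STATE_FILE.read_text()) or {}
                return {"inventory": {}, "quest_log": []}

            def save_state(state: dict) -> None:
                """Write the party state back to disk after the turn resolves."""
                STATE_FILE.write_text(yaml.safe_dump(state, sort_keys=False))

            def state_as_prompt(state: dict) -> str:
                """Render the state as a reminder to prepend to the system prompt."""
                items = ", ".join(f"{k} x{v}" for k, v in state["inventory"].items()) or "nothing"
                return f"Party inventory (authoritative - do not invent items): {items}"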

          • Truscape@lemmy.blahaj.zone · 1 point · 2 days ago

            I’ve never tried something like that, but even if you do remind the party what they have equipped or in reserve, they’ll just make items up. It’s frustrating.

    • brucethemoose@lemmy.world · 5 points · 3 days ago (edited)

      Yeah I was being a bit facetious. There really was a lot of roleplaying and other neat things (like dungeon masters) that motivated people.

      That, and there are some even earlier, more primitive (and less horny) RP models, like Janeway and some others named after ST captains.

      By the way, there are some pretty awesome dungeon master finetunes that would fit on a 3080 Ti these days.

      • Killer_Tree@sh.itjust.works · 3 points · 3 days ago

        Can you name-drop any recommended DM finetunes? Any time I try to do model research I end up down rabbit holes and very confused…

        Appreciated!

        • brucethemoose@lemmy.world · 4 points · 2 days ago (edited)

          Oh, there are so many… Yeah, it’s a rabbit hole.

          For now, check out:

          https://huggingface.co/LatitudeGames/Harbinger-24B (and literally anything from Latitude Games, who explicitly specialize in dungeon master models for their site).

          https://huggingface.co/PocketDoc/Dans-DangerousWinds-V1.1.1-24b

          https://huggingface.co/Gryphe/Codex-24B-Small-3.2

          24Bs are very tight on your card (but so smart they’re worth it), so you will want ~3.6bpw (10 GB-ish) exl3 quantizations to minimize the quantization loss and keep them fast. They’re easy to make yourself if you know a little command line and have decent internet; I can walk you through it.
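
          The napkin math behind that 10 GB figure, in case you want to sanity-check it (weights only - the KV cache and a bit of overhead come on top):

            # Weight-only size estimate; KV cache and runtime overhead come on top.
            params = 24e9   # ~24 billion parameters
            bpw = 3.6       # target bits per weight
            print(f"{params * bpw / 8 / 1e9:.1f} GB")  # ~10.8 GB of weights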

          Or I can just quantize these three models just for you, overnight, if you wish. Maybe check how much VRAM your desktop takes up at idle so I can size them right, and let me know.

          • Killer_Tree@sh.itjust.works · 1 point · 2 days ago

            Thank you very much! These all look very interesting and I’m excited to try them out.

            I’ve never quantized a model before (I usually find pre-quantized versions) but I would love to learn how. If you can provide the command-line details for doing so, or point me towards a good resource, that would rock!

            • brucethemoose@lemmy.world · 4 points · 2 days ago (edited)

              So first of all, you run exl3s via tabbyAPI + your frontend of choice: https://github.com/theroyallab/tabbyAPI

              Check out their docs. Specific settings I’d recommend are like 16K context and “6,5” cache quantization. For example, these are some changed lines plucked from my own config files:

                # Backend to use for the model (default: exllamav2)
                # Options: exllamav2, exllamav3
                backend: exllamav3
              
                # Max sequence length (default: Empty).
                # Fetched from the model's base sequence length in config.json by default.
                max_seq_len: 16384
              
                # Enable different cache modes for VRAM savings (default: FP16).
                # Possible values: 'FP16', 'Q8', 'Q6', 'Q4'.
                # For exllamav3, specify the pair k_bits,v_bits where k_bits and v_bits are integers from 2-8 (i.e. 8,8).
                cache_mode: 6,5
              
                # Chunk size for prompt ingestion (default: 2048).
                # A lower value reduces VRAM usage but decreases ingestion speed.
                # NOTE: Effects vary depending on the model.
                # An ideal value is between 512 and 4096.
                chunk_size: 512
              
              

              Now, to make a quantized model, you just download/install the exllamav3 repo (which you install for tabbyAPI anyway) and follow its documentation: https://github.com/turboderp-org/exllamav3/blob/master/doc/convert.md

              An example command would be: `python convert.py -i "/Path/to/model" -o "/output/directory" --work_dir "temporary/work/directory" -b 3.2 -hb 6`

              You probably want, like, 3.2 bits per weight (the ‘-b’ flag).


              …But that’s not how I would quantize it. If I were you, since the ~3bpw range is so sensitive to quantization, I’d use a custom per-layer quantization scheme described here: https://old.reddit.com/r/LocalLLaMA/comments/1mqwt76/optimizing_exl3_quants_by_mixing_bitrates_in/

              The process is like this: you either make or download 3bpw and 4bpw variants of the model you desire, like say, this one for 4bpw:

              https://huggingface.co/MetaphoricalCode/Harbinger-24B-exl3-4bpw-hb6

              And make a 3bpw yourself (since I don’t see one available for Harbinger 24B).

              Then, you “mix” the two models you’ve made with a command like this:

              python util/recompile.py -or overrides.yml -o "/output/folder" -i "/path/to/your/3bpw-exl3-quantization"

              And the overrides.yml file looks like:

              sources:
                - id: 4
                  model_dir: /path/to/4bpw-exl3-quantization
              
              overrides:
                #   Attention & router tensors – cheap, big gain on MoE models
                - key: "*.self_attn.q_proj*"
                  source: 4          # +1 bpw
                - key: "*.self_attn.k_proj*"
                  source: 4          # +1 bpw
                - key: "*.self_attn.v_proj*"
                  source: 4          # +1 bpw
                - key: "*.self_attn.o_proj*"
                  source: 4          # +1 bpw
                # - key: "*.mlp.down_proj*"
                #   source: 4          # +1 bpw
              
                #  This would force the whole first layer to 4bpw
                # - key: "model.layers.0.*"
                #   source: 4
              

              What this example overrides.yml does is force the more sensitive attention layers to use 4bpw quantization (plucking them from the 4bpw quantization you downloaded), and everything else (namely the mlp layers) to use 3bpw. This should end up around ~3.2bpw or so. You can make it larger by uncommenting the mlp down layer (which is the next most sensitive layer), or make it smaller by commenting out the q_proj layer (with the kv layers being the most sensitive, and relatively tiny).

              This seems convoluted, yep. But it has advantages:

              • It targets the ‘sensitive’ layers more accurately, whereas exllamav3 more randomly changes the quantization of layers to hit a specified bpw target (as it can only use integer quantizations).

              • It can be faster. If you can find 3bpw and 4bpw exl3s of the model you want to try, you can just download them and recombine them: no actual quantization needed, and no need to download the 50GB raw weights. convert.py takes a few hours to run, while util/recompile.py takes seconds.
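
              For example, grabbing an existing quant to mix from is just a download (a sketch using huggingface_hub - the repo below is the 4bpw Harbinger quant linked above; point a second call at whatever 3bpw upload exists, or make your own with convert.py):

                # Sketch: fetch an existing exl3 quantization to recombine locally.
                # Repo ID is the 4bpw Harbinger quant linked earlier; adjust paths to taste.
                from huggingface_hub import snapshot_download

                snapshot_download(
                    repo_id="MetaphoricalCode/Harbinger-24B-exl3-4bpw-hb6",
                    local_dir="models/Harbinger-24B-exl3-4bpw",
                )
                # If a matching 3bpw upload exists, download it the same way;
                # otherwise make one with convert.py first.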


              …And why go to all this hassle, you ask?

              Because exl3s let you stuff in a much better model, with less loss, than anything you’d find on ollama:

              [chart from the exllamav3 docs: quantization error vs. bits per weight across formats]

              https://github.com/turboderp-org/exllamav3/blob/d8167b0cf4491baeae7705c0dfec7f131f02aad4/doc/exl3.md

              You can cram a 24-billion-parameter model into the 11 GB you have free, with minimal loss and no CPU offloading, whereas with ollama (and their unoptimized GGUFs/context quantization) you’d either need a Q4/Q5 of a much dumber 12B model, or a Q3/Q2 of a 24B that will spit out gibberish, or make the model glacially slow by offloading half of it to system RAM.

              And it takes better advantage of your 3080 Ti’s architecture.


              There are other ways to get really good quantization (like with ik_llama.cpp), but for dense models, I love exllamav3.

              Also, this whole field moves fast. Exllamav3 is like 5 months old, and this ‘manual’ quantization scheme was only tested a few days ago.

              • Killer_Tree@sh.itjust.works · 1 point · 2 days ago

                Once again, thank you so much for sharing your knowledge! It looks like I have some weekend projects to look forward to.

                • brucethemoose@lemmy.world · 1 point · 2 days ago

                  Yep! Just PM/reply or something for any help/requests, maybe more than once (as sometimes I miss them, and sometimes Lemmy doesn’t send notifications for replies).

          • gsdsam@lemmy.dbzer0.com · 1 point · 2 days ago

            Did you play a specific system? I’ve been curious about playing Cyberpunk RED with AI for a bit, but most online options seem to be 5e-based, so I’m curious whether you can teach these models other systems and settings. That would be awesome.

            • brucethemoose@lemmy.world · 3 points · 2 days ago (edited)

              > Did you play a specific system?

              Honestly I don’t use them for much RP these days, mostly novel-style writing instead :P.

              > most online options

              ‘Online’ systems are probably taking bone-stock LLMs with 5e rules banged into the system prompt anyway. You could do the same thing with a local UI (like Kobold, Open WebUI, or mikupad - take your pick).

              > I’m curious if you can teach these other systems and settings, that would be awesome.

              Theoretically? You could collect some text from completed Cyberpunk RED games and finetune a model.

              Or maybe use constrained sampling to help it format certain answers, which would be much easier.

              But honestly I would just try some ‘strong’ models and see if they follow the rules you paste into the system prompt, unless you want to dump a ton of time (and some cash) down the finetuning rabbit hole.
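
              If you go the system-prompt route, tabbyAPI speaks the OpenAI-compatible chat API, so a first experiment can be as small as this sketch (the URL, key, and model name are placeholders for whatever your local setup uses):

                # Minimal sketch: send house rules as the system prompt to a local
                # OpenAI-compatible server (e.g. tabbyAPI). URL, key, and model name
                # below are placeholders, not anything the server requires verbatim.
                from openai import OpenAI

                client = OpenAI(base_url="http://localhost:5000/v1", api_key="local")
                rules = open("cyberpunk_red_rules.md").read()  # your condensed ruleset

                resp = client.chat.completions.create(
                    model="local-model",  # placeholder name
                    messages=[
                        {"role": "system",
                         "content": f"You are the GM. Follow these rules strictly:\n{rules}"},
                        {"role": "user",
                         "content": "Start a short Cyberpunk RED one-shot for two players."},
                    ],
                )
                print(resp.choices[0].message.content)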

        • brucethemoose@lemmy.world · 3 points · 3 days ago

          Oh, also, I can just host any of these on the AI Horde for a bit if you want to try them out, via the KoboldAI Lite or Agnaistic web apps. Again, just lemme know.