I am a teacher and I have a LOT of different literature material that I wish to study, and play around with.

I wish to have a self-hosted, reasonably smart LLM into which I can feed all the textual material I have generated over the years. I would be interested to see whether this model can answer some of the subjective questions I have set on my exams, or write short paragraphs about the topics I teach.

In terms of hardware, I have an old Lenovo laptop with an NVIDIA graphics card.

P.S.: I am not technically very experienced. I run Linux and can do very basic stuff. I've never self-hosted anything other than LibreTranslate and a Pi-hole!

  • skrufimonki@lemmynsfw.com · 1 month ago

    While you can run an LLM on an “old” laptop with an NVIDIA graphics card, it will likely be really slow. Like several minutes to much, much longer per answer. Huggingface.co is a good place to start and has a ton of different LLMs to choose from, ranging from ones small enough to run on your hardware to ones that won’t fit at all.

    As a teacher, you know that research is going to be vital to understanding and implementing this project. There is a plethora of information out there, and no single person’s answer will work perfectly for your wants and your hardware.

    When you have figured out your plan and then run into issues, that’s a good point to ask questions with more information about your situation.

    I say this because I just went through this myself. Not to be an ass.

  • OpticalMoose@discuss.tchncs.de · 1 month ago

    Probably better to ask on [email protected]. Ollama should be able to give you a decent LLM, and RAG (Retrieval Augmented Generation) will let it reference your dataset.
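    To make the RAG idea concrete, here’s a toy sketch of the retrieval half in plain Python (no Ollama required): score your stored notes against a question and prepend the best matches to the prompt. Real RAG setups use vector embeddings and a proper store; this just uses keyword overlap to show the shape of it, and the example notes are made up.

    ```python
    # Toy retrieval step for RAG: rank stored passages by word overlap
    # with the question, then prepend the best ones to the prompt.
    # (Real pipelines use embeddings; this only illustrates the idea.)

    def score(question, passage):
        q = set(question.lower().split())
        p = set(passage.lower().split())
        return len(q & p)  # number of shared words

    def retrieve(question, passages, k=2):
        # Highest-overlap passages first
        ranked = sorted(passages, key=lambda p: score(question, p), reverse=True)
        return ranked[:k]

    notes = [
        "Romanticism emphasised emotion and nature over Enlightenment reason.",
        "The sonnet is a fourteen-line poem with a fixed rhyme scheme.",
        "Victorian novels often examined class and industrialisation.",
    ]

    question = "What is a sonnet and how many lines does it have?"
    context = retrieve(question, notes)
    prompt = "Answer using these notes:\n" + "\n".join(context) + "\n\nQ: " + question
    print(prompt)
    ```

    The assembled `prompt` is what would get handed to the local model, so it answers from your material instead of only its training data.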

    The only issue is that you asked for a smart model, which usually means a larger one, and the RAG portion consumes even more memory, which may be more than a typical laptop can handle. Smaller models also have a higher tendency to hallucinate, i.e. produce incorrect answers.

    Short answer - yes, you can do it. It’s just a matter of how much RAM you have available and how long you’re willing to wait for an answer.