Yarn

Recent twts in reply to #7ynlnva

Tabby — This is actually pretty cool and useful. Just tried this on my Mac locally, of course, and it seems to have quite good utility. What would be interesting for me would be to train it on my own code and many projects 😅


The amazing thing I find with everything I ever try that isn’t OpenAI’s ChatGPT is that it’s all pretty useless at small scale. What do I mean by this? Well, in trying a few different models and tools that you can run locally (not connected to any “cloud” service), the results are pretty underwhelming. For example, with Tabby connected to VSCode (locally), you get “okay” results for pretty trivial shit. The moment you try to do anything interesting at all, you either get complete garbage code or copyright headers as suggestions.

I just don’t have time or energy to waste hours of a weekend on basically what amounts to statistical probabilistic models trained on complete garbage that generates or predicts in pretty awful ways.

I even spent a few hours today learning about a statistical model used for natural language processing (NLP) called n-grams (or ngrams), and ended up with pretty abysmal results. It all depends on the quality of the dataset, and then it can only predict what it has seen.
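For anyone curious why the results are so abysmal, here is a minimal sketch of what an n-gram model actually does (a toy bigram model; function names are illustrative, not from any particular library). It just counts which token follows which context in the training text, so any context it never saw yields nothing:

```python
from collections import Counter, defaultdict

def train_ngrams(tokens, n=2):
    """Count n-gram continuations: (n-1)-token context -> next-token counts."""
    model = defaultdict(Counter)
    for i in range(len(tokens) - n + 1):
        context = tuple(tokens[i:i + n - 1])
        model[context][tokens[i + n - 1]] += 1
    return model

def predict(model, context):
    """Return the most frequent continuation seen for this context, or None."""
    counts = model.get(tuple(context))
    return counts.most_common(1)[0][0] if counts else None

corpus = "the cat sat on the mat the cat ate the fish".split()
model = train_ngrams(corpus, n=2)
print(predict(model, ["the"]))     # "cat": seen twice after "the", beating "mat" and "fish"
print(predict(model, ["dog"]))     # None: the model cannot predict what it has never seen
```

That last line is the whole problem in miniature: prediction quality is bounded entirely by what was in the dataset.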


@xuu Yeah, that’s the problem I have really. Unless I can easily train the LLM on my own dataset(s) so I can autocomplete things I’ve done before and repeat the same or similar patterns, this whole thing is just not worth it for me, because it’s basically just “dumb”.


@prologic@twtxt.net The hackathon project that I did recently used OpenAI and embedded the response info into the prompt. So basically I would search for the top 3 most relevant search results to feed into the prompt, and the AI would summarize them to answer the question.
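That retrieval-into-prompt pattern can be sketched in a few lines. This is a toy version under stated assumptions: the word-overlap `score` stands in for whatever embedding similarity the real project used, and all names (`build_prompt`, the sample docs) are illustrative, not from the actual hackathon code:

```python
import re

def tokenize(text):
    """Lowercase word set, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def score(query, doc):
    """Crude relevance: shared-word count (a stand-in for embedding similarity)."""
    return len(tokenize(query) & tokenize(doc))

def build_prompt(query, docs, top_k=3):
    """Embed the top-k most relevant documents into the prompt as context."""
    ranked = sorted(docs, key=lambda d: score(query, d), reverse=True)[:top_k]
    context = "\n".join(f"- {d}" for d in ranked)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Tabby is a self-hosted AI coding assistant.",
    "Yarn is a decentralised microblogging platform.",
    "n-gram models predict the next token from counts.",
    "The weather today is sunny.",
]
prompt = build_prompt("What is Tabby?", docs, top_k=3)
# prompt now carries the three most relevant docs; send it to whichever LLM you use.
```

The interesting part is that the model never needs to be trained on your data: relevant snippets are found at query time and pasted into the prompt, and the LLM only summarizes them.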

