research idea pad
here are some ideas i or someone else can check out
- preliminary experiments by a friend suggest that vicuna-13b-free follows the initial prompt better than vicuna-13b 1.0. is this purely because of the removed ethical restraints? were both models trained on the exact same dataset? how were they trained: is the free edition a finetune of 1.0, or something else?
  - resources required: a gpu capable of running vicuna-13b-free (a 3060 12gb fits with 4-bit quantization)
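a minimal sketch of how the prompt-adherence comparison could be measured, assuming the two `generate_*` functions below get replaced with real calls to the respective models (here they are placeholder stubs), and using a toy "answer in one word" rule as the adherence check:

```python
# Sketch of a prompt-adherence comparison harness.
# generate_free / generate_v1 are HYPOTHETICAL stubs standing in for
# real calls to vicuna-13b-free and vicuna-13b 1.0.

def generate_free(prompt: str) -> str:
    return "yes"  # placeholder: replace with a real model call

def generate_v1(prompt: str) -> str:
    return "yes, I can certainly help with that"  # placeholder

def follows_one_word_rule(output: str) -> bool:
    """Toy adherence check: did the model answer in a single word?"""
    return len(output.split()) == 1

def win_rates(prompts):
    """Fraction of prompts each model answered within the constraint."""
    free_ok = sum(follows_one_word_rule(generate_free(p)) for p in prompts)
    v1_ok = sum(follows_one_word_rule(generate_v1(p)) for p in prompts)
    n = len(prompts)
    return free_ok / n, v1_ok / n

prompts = ["Answer in exactly one word: is water wet?"]
print(win_rates(prompts))
```

a real run would want many prompts with mechanically checkable constraints (word count, format, forbidden words), so the "judge" stays a simple function rather than another llm.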
- decentralized model distribution projects
  - torrents are a good base to build these on
- never really warmed up to IPFS
  - the ggml file format is still unstable (breaking format changes land regularly), so don't publish ggml files as torrents imo; the torrent outlives the format
- resources required: write code
- unanswered question: how to index these torrents?
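a stdlib-only sketch of the torrent side: bencoding and single-file metainfo generation, which also yields the infohash. the infohash is one candidate answer to the indexing question, since it is a stable content-derived key an index could be built around. piece length and field names follow the BitTorrent v1 metainfo layout; tracker/DHT details are left out.

```python
import hashlib
import os

def bencode(x):
    """Minimal bencoder for ints, bytes, str, lists and dicts."""
    if isinstance(x, int):
        return b"i%de" % x
    if isinstance(x, bytes):
        return b"%d:%s" % (len(x), x)
    if isinstance(x, str):
        return bencode(x.encode())
    if isinstance(x, list):
        return b"l" + b"".join(bencode(i) for i in x) + b"e"
    if isinstance(x, dict):
        # spec requires keys as byte strings, sorted
        items = sorted((k.encode() if isinstance(k, str) else k, v)
                       for k, v in x.items())
        return b"d" + b"".join(bencode(k) + bencode(v)
                               for k, v in items) + b"e"
    raise TypeError(f"cannot bencode {type(x)}")

def make_torrent(path, piece_len=256 * 1024):
    """Build single-file metainfo bytes and the hex infohash for `path`."""
    pieces = b""
    size = 0
    with open(path, "rb") as f:
        while chunk := f.read(piece_len):
            pieces += hashlib.sha1(chunk).digest()  # one sha1 per piece
            size += len(chunk)
    info = {"name": os.path.basename(path), "length": size,
            "piece length": piece_len, "pieces": pieces}
    infohash = hashlib.sha1(bencode(info)).hexdigest()
    return bencode({"info": info}), infohash
```

the infohash doubles as a magnet-link identifier, so an index could be as small as a signed list of (model name, version, infohash) rows mirrored over the same swarm.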
- how to train models in a distributed manner across random gpus around the world? (2023-06-22)
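a toy, pure-python sketch of the core idea behind data-parallel training over unreliable volunteer workers: each worker computes a gradient on its own data shard, and a coordinator averages whatever gradients survive worker dropout (stragglers are simply skipped that step). everything here is made up for illustration; real systems in this space (e.g. Hivemind) add compression, fault-tolerant averaging, and peer discovery.

```python
import random

def local_gradient(w, shard):
    """Toy MSE gradient for the 1-d model y ~ w * x on one worker's shard."""
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def train_step(w, shards, lr=0.01, dropout_p=0.3, rng=random.random):
    """One step: collect gradients from workers that didn't drop out,
    average them, and apply the update. If all workers vanished, skip."""
    grads = [local_gradient(w, s) for s in shards
             if rng() > dropout_p]  # simulate flaky volunteer gpus
    if not grads:
        return w
    return w - lr * sum(grads) / len(grads)
```

the interesting open problems are exactly the parts this skips: verifying that untrusted workers computed honest gradients, and tolerating both dropout and byzantine peers at once.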