
Random Idea for Future Features: G-LoRA on NNTrainer. #2496

Open
myungjoo opened this issue Mar 5, 2024 · 1 comment

Comments

@myungjoo
Member

myungjoo commented Mar 5, 2024

Generalized LoRA: LoRA for smaller models (in contrast to large models like LLMs/LVMs/...), enabling their near-real-time adaptation.

Rather than hand-selecting which layers to make adaptive, we may apply generalized LoRA uniformly to conventional models with nntrainer, which makes the layer-selection process "unsupervised".
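
A minimal sketch of that idea, assuming a framework-agnostic view where a model is just a dict of named 2-D weight matrices (nothing here is NNTrainer API; `make_adaptor`, `adapt_all_layers`, and `rank_fraction` are hypothetical names for illustration):

```python
import numpy as np

def make_adaptor(weight, rank):
    """Frozen base weight plus trainable low-rank factors A and B."""
    out_dim, in_dim = weight.shape
    A = np.random.randn(out_dim, rank) * 0.01  # trainable
    B = np.zeros((rank, in_dim))               # zero-init: A @ B = 0 at start
    return {"base": weight, "A": A, "B": B}

def adapt_all_layers(model, rank_fraction=0.2):
    """Wrap every layer uniformly -- no supervised layer selection."""
    adapted = {}
    for name, W in model.items():
        rank = max(1, int(min(W.shape) * rank_fraction))  # mid-rank, e.g. 1/5
        adapted[name] = make_adaptor(W, rank)
    return adapted

def effective_weight(adaptor):
    """Effective weight at inference time: W + A @ B."""
    return adaptor["base"] + adaptor["A"] @ adaptor["B"]
```

Because B is zero-initialized, the adapted model starts out identical to the base model, and only the small A/B factors need to be trained and shipped per user.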

In any case, G-LoRA can behave as a "MiRA" (mid-rank rather than low-rank adaptation: reducing the rank to 1/5 ~ 1/10 of the original, not 1/1000 ~ 1/10000).
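
To make the ratio concrete, a back-of-envelope comparison (the layer sizes below are illustrative assumptions, not numbers from this issue): a mid-rank adaptor on a small 256x256 layer versus a classic low-rank adaptor on an LLM-scale 4096x4096 layer.

```python
def lora_params(out_dim, in_dim, rank):
    """Trainable parameters of a rank-r adaptor: A (out x r) plus B (r x in)."""
    return rank * (out_dim + in_dim)

# Small on-device layer, mid-rank: rank 32 is 1/8 of the full rank 256.
print(lora_params(256, 256, 32))    # 16384 trainable vs 65536 full params
# LLM-scale layer, classic LoRA: rank 4 is ~1/1000 of the full rank 4096.
print(lora_params(4096, 4096, 4))   # 32768 trainable vs 16777216 full params
```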

For convolution layers, we can view the whole layer as a 2-dimensional matrix (in fact, most frameworks already store the weights in a single contiguous memory buffer as a 1-dimensional vector, which maps easily onto a 2-D matrix) and apply LoRA to it as a personalization adaptor.
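
A sketch of that flattening, assuming NumPy and a standard (out_ch, in_ch, kh, kw) weight layout (the layout and all variable names are assumptions for illustration, not NNTrainer internals):

```python
import numpy as np

# Hypothetical 4-D convolution weight: 64 output channels, 32 input
# channels, 3x3 kernels.
out_ch, in_ch, kh, kw = 64, 32, 3, 3
conv_w = np.random.randn(out_ch, in_ch, kh, kw).astype(np.float32)

# View the contiguous buffer as (out_ch) x (in_ch * kh * kw), no copy needed.
W2d = conv_w.reshape(out_ch, in_ch * kh * kw)

# Mid-rank personalization adaptor: W' = W + A @ B.
rank = out_ch // 8  # e.g. 1/8 of the output channels
A = (np.random.randn(out_ch, rank) * 0.01).astype(np.float32)
B = np.zeros((rank, in_ch * kh * kw), dtype=np.float32)

# Apply the delta in 2-D, then restore the original convolution shape.
adapted = (W2d + A @ B).reshape(out_ch, in_ch, kh, kw)
```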

We also need to start thinking about how to package, version, integrate, and deploy such adaptors for devices. This raises additional on-device MLOps issues, too.

@taos-ci

taos-ci commented Mar 5, 2024

:octocat: cibot: Thank you for posting issue #2496. The person in charge will reply soon.
