Topic: parameter-efficient-fine-tuning (Goto Github)
Something interesting about parameter-efficient fine-tuning
parameter-efficient-fine-tuning,Fine-tuning Mistral-7B with PEFT (Parameter-Efficient Fine-Tuning) and LoRA (Low-Rank Adaptation) on the Puffin dataset (multi-turn conversations between GPT-4 and real humans)
User: alinourian
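Several of the repositories in this list apply the same core idea, so a brief illustration may help. The following is a minimal numpy sketch of LoRA (not code from any repository above): the pretrained weight `W0` is frozen, and only a low-rank update `B @ A` is trained, scaled by `alpha / r`. The dimensions here are toy values; a real Mistral-7B attention projection is far larger.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen pretrained weight (toy size; real projections are e.g. 4096x4096).
d_out, d_in, r, alpha = 64, 64, 8, 16
W0 = rng.normal(size=(d_out, d_in))

# Trainable low-rank factors. B starts at zero, so the adapted layer
# initially behaves exactly like the frozen base layer.
A = rng.normal(scale=0.01, size=(r, d_in))
B = np.zeros((d_out, r))

def lora_forward(x):
    # y = W0 x + (alpha / r) * B A x -- only A and B receive gradients.
    return W0 @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
assert np.allclose(lora_forward(x), W0 @ x)  # identity update at init

full_params = W0.size
lora_params = A.size + B.size
print(f"trainable fraction: {lora_params / full_params:.3f}")
# prints: trainable fraction: 0.250
```

With realistic layer sizes the trainable fraction drops well below 1%, which is what makes fine-tuning 7B-scale models feasible on a single GPU.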
parameter-efficient-fine-tuning,Parameter Efficient Fine-Tuning for CLIP
User: andy-lzh
parameter-efficient-fine-tuning,Exploring the potential of fine-tuning Large Language Models (LLMs) like Llama2 and StableLM for medical entity extraction. This project focuses on adapting these models using PEFT, Adapter V2, and LoRA techniques to efficiently and accurately extract drug names and adverse side-effects from pharmaceutical texts
User: architkaila
parameter-efficient-fine-tuning,[ICRA 2024] Official Implementation of the Paper "Parameter-efficient Prompt Learning for 3D Point Cloud Understanding"
User: auniquesun
Home Page: https://arxiv.org/abs/2402.15823
parameter-efficient-fine-tuning,A generalized framework for subspace tuning methods in parameter efficient fine-tuning.
User: chongjie-si
parameter-efficient-fine-tuning,
User: cityuhkai
parameter-efficient-fine-tuning,A Production-Ready, Scalable RAG-powered LLM-based Context-Aware QA App
User: fork123aniket
parameter-efficient-fine-tuning,Code for the EACL 2024 paper: "Small Language Models Improve Giants by Rewriting Their Outputs"
User: georgevern
parameter-efficient-fine-tuning,This repository contains a project whose goal is to find a new parameter-efficient fine-tuning framework that improves the performance of deep artificial neural networks on out-of-distribution (OOD) data, framed here as a multi-task learning problem.
User: giuseppedipoce
parameter-efficient-fine-tuning,CorDA: Context-Oriented Decomposition Adaptation of Large Language Models
User: iboing
Home Page: https://arxiv.org/pdf/2406.05223
parameter-efficient-fine-tuning,Fine-Tuned LLM-Based FAQ Generation for University Admissions: a project involving the fine-tuning of state-of-the-art language models, including LLaMA-3 8B, LLaMA-2 7B, Mistral 7B, T5, and BART, leveraging QLoRA-based PEFT.
User: kayvanshah1
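QLoRA, mentioned in the entry above, combines LoRA adapters with a 4-bit quantized frozen base model. As a hedged illustration of the quantization half of that idea, here is a simple blockwise absmax 4-bit scheme in numpy. Note that real QLoRA uses the NF4 data type with double quantization, not this linear scheme; this sketch only shows the memory/accuracy trade-off in miniature.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(256,)).astype(np.float32)  # stand-in for frozen weights

def quantize_4bit(w, block=64):
    # Blockwise absmax quantization to signed 4-bit integers in [-7, 7].
    # One float scale is stored per block of `block` weights.
    w = w.reshape(-1, block)
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
    q = np.clip(np.round(w / scale), -7, 7).astype(np.int8)
    return q, scale

def dequantize_4bit(q, scale):
    # Recover approximate float weights for use in the forward pass.
    return (q.astype(np.float32) * scale).reshape(-1)

q, scale = quantize_4bit(W)
W_hat = dequantize_4bit(q, scale)
err = np.abs(W - W_hat).max()
print(f"max abs reconstruction error: {err:.4f}")
```

The quantized base stays frozen; only the (higher-precision) LoRA factors are trained, so gradient memory stays small even though the base model occupies roughly a quarter of its fp16 footprint.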
parameter-efficient-fine-tuning,[SIGIR'24] The official implementation code of MOELoRA.
User: liuqidong07
Home Page: https://arxiv.org/abs/2310.18339
parameter-efficient-fine-tuning,[ECCV 2024] - Improving Zero-shot Generalization of Learned Prompts via Unsupervised Knowledge Distillation
Organization: miccunifi
Home Page: https://arxiv.org/abs/2407.03056
parameter-efficient-fine-tuning,[ICML2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation
Organization: nvlabs
Home Page: https://arxiv.org/abs/2402.09353
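DoRA, from the entry above, decomposes each weight matrix into a magnitude vector and a direction, and applies the LoRA update only to the direction. A minimal numpy sketch of that decomposition (an illustration, not the official NVlabs implementation) follows; `m` holds per-column magnitudes and the column-normalized `W0 + BA` supplies the direction.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 32, 32, 4
W0 = rng.normal(size=(d_out, d_in))

# Trainable magnitude vector, initialized to the column norms of W0.
m = np.linalg.norm(W0, axis=0)

# LoRA factors for the directional update; B = 0 at initialization.
A = rng.normal(scale=0.01, size=(r, d_in))
B = np.zeros((d_out, r))

def dora_weight():
    V = W0 + B @ A                      # directional component
    V_norm = np.linalg.norm(V, axis=0)  # column-wise norms
    return m * (V / V_norm)             # rescale each column to magnitude m

# At initialization the decomposition reconstructs W0 exactly.
assert np.allclose(dora_weight(), W0)
```

In practice the Hugging Face `peft` library exposes this technique via `use_dora=True` on `LoraConfig` (in recent versions), so the decomposition rarely needs to be written by hand.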
parameter-efficient-fine-tuning,A paper list covering large multi-modality models, parameter-efficient fine-tuning, vision-language pretraining, and conventional image-text matching, for preliminary insight.
User: paranioar
parameter-efficient-fine-tuning,[ECCV2024] The code of "SHERL: Synthesizing High Accuracy and Efficient Memory for Resource-Limited Transfer Learning"
User: paranioar
parameter-efficient-fine-tuning,[CVPR2024] The code of "UniPT: Universal Parallel Tuning for Transfer Learning with Efficient Parameter and Memory"
User: paranioar
parameter-efficient-fine-tuning,[WACV 2024] MACP: Efficient Model Adaptation for Cooperative Perception.
Organization: purduedigitaltwin
Home Page: https://purduedigitaltwin.github.io/MACP/
parameter-efficient-fine-tuning,My lab work for the “Generative AI with Large Language Models” course offered by DeepLearning.AI and Amazon Web Services on Coursera.
User: qiqinyi
parameter-efficient-fine-tuning,A framework to optimize Parameter-Efficient Fine-Tuning for Fairness in Medical Image Analysis
User: raman1121
Home Page: https://arxiv.org/abs/2310.05055
parameter-efficient-fine-tuning,This repository contains the lab work for the Coursera course “Generative AI with Large Language Models”.
User: rochitasundar
Home Page: https://www.coursera.org/account/accomplishments/certificate/8JAYVEUAQF56
parameter-efficient-fine-tuning,Collection of awesome parameter-efficient fine-tuning resources.
User: synbol
parameter-efficient-fine-tuning,This is the official repository of the papers "Parameter-Efficient Transfer Learning of Audio Spectrogram Transformers" and "Efficient Fine-tuning of Audio Spectrogram Transformers via Soft Mixture of Adapters".
User: umbertocappellazzo
Home Page: https://arxiv.org/abs/2312.03694
parameter-efficient-fine-tuning,[ICLR 2024] This is the repository for the paper titled "DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning"
User: zhengxiangshi
Home Page: http://arxiv.org/abs/2309.05173
parameter-efficient-fine-tuning,[ICCV 2023 oral] This is the official repository for our paper: "Sensitivity-Aware Visual Parameter-Efficient Fine-Tuning".
Organization: ziplab