Topic: pretraining (Goto Github)
Something interesting about pretraining
pretraining,Official implementation of ICCV 2023 paper "3D-VisTA: Pre-trained Transformer for 3D Vision and Text Alignment"
User: 3d-vista
Home Page: https://3d-vista.github.io
pretraining,Benchmarking framework for protein representation learning. Includes a large number of pre-training and downstream task datasets, models and training/task utilities. (ICLR 2024)
User: a-r-j
Home Page: https://proteins.sh/
pretraining,OpenAI GPT-2 pre-training and sequence prediction implementation in TensorFlow 2.0
User: akanyaani
pretraining,Official PyTorch implementation of the paper "ImageNet-21K Pretraining for the Masses" (NeurIPS 2021)
Organization: alibaba-miil
pretraining,BigDetection: A Large-scale Benchmark for Improved Object Detector Pre-training
Organization: amazon-science
pretraining,MixGen: A New Multi-Modal Data Augmentation
Organization: amazon-science
pretraining,Paper list for recommender-system pre-trained models
User: archersama
pretraining,Official code for "Self-Supervised Learning by Estimating Twin Class Distribution" (a brief loss sketch follows this entry)
Organization: bytedance
Home Page: https://arxiv.org/abs/2110.07402
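For orientation, a rough sketch of the twin-class-distribution idea as the paper describes it: two augmented views of each image are mapped to softmax class distributions, the twin distributions are pulled toward each other, per-sample predictions are kept sharp, and the batch-averaged distribution is kept diverse. All names and coefficients below are illustrative assumptions, not the repo's API.

```python
import torch
import torch.nn.functional as F

def twist_loss(logits_a, logits_b, eps=1e-8):
    # Softmax "class" distributions for two augmented views of the same batch.
    p_a, p_b = F.softmax(logits_a, dim=-1), F.softmax(logits_b, dim=-1)
    # Consistency: each view should match the other's class distribution.
    consistency = (-(p_a * (p_b + eps).log()).sum(-1).mean()
                   - (p_b * (p_a + eps).log()).sum(-1).mean()) / 2
    # Sharpness: low per-sample entropy, i.e. confident assignments.
    sharpness = -(p_a * (p_a + eps).log()).sum(-1).mean()
    # Diversity: high entropy of the batch-mean distribution, to avoid collapse.
    mean_p = p_a.mean(0)
    diversity = (mean_p * (mean_p + eps).log()).sum()
    return consistency + sharpness + diversity

print(float(twist_loss(torch.randn(16, 128), torch.randn(16, 128))))
```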
pretraining,Papers about pretraining and self-supervised learning on Graph Neural Networks (GNN).
User: chandlerbang
pretraining,Pre-training Molecular Graph Representation with 3D Geometry, ICLR'22 (https://openreview.net/forum?id=xQUe1pOKPam)
User: chao1224
Home Page: https://chao1224.github.io/GraphMVP
pretraining,Multi-modal Molecule Structure-text Model for Text-based Editing and Retrieval, Nat Mach Intell 2023 (https://www.nature.com/articles/s42256-023-00759-6)
User: chao1224
Home Page: https://chao1224.github.io/MoleculeSTM
pretraining,AAAI-20 paper: Cross-Lingual Natural Language Generation via Pre-Training
User: czwin32768
Home Page: https://arxiv.org/abs/1909.10481
pretraining,Official Repository for the Uni-Mol Series Methods
Organization: dptech-corp
pretraining,Universal User Representation Pre-training for Cross-domain Recommendation and User Profiling
User: fajieyuan
pretraining,[EMNLP'21] Visual News: Benchmark and Challenges in News Image Captioning
User: fuxiaoliu
pretraining,Transformer with Untied Positional Encoding (TUPE). Code for the paper "Rethinking Positional Encoding in Language Pre-training"; improves existing models like BERT (a brief sketch follows this entry).
User: guolinke
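A minimal sketch of the "untied" attention idea, assuming the formulation described in the paper: content-to-content and position-to-position correlations get separate projections and are summed as attention scores, instead of adding position embeddings to word embeddings before a single attention. Shapes and names are illustrative, not the repo's API.

```python
import torch
import torch.nn as nn

class UntiedAttentionScores(nn.Module):
    def __init__(self, d_model: int, max_len: int = 512):
        super().__init__()
        self.scale = (2 * d_model) ** -0.5          # rescale the summed scores
        self.q_word = nn.Linear(d_model, d_model)   # content projections
        self.k_word = nn.Linear(d_model, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        self.q_pos = nn.Linear(d_model, d_model)    # separate positional projections
        self.k_pos = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model) word representations, no positions added.
        seq_len = x.size(1)
        pos = self.pos_emb(torch.arange(seq_len, device=x.device))          # (s, d)
        content = self.q_word(x) @ self.k_word(x).transpose(-1, -2)         # (b, s, s)
        positional = self.q_pos(pos) @ self.k_pos(pos).transpose(-1, -2)    # (s, s)
        return (content + positional) * self.scale  # feed into softmax as usual

scores = UntiedAttentionScores(64)(torch.randn(2, 16, 64))
print(scores.shape)  # torch.Size([2, 16, 16])
```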
pretraining,PyTorch code for "Unifying Vision-and-Language Tasks via Text Generation" (ICML 2021)
User: j-min
Home Page: https://arxiv.org/abs/2102.02779
pretraining,[ICLR'23 Spotlight🔥] The first successful BERT/MAE-style pretraining on any convolutional network; PyTorch implementation of "Designing BERT for Convolutional Networks: Sparse and Hierarchical Masked Modeling" (a brief sketch follows this entry)
User: keyu-tian
Home Page: https://arxiv.org/abs/2301.03580
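A toy sketch of BERT/MAE-style masked image modeling on a plain conv encoder: mask random patches, encode the visible content, reconstruct pixels, and take the loss only on masked regions. Note this is assumption-laden: the real method uses sparse convolutions so masked regions are skipped entirely, whereas this sketch merely zeroes them out, and every name is illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def random_patch_mask(b, h, w, patch=8, mask_ratio=0.6, device="cpu"):
    # Boolean keep-mask at patch granularity, upsampled to pixel resolution.
    ph, pw = h // patch, w // patch
    keep = torch.rand(b, 1, ph, pw, device=device) > mask_ratio
    return keep.float().repeat_interleave(patch, 2).repeat_interleave(patch, 3)

encoder = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
decoder = nn.Conv2d(32, 3, 1)  # lightweight pixel-reconstruction head

images = torch.randn(4, 3, 32, 32)
visible = random_patch_mask(4, 32, 32)        # 1 = visible, 0 = masked
recon = decoder(encoder(images * visible))    # encode visible content only
masked = 1.0 - visible
# Reconstruction loss averaged over masked pixels only.
loss = (F.mse_loss(recon, images, reduction="none") * masked).sum() / masked.sum().clamp(min=1)
loss.backward()
print(float(loss))
```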
pretraining,From the vocabulary to fine-tuning: this is all you need
User: kingtle
pretraining,Research code for EMNLP 2020 paper "HERO: Hierarchical Encoder for Video+Language Omni-representation Pre-training"
User: linjieli222
Home Page: https://arxiv.org/abs/2005.00200
pretraining,[ECCV 2022] Learning to Drive by Watching YouTube Videos: Action-Conditioned Contrastive Policy Pretraining
Organization: metadriverse
pretraining,[NeurIPS 2022] DRAGON 🐲: Deep Bidirectional Language-Knowledge Graph Pretraining
User: michiyasunaga
pretraining,[ACL 2022] LinkBERT: A Knowledgeable Language Model 😎 Pretrained with Document Links
User: michiyasunaga
Home Page: https://arxiv.org/abs/2203.15827
pretraining,End-to-End recipes for pre-training and fine-tuning BERT using Azure Machine Learning Service
Organization: microsoft
pretraining,[NeurIPS 2021] COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining
Organization: microsoft
pretraining,An official implementation of "UniVL: A Unified Video and Language Pre-Training Model for Multimodal Understanding and Generation"
Organization: microsoft
Home Page: https://arxiv.org/abs/2002.06353
pretraining,Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework
Organization: ofa-sys
pretraining,PonderV2: Pave the Way for 3D Foundation Model with A Universal Pre-training Paradigm
Organization: opengvlab
Home Page: https://arxiv.org/abs/2310.08586
pretraining,PaddlePaddle's large-model development suite, providing an end-to-end development toolchain for large language models, cross-modal large models, biocomputing large models, and more.
Organization: paddlepaddle
pretraining,Recent Advances in Vision and Language Pre-training (VLP)
User: phellonchen
pretraining,PITI: Pretraining is All You Need for Image-to-Image Translation
User: piti-synthesis
pretraining,[ICLR 2024🔥] Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment
Organization: pku-yuangroup
Home Page: https://arxiv.org/abs/2310.01852
pretraining,EntitySeg Toolbox: Towards Open-World and High-Quality Image Segmentation
User: qqlu
pretraining,[NeurIPS2022] Egocentric Video-Language Pretraining
Organization: showlab
Home Page: https://arxiv.org/pdf/2206.01670.pdf
pretraining,[ICCV2023] UniVTG: Towards Unified Video-Language Temporal Grounding
Organization: showlab
Home Page: https://arxiv.org/abs/2307.16715
pretraining,Code accompanying the paper Pretraining Language Models with Human Preferences
User: tomekkorbak
Home Page: https://arxiv.org/abs/2302.08582
pretraining,Code for "TCL: Vision-Language Pre-Training with Triple Contrastive Learning" (CVPR 2022); a sketch of the underlying contrastive objective follows this entry
Organization: uta-smile
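For context, a sketch of the standard image-text contrastive (InfoNCE) objective that triple contrastive learning builds on; the paper additionally adds intra-modal contrastive and local mutual-information terms not shown here. All names are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def image_text_contrastive(img_emb, txt_emb, temperature=0.07):
    # img_emb, txt_emb: (batch, dim) embeddings from the two encoders.
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature        # (batch, batch) similarity matrix
    targets = torch.arange(len(logits), device=logits.device)
    # Matched (i, i) pairs are positives; all other batch pairs are negatives.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

print(float(image_text_contrastive(torch.randn(8, 256), torch.randn(8, 256))))
```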
pretraining,Saprot: Protein Language Model with Structural Alphabet
Organization: westlake-repl
pretraining,Revisiting Contrastive Methods for Unsupervised Learning of Visual Representations. [NeurIPS 2021]
User: wvangansbeke
Home Page: https://proceedings.neurips.cc/paper/2021/hash/8757150decbd89b0f5442ca3db4d0e0e-Abstract.html
pretraining,A Chinese Open-Domain Dialogue System
Organization: x-plug
Home Page: https://www.modelscope.cn/studios/damo/role_play_chat/summary
pretraining,mPLUG: Effective and Efficient Vision-Language Learning by Cross-modal Skip-connections. (EMNLP 2022)
Organization: x-plug
Home Page: https://arxiv.org/abs/2205.12005
pretraining,mPLUG-Owl & mPLUG-Owl2: Modularized Multimodal Large Language Model
Organization: x-plug
Home Page: https://www.modelscope.cn/studios/damo/mPLUG-Owl
pretraining,X-modaler is a versatile and high-performance codebase for cross-modal analytics (e.g., image captioning, video captioning, vision-language pre-training, visual question answering, visual commonsense reasoning, and cross-modal retrieval).
User: yehli
pretraining,Recent Advances in Vision and Language PreTrained Models (VL-PTMs)
User: yuewang-cuhk
pretraining,[NAACL'22] TaCL: Improving BERT Pre-training with Token-aware Contrastive Learning
User: yxuansu
Home Page: https://arxiv.org/abs/2111.04198
pretraining,Research Code for NeurIPS 2020 Spotlight paper "Large-Scale Adversarial Training for Vision-and-Language Representation Learning": UNITER adversarial training part
User: zhegan27
Home Page: https://arxiv.org/pdf/2006.06195.pdf
pretraining,Collection of training data management explorations for large language models
User: zigew
Home Page: https://arxiv.org/abs/2312.01700
pretraining,PyTorch code for “TVLT: Textless Vision-Language Transformer” (NeurIPS 2022 Oral)
User: zinengtang
pretraining,[ICLR 2022] OntoProtein: Protein Pretraining With Gene Ontology Embedding
Organization: zjunlp