Name: Ege Özgüroğlu
Type: User
Company: Columbia University, Department of Computer Science
Bio: CS PhD Student @ Columbia University, working on Machine Perception
Twitter: EgeOzguroglu
Location: New York, New York
Blog: https://www.cs.columbia.edu/~eo2464/
Ege Özgüroğlu's Projects
Contrastive Language-Image Pretraining
Ege Ozguroglu's github.io Website
Example python project
Marrying Grounding DINO with Segment Anything & Stable Diffusion & Tag2Text & BLIP & Whisper & ChatBot - automatically detect, segment, and generate anything from image, text, and audio inputs
The official implementation of "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection"
Project and dataset webpage:
Code for the paper Learning the Predictability of the Future
A Python library that combines score editing tools with audio output.
SAM with text prompt
[NeurIPS 2021 Spotlight] Official implementation of Long Short-Term Transformer for Online Action Detection
The fundamental package for scientific computing with Python.
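The line above is NumPy's tagline; a minimal usage sketch of its core feature, n-dimensional arrays with broadcasting (standard NumPy API, nothing project-specific assumed):

```python
import numpy as np

# Broadcasting: a (3, 1) column combines with a (4,) row
# to produce a full (3, 4) grid without explicit loops.
a = np.arange(3).reshape(3, 1)   # shape (3, 1)
b = np.arange(4)                 # shape (4,)
grid = a + b                     # shape (3, 4); grid[i, j] == i + j
```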
Replication of the Principal Odor Map paper by Lee et al. (2022). The model is implemented to integrate with DeepChem.
Flexible and powerful data analysis / manipulation library for Python, providing labeled data structures similar to R data.frame objects, statistical functions, and much more
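The description above is pandas' tagline; a minimal sketch of the labeled data structures it mentions, using only standard pandas API (the column names and values here are illustrative):

```python
import pandas as pd

# A DataFrame is a labeled, R data.frame-like table.
df = pd.DataFrame({
    "city": ["NY", "NY", "LA"],
    "temp": [70, 74, 80],
})

# Built-in statistical functions operate over labeled groups.
means = df.groupby("city")["temp"].mean()
```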
Code for the paper "pix2gestalt: Amodal Segmentation by Synthesizing Wholes"
Visit PixelLib's official documentation https://pixellib.readthedocs.io/en/latest/
Image-to-Image Translation in PyTorch
Segment Anything in High Quality [NeurIPS 2023]
An open-source project for tracking and segmenting any objects in videos, either automatically or interactively. The primary algorithms are the Segment Anything Model (SAM) for key-frame segmentation and Associating Objects with Transformers (AOT) for efficient tracking and propagation.
The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
Official implementation of the paper "Segment Everything Everywhere All at Once"
Simulator of vision-based tactile sensors.
Zero-shot 3D Reconstruction from Touch
Track-Anything is a flexible and interactive tool for video object tracking and segmentation, based on Segment Anything, XMem, and E2FGVI.
Code for the paper "ViperGPT: Visual Inference via Python Execution for Reasoning"
Zero-1-to-3: Zero-shot One Image to 3D Object (ICCV 2023)