
Papers (12)
[Paper Review] How to Train Your Energy-Based Models Author: Y Song Publication: https://arxiv.org/abs/2101.03288 This paper is often recommended as an introduction to diffusion models, so I read it to consolidate what I already knew. Here I briefly summarize the outline of each section. Energy-Based Models (EBMs) Energy function: unnormalized negative log-probability. Density estimation thus reduces to a nonlinear regression problem. Probability density: $p_\theta(x) = {\exp(-E_\theta(x)) \over Z_\theta}$ $E_\theta$ is a nonlinear regression function with parameters $\theta$. Norm..
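As a quick illustration of the excerpt's formula, here is a minimal sketch of how an energy function defines a normalized density once the partition function $Z_\theta$ is approximated numerically. The quadratic energy is a made-up toy choice, not from the paper:

```python
import numpy as np

# Toy energy function (hypothetical choice): E(x) = x^2 / 2.
# The paper's definition: p(x) = exp(-E(x)) / Z, with Z the partition function.
def energy(x):
    return 0.5 * x ** 2

# Approximate Z = integral of exp(-E(x)) dx on a dense grid.
xs = np.linspace(-10, 10, 100001)
unnorm = np.exp(-energy(xs))
Z = np.trapz(unnorm, xs)

density = unnorm / Z  # normalized p(x)

# For this quadratic energy, p is a standard Gaussian, so Z ~ sqrt(2*pi).
print(Z)                      # ≈ 2.5066
print(np.trapz(density, xs))  # ≈ 1.0 (the density integrates to one)
```

The point of the EBM framing is that `energy` can be any nonlinear function of $\theta$; only computing (or avoiding) $Z_\theta$ is hard, which is what the paper's training methods address.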
[Paper Review] Protein Design with Guided Discrete Diffusion (NeurIPS 2023) Paper covered in this post: Protein Design with Guided Discrete Diffusion (2023.05, N Gruver et al.) https://arxiv.org/abs/2305.20009 A similar paper from around the same time: Protein Discovery with Discrete Walk-Jump Sampling (2023.06, N Frey et al.) https://arxiv.org/abs/2306.12360 Worth reading alongside: Plug-and-Play Language Models (PPLM) https://arxiv.org/abs/1912.02164 Code: https://github.com/ngruver/NOS Introduction Protein sequence d..
[Paper Review] Text Classification Using Label Names Only: A Language Model Self-Training Approach (LOTClass, EMNLP 2020) Written by Yu Meng et al. Paper Link: https://arxiv.org/pdf/2010.07245.pdf Summary Motivation: To propose a weakly-supervised text classification model trained only on an unlabeled corpus, without external sources. Contribution: 1) no need for external sources such as Wikipedia; 2) uses category-indicative words and predicts the masked ones; 3) comparable text classification performance compared..
[Short Paper Review] Motivation of ConVIRT Title: Contrastive Learning of Medical Visual Representations from Paired Images and Text Authors: Yuhao Zhang et al. Date: 2 Oct 2020 (arXiv) Venue: Machine Learning for Healthcare (MLHC) 2022 I read ConVIRT because it is cited as the base architecture of CLIP. Text-image pairs: the method assumes each medical image (X-ray, CT, etc.) comes with a paired textual report. Datasets used for pre-training MIMIC-CXR: chest radiographs paired with te..
[Paper Review] Dimensionality Reduction by Learning an Invariant Mapping (CVPR 2006) Author: Raia Hadsell et al. Paper Link: http://yann.lecun.com/exdb/publis/pdf/hadsell-chopra-lecun-06.pdf Concept The Beginning of Contrastive Loss Notation Pairs of samples $\vec X_1, \vec X_2$ Label $Y$ Similar pairs $Y = 0$ Dissimilar pairs $Y = 1$ Energy (L2 norm, Euclidean distance) $D_W$ Loss $L_S, L_D$ Siamese Network $G_W$ Networks sharing parameter $W$ Using $G_W$ when calculating L2 no..
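The notation above can be turned into a small sketch of the contrastive loss, following the paper's convention that $Y = 0$ marks similar pairs and $Y = 1$ dissimilar ones. The margin value here is a hypothetical choice:

```python
import numpy as np

# Sketch of the DrLIM contrastive loss.
# D is the Euclidean distance D_W between the two embeddings G_W(X1), G_W(X2);
# Y = 0 for similar pairs, Y = 1 for dissimilar pairs; m is a margin (toy value).
def contrastive_loss(D, Y, m=1.0):
    similar_term = (1 - Y) * 0.5 * D ** 2                     # L_S: pull similar pairs together
    dissimilar_term = Y * 0.5 * np.maximum(0.0, m - D) ** 2   # L_D: push dissimilar pairs apart, up to margin m
    return similar_term + dissimilar_term

print(contrastive_loss(0.0, 0))  # 0.0 -- similar pair already at distance 0
print(contrastive_loss(0.0, 1))  # 0.5 -- dissimilar pair at distance 0 is penalized
print(contrastive_loss(2.0, 1))  # 0.0 -- already beyond the margin, no penalty
```

The margin is what keeps the dissimilar term from pushing pairs apart indefinitely; only pairs closer than $m$ contribute gradient.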
[NLP Foundational Paper Review #2] Distant Supervision for Relation Extraction without Labeled Data (ACL 2009) Written by Mike Mintz et al. Paper Link: https://aclanthology.org/P09-1113.pdf Summary Motivation To use Freebase to obtain a training set of relations and the entity pairs that participate in those relations Contribution Avoids overfitting and domain dependence by using a large-scale knowledge base Limitation Wrongly labeled relations False negatives Assumption on large data (knowledge base such as F..
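The distant-supervision assumption summarized above, including its wrong-label limitation, can be sketched in a few lines. The triples and sentences are made-up illustrations, not from the paper:

```python
# Distant-supervision assumption: any sentence containing both entities of a
# knowledge-base relation triple is treated as a training example for that relation.
kb = [
    ("Barack Obama", "born_in", "Honolulu"),
    ("Apple", "founded_by", "Steve Jobs"),
]

sentences = [
    "Barack Obama was born in Honolulu , Hawaii .",
    "Steve Jobs introduced the iPhone at an Apple event .",  # gets labeled founded_by: a wrong label
    "Honolulu is the capital of Hawaii .",                   # only one entity: no label
]

def distant_label(kb, sentences):
    examples = []
    for e1, rel, e2 in kb:
        for s in sentences:
            if e1 in s and e2 in s:  # both entities co-occur in the sentence
                examples.append((s, e1, e2, rel))
    return examples

for ex in distant_label(kb, sentences):
    print(ex)
```

The second sentence shows exactly the limitation the review lists: it mentions both entities but is not about founding, yet the heuristic still labels it `founded_by`.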
[Code Review] SimCLR Created: October 10, 2021 5:20 PM This is a report from a class project on code reproduction and review. We use the PyTorch implementation of SimCLR with the most stars. All source code and rights belong to sthalles/SimCLR and Google Research. Contents Introduction Run SimCLR Code Code Architecture Major Components (§2.1) SimCLR Algorithm to Code Feature Evaluation (§2.3, §4.2) Q&A Reference ..
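For context on what the reviewed code computes, here is a hedged NumPy sketch of SimCLR's NT-Xent loss for a single positive pair; the temperature and batch size are toy values, not the repo's settings:

```python
import numpy as np

# NT-Xent loss for one positive pair (i, j) out of 2N augmented views.
# z: (2N, d) array of projections; tau: temperature (toy value here).
def nt_xent_pair(z, i, j, tau=0.5):
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine similarity via dot products
    sim = z @ z.T / tau
    logits = np.delete(sim[i], i)  # all similarities of view i, excluding itself
    positive = sim[i, j]
    # -log( exp(sim(i,j)/tau) / sum_{k != i} exp(sim(i,k)/tau) )
    return -(positive - np.log(np.exp(logits).sum()))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))    # 4 images -> 8 augmented views (toy numbers)
loss = nt_xent_pair(z, 0, 1)
print(loss)                     # non-negative scalar; small when z[0] and z[1] align
```

Because the positive similarity also appears in the denominator, the loss is always non-negative and is minimized when the two views of the same image are far more similar to each other than to any other view in the batch.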
[NLP Foundational Paper Review #1] GloVe: Global Vectors for Word Representation (EMNLP 2014) by Jeffrey Pennington Paper : https://nlp.stanford.edu/pubs/glove.pdf Motivation Word2vec is a framework for learning word-vector representations. The key idea is to compute the similarity of word vectors given a context. Word2vec proposes two models, skip-gram and continuous bag-of-words (CBOW), whose objective function is to predict a word given its context. Because this approach cannot directly exploit co-occurrence statistics, within the corpus..
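To make the co-occurrence point concrete, here is a small sketch of the GloVe weighted least-squares objective, which fits word vectors directly to the log co-occurrence counts that word2vec's predictive objective never touches. The counts and dimensions are toy values:

```python
import numpy as np

# GloVe weighting function f(x) = (x / x_max)^alpha, capped at 1
# (x_max = 100, alpha = 0.75 are the paper's defaults).
def weight(x, x_max=100.0, alpha=0.75):
    return np.where(x < x_max, (x / x_max) ** alpha, 1.0)

# J = sum_ij f(X_ij) * (w_i . w~_j + b_i + b~_j - log X_ij)^2
def glove_loss(X, W, W_tilde, b, b_tilde):
    loss = 0.0
    V = X.shape[0]
    for i in range(V):
        for j in range(V):
            if X[i, j] > 0:  # only nonzero co-occurrences contribute
                pred = W[i] @ W_tilde[j] + b[i] + b_tilde[j]
                loss += weight(X[i, j]) * (pred - np.log(X[i, j])) ** 2
    return loss

rng = np.random.default_rng(0)
V, d = 5, 8                                           # toy vocabulary and dimension
X = rng.integers(0, 50, size=(V, V)).astype(float)    # toy co-occurrence counts
W, Wt = rng.normal(size=(V, d)), rng.normal(size=(V, d))
b, bt = np.zeros(V), np.zeros(V)
print(glove_loss(X, W, Wt, b, bt))  # non-negative scalar; training minimizes it
```

Training GloVe means running gradient descent on this sum over the whole co-occurrence matrix, so the global statistics enter the objective directly rather than implicitly through context-window prediction.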
