graph deep learning / #6 Graph Recurrent Networks
yuns
Introduction

There is a trend of using mechanisms from RNNs, such as the GRU or the LSTM, in the propagation step to diminish the restrictions of the vanilla GNN model and to improve the effectiveness of long-term information propagation across the graph.

6.1 Gated Graph Neural Networks

The GGNN uses the Gated Recurrent Unit (GRU) in the propagation step. It unrolls the recurrence for a fixed number of T steps and backpropagates through time to compute gradients. At each step, every node aggregates the hidden states of its neighbors and feeds the aggregated message, together with its own previous hidden state, into a GRU-style gated update; a minimal sketch follows.
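To make the update concrete, here is a minimal sketch of GGNN propagation, written under simplifying assumptions: a single dense adjacency matrix `adj` stands in for the paper's per-edge-type parameters (the role of $A_v$), and `torch.nn.GRUCell` supplies the gating equations (update gate $z$, reset gate $r$, candidate state). The names `GGNNPropagation`, `message`, and `num_steps` are illustrative, not from the original.

```python
import torch
import torch.nn as nn

class GGNNPropagation(nn.Module):
    """Sketch of GGNN propagation: T unrolled GRU updates over a graph.

    Assumption (not from the post): one dense adjacency matrix replaces
    the paper's separate in/out edge parameters."""

    def __init__(self, hidden_dim: int, num_steps: int):
        super().__init__()
        self.num_steps = num_steps
        self.message = nn.Linear(hidden_dim, hidden_dim)  # plays the role of A_v
        self.gru = nn.GRUCell(hidden_dim, hidden_dim)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h: (num_nodes, hidden_dim), adj: (num_nodes, num_nodes)
        for _ in range(self.num_steps):
            a = adj @ self.message(h)  # a_v^t: aggregate neighbor states
            h = self.gru(a, h)         # GRU update decides what to keep
        return h

# usage: 5 nodes, hidden size 16, 3 propagation steps
h = torch.randn(5, 16)
adj = (torch.rand(5, 5) > 0.5).float()
out = GGNNPropagation(16, 3)(h, adj)
print(out.shape)  # torch.Size([5, 16])
```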
6.2 Tree LSTM

LSTMs are used in a similar way to the GRU in the propagation process, based on a tree or a graph. There are two extensions to the basic LSTM architecture: the Child-Sum Tree-LSTM and the N-ary Tree-LSTM. Each Tree-LSTM unit contains input and output gates $i_v$ and $o_v$, a memory cell $c_v$, and a hidden state $h_v$. The Tree-LSTM unit abandons the single forget gate and instead uses a forget gate $f_{vk}$ for each child k, allowing node v to aggregate information from each of its children selectively; the Child-Sum variant is sketched below.
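Below is a minimal sketch of the Child-Sum Tree-LSTM cell described above: the gates $i_v$, $o_v$ and the candidate update read the sum of the children's hidden states, while the forget gate $f_{vk}$ is computed separately for each child. Class and parameter names (`ChildSumTreeLSTMCell`, `W_iou`, ...) are illustrative.

```python
import torch
import torch.nn as nn

class ChildSumTreeLSTMCell(nn.Module):
    """Sketch of a Child-Sum Tree-LSTM cell.

    One forget gate f_{vk} per child k (instead of a single forget
    gate), so node v decides per child how much memory to carry over."""

    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.W_iou = nn.Linear(in_dim, 3 * hid_dim)            # input-side weights
        self.U_iou = nn.Linear(hid_dim, 3 * hid_dim, bias=False)  # child-sum weights
        self.W_f = nn.Linear(in_dim, hid_dim)
        self.U_f = nn.Linear(hid_dim, hid_dim, bias=False)

    def forward(self, x, child_h, child_c):
        # x: (in_dim,); child_h, child_c: (num_children, hid_dim)
        h_sum = child_h.sum(dim=0)  # h~_v = sum over children's hidden states
        i, o, u = (self.W_iou(x) + self.U_iou(h_sum)).chunk(3, dim=-1)
        i, o, u = torch.sigmoid(i), torch.sigmoid(o), torch.tanh(u)
        f = torch.sigmoid(self.W_f(x) + self.U_f(child_h))  # f_{vk}, one per child
        c = i * u + (f * child_c).sum(dim=0)  # per-child gated memories + new input
        h = o * torch.tanh(c)
        return h, c

# usage: a node with 2 children
cell = ChildSumTreeLSTMCell(8, 16)
h, c = cell(torch.randn(8), torch.randn(2, 16), torch.randn(2, 16))
print(h.shape, c.shape)  # torch.Size([16]) torch.Size([16])
```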
6.3 Graph LSTM

Both Tree-LSTM methods can easily be applied to graphs. The Graph-structured LSTM [SIGKDD18] is one example of the N-ary Tree-LSTM applied to a graph; it is a simplified version, because each node in that graph has at most two incoming edges. Another variant of the Graph LSTM [SIGKDD17] is based on the relation extraction task. The biggest difference between a graph and a tree is that the edges of a graph carry labels (equations omitted). The Graph LSTM network was proposed to handle the semantic object parsing task; it uses a confidence-driven scheme to adaptively select the starting node and determine the node-updating sequence. A sketch of label-dependent aggregation is given after this paragraph.
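Since the post omits the equations, the following is only an assumed sketch of the key point, namely that edge labels select the weights: each label m gets its own matrix $U_m$, and a neighbor's contribution to the aggregated message depends on the label of the connecting edge. `EdgeLabelAggregation` and `num_labels` are hypothetical names.

```python
import torch
import torch.nn as nn

class EdgeLabelAggregation(nn.Module):
    """Sketch: aggregate neighbor states with label-specific weights.

    Assumption (the post omits the equations): each edge label m has
    its own matrix U_m, so a neighbor's contribution depends on the
    label of the edge connecting it to the current node."""

    def __init__(self, hid_dim: int, num_labels: int):
        super().__init__()
        # one weight matrix per edge label
        self.U = nn.ModuleList(nn.Linear(hid_dim, hid_dim, bias=False)
                               for _ in range(num_labels))

    def forward(self, neighbor_h: torch.Tensor, edge_labels: torch.Tensor) -> torch.Tensor:
        # neighbor_h: (num_neighbors, hid_dim); edge_labels: (num_neighbors,) ints
        parts = [self.U[int(m)](h) for h, m in zip(neighbor_h, edge_labels)]
        return torch.stack(parts).sum(dim=0)  # summed, label-aware message

# usage: 3 neighbors connected by 2 distinct edge labels
agg = EdgeLabelAggregation(16, num_labels=2)
msg = agg(torch.randn(3, 16), torch.tensor([0, 1, 0]))
print(msg.shape)  # torch.Size([16])
```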
6.4 Sentence LSTM (S-LSTM)

The S-LSTM was proposed to improve text encoding. It converts text into a graph and then leverages the Graph LSTM to learn the representation. The S-LSTM model regards each word as a node in the graph, and it adds a supernode. Each word node can aggregate information from its adjacent words as well as from the supernode, while the supernode can aggregate information from all of the word nodes as well as from itself. For NLP tasks, the hidden states of the word nodes can be used for word-level problems such as sequence labeling, and the hidden state of the supernode can serve sentence-level tasks such as classification. A simplified sketch of this connectivity follows.
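As a structural illustration only, here is a heavily simplified one-step exchange over the S-LSTM graph: each word node reads from its window neighbors and the supernode, and the supernode reads from every word node and itself. Mean pooling replaces the model's gated LSTM updates, and the window size of 1 is an assumption; `s_lstm_step` is a hypothetical name.

```python
import torch

def s_lstm_step(word_h: torch.Tensor, super_h: torch.Tensor):
    """One simplified S-LSTM exchange (averaging replaces gated updates).

    word_h: (n_words, d) word-node states; super_h: (d,) supernode state.
    Assumptions: window size 1 and mean aggregation stand in for the
    model's LSTM-style gates."""
    n = word_h.shape[0]
    new_words = torch.empty_like(word_h)
    for i in range(n):
        # each word node sees its left/right neighbors plus the supernode
        neigh = [word_h[j] for j in (i - 1, i + 1) if 0 <= j < n]
        new_words[i] = torch.stack([word_h[i], super_h, *neigh]).mean(dim=0)
    # the supernode sees every word node as well as itself
    new_super = torch.cat([word_h, super_h.unsqueeze(0)]).mean(dim=0)
    return new_words, new_super

# usage: a 5-word sentence with 8-dim node states
words, g = s_lstm_step(torch.randn(5, 8), torch.randn(8))
print(words.shape, g.shape)  # torch.Size([5, 8]) torch.Size([8])
```

In the full model, `new_words[i]` would feed word-level predictions (e.g., tagging) and `new_super` the sentence-level prediction, matching the division of labor described above.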