
Fractal

Flexible Recurrent ArChitecture TrAining Library


Introduction

Fractal is a library for online training and testing of unidirectional recurrent neural networks (RNNs). It is written in C++ and CUDA and runs on NVIDIA GPUs.

Fractal supports very flexible network design by adopting a graph-based layered structure. Complex networks such as LSTM can be represented by connecting basic layers.
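To illustrate the idea, here is a minimal sketch of composing an LSTM-like block from basic layers connected as a graph. All type and function names (`Layer`, `Network`, `buildLstmLikeBlock`, the layer names) are hypothetical and do not reflect Fractal's actual API:

```cpp
#include <string>
#include <utility>
#include <vector>

// Hypothetical graph-based network description: layers are nodes,
// connections are directed edges. Fractal's real API differs.
struct Layer {
    std::string name;
    int size;
};

struct Network {
    std::vector<Layer> layers;
    std::vector<std::pair<int, int>> connections;  // (src, dst) layer indices

    int addLayer(const std::string &name, int size) {
        layers.push_back({name, size});
        return static_cast<int>(layers.size()) - 1;
    }
    void connect(int src, int dst) { connections.emplace_back(src, dst); }
};

// Compose an LSTM-like block by wiring basic layers together.
Network buildLstmLikeBlock(int inputSize, int cellSize) {
    Network net;
    int input   = net.addLayer("input", inputSize);
    int inGate  = net.addLayer("input_gate", cellSize);
    int fGate   = net.addLayer("forget_gate", cellSize);
    int cell    = net.addLayer("cell", cellSize);
    int outGate = net.addLayer("output_gate", cellSize);
    int output  = net.addLayer("output", cellSize);

    net.connect(input, inGate);
    net.connect(input, fGate);
    net.connect(input, cell);
    net.connect(input, outGate);
    net.connect(inGate, cell);
    net.connect(fGate, cell);
    net.connect(cell, outGate);
    net.connect(cell, output);
    net.connect(output, cell);  // recurrent edge makes the block an RNN
    return net;
}
```

The point of the graph view is that recurrence and gating are expressed purely as edges between simple layers, so new architectures need no dedicated code paths.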

The main purpose of Fractal is to train unidirectional RNNs for online applications (e.g., continuously running RNNs). Therefore, training is performed on infinite training streams rather than finite sequences. These streams may be naturally infinite, or can be generated artificially by concatenating training sequences. The objective is to make the resulting RNN run on an infinite input stream.
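As a rough sketch of how a finite pool of sequences can be turned into an unbounded stream by concatenation, consider the following. The `ConcatStream` class is a hypothetical helper for illustration only, not part of Fractal:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Hypothetical helper: presents a pool of finite training sequences as a
// conceptually infinite stream by concatenating them end to end, cycling
// over the pool. (Illustration only; not Fractal's API.)
class ConcatStream {
public:
    explicit ConcatStream(std::vector<std::vector<float>> sequences)
        : seqs_(std::move(sequences)) {}

    // Returns the next frame of the stream; never runs out.
    float next() {
        float v = seqs_[seq_][pos_];
        if (++pos_ == seqs_[seq_].size()) {    // current sequence exhausted:
            pos_ = 0;                          // restart position and
            seq_ = (seq_ + 1) % seqs_.size();  // move to the next sequence
        }
        return v;
    }

private:
    std::vector<std::vector<float>> seqs_;
    std::size_t seq_ = 0, pos_ = 0;
};
```

In practice the concatenation order would typically be randomized per stream, but the key property is the same: the consumer sees a single uninterrupted input stream, matching the online deployment setting.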

The graph-based generalization of RNNs and the parallelization algorithm are based on [1]. The online CTC algorithm is described in [2].

License

Apache License 2.0

Author

Fractal was written by Kyuyeon Hwang during his Ph.D. studies at the Signal Processing Systems Lab., Seoul National University (advisor: Prof. Wonyong Sung).

References

[1] Kyuyeon Hwang and Wonyong Sung. "Single stream parallelization of generalized LSTM-like RNNs on a GPU." ICASSP 2015.

[2] Kyuyeon Hwang and Wonyong Sung. "Online Sequence Training of Recurrent Neural Networks with Connectionist Temporal Classification." arXiv preprint arXiv:1511.06841 (2015).