seqlearner.GensimWord2Vec(sequences, word_length, window_size, emb_dim, loss, epochs)
Word2Vec is an embedding method. This class implements the Word2Vec embedding method to apply to a set of sequences. It is a child class of WordEmbedder.
You can train an embedding layer on the vocabulary in order to obtain embedding weights for each word in the vocabulary, compressing each word into an emb_dim-dimensional vector. The freq2vec_maker method returns the embedding weights of the vocabulary. You can access the vocabulary via
- sequences: NumPy ndarray, list, or DataFrame; sequences of data, such as protein sequences.
- word_length: Positive integer, the length of each word into which the sequences are split.
- window_size: Positive integer, the size of the window used to count neighboring words.
- emb_dim: Positive integer, the number of dimensions of the embedding vectors.
- loss: String, the loss function to be used during the training phase.
- epochs: Positive integer, the number of epochs for training the embedding.
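To give intuition for the word_length parameter, the sketch below shows one plausible way a sequence could be segmented into words of that length. This is an illustration in plain Python, not seqlearner's own implementation; whether the library uses non-overlapping words (as here) or overlapping k-mers is an assumption.

```python
def split_into_words(sequence, word_length):
    """Split a sequence into consecutive, non-overlapping words
    of length word_length (illustrative sketch only)."""
    return [sequence[i:i + word_length]
            for i in range(0, len(sequence) - word_length + 1, word_length)]

# A toy protein sequence split into 3-letter words:
print(split_into_words("MKTAYIAKQR", 3))  # ['MKT', 'AYI', 'AKQ']
```

Each resulting word then plays the role of a token in the Word2Vec vocabulary.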
freq2vec_maker()
This method of the GensimWord2Vec class embeds your vocabulary: it trains the embedding layer on the vocabulary to obtain embedding weights for each word, compressing each into an emb_dim-dimensional vector. This method accepts no arguments.
Example: computing the embedding of protein sequences
import pandas as pd
from seqlearner import GensimWord2Vec

sequences = pd.read_csv("./protein_sequences.csv", header=None)
freq2vec = GensimWord2Vec(sequences, word_length=3, window_size=5,
                          emb_dim=25, loss="mean_squared_error", epochs=250)
freq2vec.freq2vec_maker()
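To illustrate what the window_size parameter controls, the hedged sketch below (plain Python, not seqlearner code) collects skip-gram style (center, context) pairs, taking neighbors up to window_size positions away from each word:

```python
def context_pairs(words, window_size):
    """Collect (center, context) pairs for every word, pairing it
    with neighbors up to window_size positions away (sketch only)."""
    pairs = []
    for i, center in enumerate(words):
        lo = max(0, i - window_size)
        hi = min(len(words), i + window_size + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((center, words[j]))
    return pairs

words = ["MKT", "AYI", "AKQ"]
print(context_pairs(words, 1))
# [('MKT', 'AYI'), ('AYI', 'MKT'), ('AYI', 'AKQ'), ('AKQ', 'AYI')]
```

A larger window_size counts more distant words as neighbors, which tends to capture broader, more topical similarity between words.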