tatk.e2e.sequicity package

Submodules

tatk.e2e.sequicity.config module

tatk.e2e.sequicity.metric module

class tatk.e2e.sequicity.metric.BLEUScorer

Bases: object

__init__()

Initialize self. See help(type(self)) for accurate signature.

score(parallel_corpus)
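BLEUScorer.score consumes a parallel corpus of (hypothesis, references) pairs. A minimal sketch of the smoothed corpus-level BLEU such a scorer typically computes (function names and the +1 smoothing are illustrative assumptions, not the tatk implementation):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def corpus_bleu(parallel_corpus, max_n=4):
    """Smoothed corpus-level BLEU over (hypothesis, references) pairs."""
    match = [0] * max_n      # clipped n-gram matches per order
    total = [0] * max_n      # candidate n-gram counts per order
    hyp_len = ref_len = 0
    for hyp, refs in parallel_corpus:
        hyp_len += len(hyp)
        # closest reference length, used for the brevity penalty
        ref_len += min((abs(len(r) - len(hyp)), len(r)) for r in refs)[1]
        for n in range(1, max_n + 1):
            hyp_counts = Counter(ngrams(hyp, n))
            max_ref = Counter()
            for r in refs:
                for g, c in Counter(ngrams(r, n)).items():
                    max_ref[g] = max(max_ref[g], c)
            match[n - 1] += sum(min(c, max_ref[g]) for g, c in hyp_counts.items())
            total[n - 1] += max(len(hyp) - n + 1, 0)
    # +1 smoothing avoids log(0) when higher-order matches are absent
    log_prec = sum(math.log((m + 1) / (t + 1)) for m, t in zip(match, total)) / max_n
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / max(hyp_len, 1))
    return bp * math.exp(log_prec)
```

With this smoothing, a hypothesis identical to its reference scores 1.0; partial overlaps fall strictly between 0 and 1.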
class tatk.e2e.sequicity.metric.CamRestEvaluator(result_path)

Bases: tatk.e2e.sequicity.metric.GenericEvaluator

__init__(result_path)

get_entities(entity_data)
match_metric(data, sub='match', raw_data=None)
run_metrics()
success_f1_metric(data, sub='successf1')
class tatk.e2e.sequicity.metric.GenericEvaluator(result_path)

Bases: object

__init__(result_path)

bleu_metric(data, type='bleu')
clean(s)
dump()
pack_dial(data)
read_result_data()
run_metrics()
class tatk.e2e.sequicity.metric.KvretEvaluator(result_path)

Bases: tatk.e2e.sequicity.metric.GenericEvaluator

__init__(result_path)

clean_by_intent(s, i)
constraint_same(truth_cons, gen_cons)
match_rate_metric(data, sub='match', bspans='./data/kvret/test.bspan.pkl')
run_metrics()
success_f1_metric(data, sub='successf1')
class tatk.e2e.sequicity.metric.MultiWozEvaluator(result_path)

Bases: tatk.e2e.sequicity.metric.GenericEvaluator

__init__(result_path)

get_entities(entity_data)
match_metric(data, sub='match', raw_data=None)
run_metrics()
success_f1_metric(data, sub='successf1')
tatk.e2e.sequicity.metric.metric_handler()
tatk.e2e.sequicity.metric.report(func)
tatk.e2e.sequicity.metric.setsim(a, b)
tatk.e2e.sequicity.metric.setsub(a, b)
tatk.e2e.sequicity.metric.similar(a, b)
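The helpers setsub and setsim compare slot-value collections during entity matching. A plausible sketch of what such set-based utilities compute, as a subset test and a Jaccard-style similarity (the exact tatk definitions may differ):

```python
def setsub(a, b):
    """True when every element of a also appears in b (subset test)."""
    return set(a) <= set(b)

def setsim(a, b):
    """Jaccard-style similarity between two collections, in [0, 1]."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # two empty constraint sets count as identical
    return len(a & b) / len(a | b)
```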

tatk.e2e.sequicity.model module

class tatk.e2e.sequicity.model.Model(dataset)

Bases: object

__init__(dataset)

count_params()
eval(data='test')
freeze_module(module)
interact()
load_glove_embedding(freeze=False)
load_model(path=None)
predict(usr, kw_ret)
reinforce_tune()
save_model(epoch, path=None)
train()
training_adjust(epoch)
unfreeze_module(module)
validate(data='dev')
tatk.e2e.sequicity.model.main(arg_mode=None, arg_model=None)

tatk.e2e.sequicity.reader module

class tatk.e2e.sequicity.reader.CamRest676Reader

Bases: tatk.e2e.sequicity.reader._ReaderBase

__init__()

class tatk.e2e.sequicity.reader.KvretReader

Bases: tatk.e2e.sequicity.reader._ReaderBase

__init__()

db_degree(constraints, items)
db_degree_handler(z_samples, idx=None, *args, **kwargs)

Returns the degree of database search, which may be used to control further decoding. The degree is a one-hot vector indicating the number of entries found: [0, 1, 2, 3, 4, >=5].

:param z_samples: nested list of B * [T]
:return: a one-hot numpy control vector
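The degree vector described above can be sketched directly: bucket the number of matched database entries into six one-hot slots covering [0, 1, 2, 3, 4, >=5] (a minimal illustration, not the tatk code):

```python
def degree_vector(entry_count, degree_size=6):
    """One-hot vector over [0, 1, 2, 3, 4, >=5] matched DB entries."""
    vec = [0.0] * degree_size
    # counts of 5 or more all land in the final slot
    vec[min(entry_count, degree_size - 1)] = 1.0
    return vec
```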

class tatk.e2e.sequicity.reader.MultiWozReader

Bases: tatk.e2e.sequicity.reader._ReaderBase

__init__()

wrap_result(turn_batch, gen_m, gen_z, eos_syntax=None, prev_z=None)

Wrap generated results.

:param gen_z:
:param gen_m:
:param turn_batch: dict of [i_1,i_2,…,i_b] with keys
:return:

tatk.e2e.sequicity.reader.clean_replace(s, r, t, forward=True, backward=False)
tatk.e2e.sequicity.reader.get_glove_matrix(vocab, initial_embedding_np)

Return a GloVe embedding matrix.

:param vocab:
:param initial_embedding_np:
:return: np array of [V,E]

tatk.e2e.sequicity.reader.pad_sequences(sequences, maxlen=None, dtype='int32', padding='pre', truncating='pre', value=0.0)
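pad_sequences mirrors the well-known Keras utility of the same name: it pads (or truncates) variable-length id sequences to a common length. A self-contained numpy re-implementation for illustration (behavior matched to the signature above, not copied from the tatk source):

```python
import numpy as np

def pad_sequences(sequences, maxlen=None, dtype='int32',
                  padding='pre', truncating='pre', value=0.0):
    """Pad/truncate a list of token-id lists into a [B, maxlen] array."""
    if maxlen is None:
        maxlen = max(len(s) for s in sequences)
    out = np.full((len(sequences), maxlen), value, dtype=dtype)
    for i, seq in enumerate(sequences):
        if not len(seq):
            continue
        # 'pre' truncating keeps the most recent tokens
        trunc = seq[-maxlen:] if truncating == 'pre' else seq[:maxlen]
        if padding == 'pre':
            out[i, -len(trunc):] = trunc   # pad on the left
        else:
            out[i, :len(trunc)] = trunc    # pad on the right
    return out
```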

tatk.e2e.sequicity.tsd_net module

class tatk.e2e.sequicity.tsd_net.Attn(hidden_size)

Bases: torch.nn.modules.module.Module

__init__(hidden_size)

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(hidden, encoder_outputs, normalize=True)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

score(hidden, encoder_outputs)
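Attn implements soft attention over encoder outputs: score each encoder state against the decoder hidden state, then normalize with a softmax and take the weighted sum. A numpy sketch of dot-product attention, illustrating only the score-then-normalize pattern (the tatk module uses learned projections):

```python
import numpy as np

def attention(hidden, encoder_outputs):
    """hidden: [H], encoder_outputs: [T, H] -> context [H], weights [T]."""
    scores = encoder_outputs @ hidden            # one dot-product score per step
    scores = scores - scores.max()               # stabilize the softmax
    weights = np.exp(scores) / np.exp(scores).sum()
    context = weights @ encoder_outputs          # weighted sum of encoder states
    return context, weights
```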
class tatk.e2e.sequicity.tsd_net.BSpanDecoder(embed_size, hidden_size, vocab_size, dropout_rate)

Bases: torch.nn.modules.module.Module

__init__(embed_size, hidden_size, vocab_size, dropout_rate)

forward(u_enc_out, z_tm1, last_hidden, u_input_np, pv_z_enc_out, prev_z_input_np, u_emb, pv_z_emb)

Defines the computation performed at every call. See the note under Attn.forward.

class tatk.e2e.sequicity.tsd_net.ResponseDecoder(embed_size, hidden_size, vocab_size, degree_size, dropout_rate, gru, proj, emb, vocab)

Bases: torch.nn.modules.module.Module

__init__(embed_size, hidden_size, vocab_size, degree_size, dropout_rate, gru, proj, emb, vocab)

forward(z_enc_out, u_enc_out, u_input_np, m_t_input, degree_input, last_hidden, z_input_np)

Defines the computation performed at every call. See the note under Attn.forward.

get_sparse_selective_input(x_input_np)
class tatk.e2e.sequicity.tsd_net.SimpleDynamicEncoder(input_size, embed_size, hidden_size, n_layers, dropout)

Bases: torch.nn.modules.module.Module

__init__(input_size, embed_size, hidden_size, n_layers, dropout)

forward(input_seqs, input_lens, hidden=None)

Forward procedure. Inputs need not be sorted by length.

:param input_seqs: Variable of [T,B]
:param input_lens: numpy array of lengths, one per input sequence
:param hidden:
:return:

class tatk.e2e.sequicity.tsd_net.TSD(embed_size, hidden_size, vocab_size, degree_size, layer_num, dropout_rate, z_length, max_ts, beam_search=False, teacher_force=100, **kwargs)

Bases: torch.nn.modules.module.Module

__init__(embed_size, hidden_size, vocab_size, degree_size, layer_num, dropout_rate, z_length, max_ts, beam_search=False, teacher_force=100, **kwargs)

beam_search_decode(pz_dec_outs, u_enc_out, m_tm1, u_input_np, last_hidden, degree_input, bspan_index)
beam_search_decode_single(pz_dec_outs, u_enc_out, m_tm1, u_input_np, last_hidden, degree_input, bspan_index)
bspan_decoder(u_enc_out, z_tm1, last_hidden, u_input_np, pv_z_enc_out, prev_z_input_np, u_emb, pv_z_emb)
finish_episode(log_probas, saved_rewards)
forward(u_input, u_input_np, m_input, m_input_np, z_input, u_len, m_len, turn_states, degree_input, mode, **kwargs)

Defines the computation performed at every call. See the note under Attn.forward.

forward_turn(u_input, u_len, turn_states, mode, degree_input, u_input_np, m_input_np=None, m_input=None, m_len=None, z_input=None, **kwargs)

Compute the required outputs for a single dialogue turn. The turn states (dict) are updated on each call.

:param u_input: [T,B]
:param u_input_np:
:param u_len:
:param turn_states:
:param degree_input:
:param m_input: [T,B]
:param m_input_np:
:param m_len:
:param z_input: [T,B]
:return:

get_req_slots(bspan_index)
greedy_decode(pz_dec_outs, u_enc_out, m_tm1, u_input_np, last_hidden, degree_input, bspan_index)
reward(m_tm1, decoded, bspan_index)

The reward function is heuristic and could be optimized further.

:param m_tm1:
:param decoded:
:param bspan_index:
:return:

sampling_decode(pz_dec_outs, u_enc_out, m_tm1, u_input_np, last_hidden, degree_input, bspan_index)
sampling_decode_single(pz_dec_outs, u_enc_out, m_tm1, u_input_np, last_hidden, degree_input, bspan_index)
self_adjust(epoch)
supervised_loss(pz_proba, pm_dec_proba, z_input, m_input)
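finish_episode closes out a REINFORCE episode during reinforce_tune: the saved per-step rewards are turned into discounted returns that weight the stored log-probabilities. A minimal sketch of the standard discounted-return computation (the discount factor and any normalization are assumptions, not read from the tatk code):

```python
def discounted_returns(rewards, gamma=0.99):
    """Backward pass turning per-step rewards into returns G_t = r_t + gamma * G_{t+1}."""
    returns, running = [], 0.0
    for r in reversed(rewards):
        running = r + gamma * running
        returns.append(running)
    return returns[::-1]  # restore chronological order
```

In a REINFORCE update, each return would then multiply the negative log-probability of the action taken at that step.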
tatk.e2e.sequicity.tsd_net.cuda_(var)
tatk.e2e.sequicity.tsd_net.get_sparse_input_aug(x_input_np)

Build a sparse representation of the input.

:param x_input_np: [T,B]
:return: numpy array of [B,T,aug_V]
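get_sparse_input_aug expands a [T,B] index matrix into a per-batch one-hot tensor, as used for copy attention over an augmented vocabulary. A dense numpy illustration of the same expansion (the real function builds a sparse array and a larger aug_V axis; vocab_size here is a stand-in):

```python
import numpy as np

def one_hot_input(x_input_np, vocab_size):
    """Expand a [T, B] int matrix into a [B, T, vocab_size] one-hot array."""
    T, B = x_input_np.shape
    out = np.zeros((B, T, vocab_size), dtype=np.float32)
    for t in range(T):
        for b in range(B):
            out[b, t, x_input_np[t, b]] = 1.0  # mark the token id at (t, b)
    return out
```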

tatk.e2e.sequicity.tsd_net.init_gru(gru)
tatk.e2e.sequicity.tsd_net.nan(v)
tatk.e2e.sequicity.tsd_net.toss_(p)
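toss_(p) is a stochastic gate, plausibly used for the teacher-forcing decision (note TSD's teacher_force=100 default, which reads like a percentage). A hedged sketch of such a percentage-based coin toss; the name and scale are assumptions about the tatk helper:

```python
import random

def toss(p):
    """Return True with probability p percent (p in [0, 100])."""
    return random.random() * 100 < p
```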