convlab2.e2e.sequicity package

Submodules

convlab2.e2e.sequicity.config module

convlab2.e2e.sequicity.metric module

class convlab2.e2e.sequicity.metric.BLEUScorer

Bases: object

score(parallel_corpus)
class convlab2.e2e.sequicity.metric.CamRestEvaluator(result_path)

Bases: convlab2.e2e.sequicity.metric.GenericEvaluator

get_entities(entity_data)
match_metric(data, sub='match', raw_data=None)
run_metrics()
success_f1_metric(data, sub='successf1')
class convlab2.e2e.sequicity.metric.GenericEvaluator(result_path)

Bases: object

bleu_metric(data, type='bleu')
clean(s)
dump()
pack_dial(data)
read_result_data()
run_metrics()
class convlab2.e2e.sequicity.metric.KvretEvaluator(result_path)

Bases: convlab2.e2e.sequicity.metric.GenericEvaluator

clean_by_intent(s, i)
constraint_same(truth_cons, gen_cons)
match_rate_metric(data, sub='match', bspans='./data/kvret/test.bspan.pkl')
run_metrics()
success_f1_metric(data, sub='successf1')
class convlab2.e2e.sequicity.metric.MultiWozEvaluator(result_path)

Bases: convlab2.e2e.sequicity.metric.GenericEvaluator

get_entities(entity_data)
match_metric(data, sub='match', raw_data=None)
run_metrics()
success_f1_metric(data, sub='successf1')
convlab2.e2e.sequicity.metric.metric_handler()
convlab2.e2e.sequicity.metric.report(func)
convlab2.e2e.sequicity.metric.setsim(a, b)
convlab2.e2e.sequicity.metric.setsub(a, b)
convlab2.e2e.sequicity.metric.similar(a, b)
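
The set helpers above (setsub, setsim, similar) support entity matching between generated and gold results. A minimal sketch of what such set-comparison utilities typically compute (the function names and exact semantics below are illustrative assumptions, not the ConvLab-2 implementations):

```python
def set_subset_ratio(a, b):
    """Fraction of elements of `a` that also occur in `b`.
    Illustrative stand-in for a setsub-style helper."""
    a, b = set(a), set(b)
    if not a:
        return 1.0  # an empty requirement set is trivially covered
    return len(a & b) / len(a)


def set_similarity(a, b):
    """Jaccard similarity of two sets.
    Illustrative stand-in for a setsim-style helper."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)
```

Helpers of this kind back the match and success-F1 metrics, where generated entities are compared against the entities required by the gold dialogue.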

convlab2.e2e.sequicity.model module

class convlab2.e2e.sequicity.model.Model(dataset)

Bases: object

count_params()
eval(data='test')
freeze_module(module)
interact()
load_glove_embedding(freeze=False)
load_model(path=None)
predict(usr, kw_ret)
reinforce_tune()
save_model(epoch, path=None)
train()
training_adjust(epoch)
unfreeze_module(module)
validate(data='dev')
convlab2.e2e.sequicity.model.main(arg_mode=None, arg_model=None)

convlab2.e2e.sequicity.reader module

class convlab2.e2e.sequicity.reader.CamRest676Reader

Bases: convlab2.e2e.sequicity.reader._ReaderBase

class convlab2.e2e.sequicity.reader.KvretReader

Bases: convlab2.e2e.sequicity.reader._ReaderBase

db_degree(constraints, items)
db_degree_handler(z_samples, idx=None, *args, **kwargs)

Returns the degree of database search, which may be used to control further decoding. The degree is a one-hot vector indicating the number of entries found: [0, 1, 2, 3, 4, >=5].

Parameters: z_samples – nested list of B * [T]
Returns: a one-hot numpy control vector
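
The degree encoding described above can be sketched as follows (a standalone illustration of the [0, 1, 2, 3, 4, >=5] one-hot scheme, not the actual reader code):

```python
import numpy as np

def degree_one_hot(num_entries, degree_size=6):
    """Encode a DB match count as a one-hot vector over
    [0, 1, 2, 3, 4, >=5]; counts of 5 or more share the last slot."""
    vec = np.zeros(degree_size, dtype=np.float32)
    vec[min(num_entries, degree_size - 1)] = 1.0
    return vec
```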

class convlab2.e2e.sequicity.reader.MultiWozReader

Bases: convlab2.e2e.sequicity.reader._ReaderBase

wrap_result(turn_batch, gen_m, gen_z, eos_syntax=None, prev_z=None)

Wrap generated results for evaluation.

Parameters:
   turn_batch – dict of [i_1, i_2, …, i_b] with keys
   gen_m – generated machine responses
   gen_z – generated belief spans

convlab2.e2e.sequicity.reader.clean_replace(s, r, t, forward=True, backward=False)
convlab2.e2e.sequicity.reader.get_glove_matrix(vocab, initial_embedding_np)

Return a GloVe embedding matrix for the given vocabulary.

Parameters:
   vocab – vocabulary whose words are looked up in the GloVe file
   initial_embedding_np – initial embedding matrix to be filled with pretrained vectors
Returns: np array of [V, E]
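
Loading pretrained vectors into an embedding matrix typically looks like the following sketch. The file path, word-to-index mapping, and function name here are illustrative assumptions; ConvLab-2's version works from its own vocabulary object and config:

```python
import numpy as np

def load_glove_matrix(glove_path, word2idx, initial_embedding):
    """Overwrite rows of `initial_embedding` ([V, E]) with pretrained
    GloVe vectors for every vocabulary word found in the file.
    Words absent from the file keep their initial rows."""
    emb = initial_embedding.copy()
    dim = emb.shape[1]
    with open(glove_path, encoding='utf-8') as f:
        for line in f:
            parts = line.rstrip().split(' ')
            word, vec = parts[0], parts[1:]
            if word in word2idx and len(vec) == dim:
                emb[word2idx[word]] = np.asarray(vec, dtype=np.float32)
    return emb
```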

convlab2.e2e.sequicity.reader.pad_sequences(sequences, maxlen=None, dtype='int32', padding='pre', truncating='pre', value=0.0)
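
pad_sequences follows the familiar Keras-style contract; a minimal reimplementation for illustration (a sketch, not the ConvLab-2 source):

```python
import numpy as np

def pad_sequences(sequences, maxlen=None, dtype='int32',
                  padding='pre', truncating='pre', value=0.0):
    """Pad (or truncate) a list of token-id lists to a [B, maxlen] array.
    'pre' pads/truncates at the front, 'post' at the back."""
    if maxlen is None:
        maxlen = max(len(s) for s in sequences)
    out = np.full((len(sequences), maxlen), value, dtype=dtype)
    for i, seq in enumerate(sequences):
        if not seq:
            continue
        trunc = seq[-maxlen:] if truncating == 'pre' else seq[:maxlen]
        if padding == 'pre':
            out[i, -len(trunc):] = trunc
        else:
            out[i, :len(trunc)] = trunc
    return out
```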

convlab2.e2e.sequicity.tsd_net module

class convlab2.e2e.sequicity.tsd_net.Attn(hidden_size)

Bases: torch.nn.modules.module.Module

forward(hidden, encoder_outputs, normalize=True)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

score(hidden, encoder_outputs)
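
Attn.forward computes attention weights over the encoder outputs; with normalize=True the scores pass through a softmax. A numpy sketch of the general pattern (dot-product scoring is used here for brevity; the actual module learns its scoring function):

```python
import numpy as np

def attention(hidden, encoder_outputs, normalize=True):
    """hidden: [H], encoder_outputs: [T, H].
    Returns attention weights over the T encoder positions."""
    scores = encoder_outputs @ hidden          # [T] dot-product scores
    if normalize:
        e = np.exp(scores - scores.max())      # numerically stable softmax
        return e / e.sum()
    return scores
```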
class convlab2.e2e.sequicity.tsd_net.BSpanDecoder(embed_size, hidden_size, vocab_size, dropout_rate)

Bases: torch.nn.modules.module.Module

forward(u_enc_out, z_tm1, last_hidden, u_input_np, pv_z_enc_out, prev_z_input_np, u_emb, pv_z_emb)

class convlab2.e2e.sequicity.tsd_net.ResponseDecoder(embed_size, hidden_size, vocab_size, degree_size, dropout_rate, gru, proj, emb, vocab)

Bases: torch.nn.modules.module.Module

forward(z_enc_out, u_enc_out, u_input_np, m_t_input, degree_input, last_hidden, z_input_np)

get_sparse_selective_input(x_input_np)
class convlab2.e2e.sequicity.tsd_net.SimpleDynamicEncoder(input_size, embed_size, hidden_size, n_layers, dropout)

Bases: torch.nn.modules.module.Module

forward(input_seqs, input_lens, hidden=None)

Forward procedure. Inputs need not be sorted by length.

Parameters:
   input_seqs – Variable of [T, B]
   input_lens – numpy array of lengths for each input sequence
   hidden – initial hidden state (optional)

class convlab2.e2e.sequicity.tsd_net.TSD(embed_size, hidden_size, vocab_size, degree_size, layer_num, dropout_rate, z_length, max_ts, beam_search=False, teacher_force=100, **kwargs)

Bases: torch.nn.modules.module.Module

beam_search_decode(pz_dec_outs, u_enc_out, m_tm1, u_input_np, last_hidden, degree_input, bspan_index)
beam_search_decode_single(pz_dec_outs, u_enc_out, m_tm1, u_input_np, last_hidden, degree_input, bspan_index)
bspan_decoder(u_enc_out, z_tm1, last_hidden, u_input_np, pv_z_enc_out, prev_z_input_np, u_emb, pv_z_emb)
finish_episode(log_probas, saved_rewards)
forward(u_input, u_input_np, m_input, m_input_np, z_input, u_len, m_len, turn_states, degree_input, mode, **kwargs)

forward_turn(u_input, u_len, turn_states, mode, degree_input, u_input_np, m_input_np=None, m_input=None, m_len=None, z_input=None, **kwargs)

Compute the required outputs for a single dialogue turn. The turn state (a dict) is updated in each call.

Parameters:
   u_input, m_input, z_input – [T, B]
   u_input_np, m_input_np – numpy versions of the inputs
   u_len, m_len – sequence lengths
   turn_states – per-dialogue state dict, updated in place

get_req_slots(bspan_index)
greedy_decode(pz_dec_outs, u_enc_out, m_tm1, u_input_np, last_hidden, degree_input, bspan_index)
reward(m_tm1, decoded, bspan_index)

The reward function is heuristic and could be better optimized.

Parameters: m_tm1, decoded, bspan_index

sampling_decode(pz_dec_outs, u_enc_out, m_tm1, u_input_np, last_hidden, degree_input, bspan_index)
sampling_decode_single(pz_dec_outs, u_enc_out, m_tm1, u_input_np, last_hidden, degree_input, bspan_index)
self_adjust(epoch)
supervised_loss(pz_proba, pm_dec_proba, z_input, m_input)
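
Sequicity's supervised objective combines the cross-entropy of the belief-span decoder with that of the response decoder. A numpy sketch of this combination (shapes and naming are illustrative, not the TSD implementation):

```python
import numpy as np

def supervised_loss(pz_proba, z_target, pm_proba, m_target):
    """Sum of the two decoders' negative log-likelihoods.
    p*_proba: [T, V] per-step distributions; *_target: [T] gold ids."""
    def nll(proba, target):
        eps = 1e-10  # guard against log(0)
        return -np.log(proba[np.arange(len(target)), target] + eps).mean()
    return nll(pz_proba, z_target) + nll(pm_proba, m_target)
```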
convlab2.e2e.sequicity.tsd_net.cuda_(var)
convlab2.e2e.sequicity.tsd_net.get_sparse_input_aug(x_input_np)

Build a sparse one-hot representation of the input.

Parameters: x_input_np – [T, B]
Returns: numpy array of [B, T, aug_V]
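
The transformation above, from token indices [T, B] to a one-hot tensor [B, T, aug_V], can be sketched as follows (a dense illustration; the real function builds a sparse matrix, and aug_V denotes a vocabulary augmented for the copy mechanism):

```python
import numpy as np

def one_hot_input(x_input_np, aug_vocab_size):
    """Turn a [T, B] array of token ids into a one-hot
    [B, T, aug_V] array."""
    T, B = x_input_np.shape
    out = np.zeros((B, T, aug_vocab_size), dtype=np.float32)
    for t in range(T):
        for b in range(B):
            out[b, t, x_input_np[t, b]] = 1.0
    return out
```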

convlab2.e2e.sequicity.tsd_net.init_gru(gru)
convlab2.e2e.sequicity.tsd_net.nan(v)
convlab2.e2e.sequicity.tsd_net.toss_(p)

Module contents