convlab2.e2e.damd.multiwoz package
Submodules
convlab2.e2e.damd.multiwoz.clean_dataset module
convlab2.e2e.damd.multiwoz.clean_dataset.clean_slot_values(domain, slot, value)

convlab2.e2e.damd.multiwoz.clean_dataset.clean_text(text)

convlab2.e2e.damd.multiwoz.clean_dataset.clean_time(utter)
convlab2.e2e.damd.multiwoz.config module
convlab2.e2e.damd.multiwoz.damd module
Created on Mon Mar 23 21:03:36 2020
@author: truthless
class convlab2.e2e.damd.multiwoz.damd.Damd(model_file='https://convlab.blob.core.windows.net/convlab-2/damd_multiwoz.zip', name='DAMD')
Bases: convlab2.dialog_agent.agent.Agent

add_torch_input(inputs, first_turn=False)

init_session()
Reset the class variables to prepare for a new session.

load_model(path=None)

response(usr)
Generate agent response given user input.
- Args:
observation (str): The input to the agent.
- Returns:
response (str): The response generated by the agent.
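Example: a minimal usage sketch of the Damd agent. The import path follows the module path documented above; downloading the default pretrained model_file is assumed to succeed.

    from convlab2.e2e.damd.multiwoz.damd import Damd

    # Load the pretrained end-to-end model from the default model_file URL.
    agent = Damd()
    agent.init_session()  # reset internal state before a new dialogue

    # Single turn: response() maps a user utterance to a system utterance.
    print(agent.response("I am looking for a cheap restaurant in the centre."))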
convlab2.e2e.damd.multiwoz.damd_net module
class convlab2.e2e.damd.multiwoz.damd_net.ActSelectionModel(hidden_size, length, nbest)
Bases: torch.nn.modules.module.Module

forward(hiddens_batch)
- Parameters
hiddens_batch – [B, nbest, T, H]
decoded_batch – [B, nbest, T]
class convlab2.e2e.damd.multiwoz.damd_net.ActSpanDecoder(embedding, vocab_size_oov, Wgen=None, dropout=0.0)
Bases: torch.nn.modules.module.Module

forward(inputs, hidden_states, dec_last_w, dec_last_h, first_turn, first_step, bidx=None, mode='train')
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.

get_probs(inputs, hidden_states, dec_hs, first_turn=False, bidx=None)
- Parameters
dec_hs – decoder hidden states [B, Tdec, H]
dec_ws – word index [B, Tdec]
class convlab2.e2e.damd.multiwoz.damd_net.Attn(hidden_size)
Bases: torch.nn.modules.module.Module

forward(hidden, encoder_outputs, mask=None)
- Parameters
hidden – tensor of size [n_layer, B, H]
encoder_outputs – tensor of size [B, T, H]

score(hidden, encoder_outputs)
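For orientation, a self-contained sketch of the shape contract of forward(hidden, encoder_outputs, mask=None). Plain dot-product scoring is an assumption here, not necessarily what Attn.score implements.

    import torch
    import torch.nn.functional as F

    def attend(hidden, encoder_outputs, mask=None):
        # hidden: [n_layer, B, H]; encoder_outputs: [B, T, H]
        query = hidden[-1].unsqueeze(1)                             # [B, 1, H]
        scores = torch.bmm(query, encoder_outputs.transpose(1, 2))  # [B, 1, T]
        if mask is not None:
            scores = scores.masked_fill(mask.unsqueeze(1) == 0, -1e9)
        weights = F.softmax(scores, dim=-1)                         # [B, 1, T]
        return torch.bmm(weights, encoder_outputs)                  # [B, 1, H]

    ctx = attend(torch.randn(1, 2, 8), torch.randn(2, 5, 8))
    print(ctx.shape)  # torch.Size([2, 1, 8])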
class convlab2.e2e.damd.multiwoz.damd_net.BeamSearchNode(hiddenstate, previousNode, wordId, logProb, length, rank=None)
Bases: object

eval(alpha=0)

print_node()
class convlab2.e2e.damd.multiwoz.damd_net.BeliefSpanDecoder(embedding, vocab_size_oov, bspn_mode, Wgen=None, dropout=0.0)
Bases: torch.nn.modules.module.Module

forward(inputs, hidden_states, dec_last_w, dec_last_h, first_turn, first_step, mode='train')
- Parameters
inputs – inputs dict
hidden_states – hidden states dict, size [B, T, H]
dec_last_w – word index of the last decoding step
dec_last_h – hidden state of the last decoding step
first_turn – defaults to False

get_probs(inputs, hidden_states, dec_hs, first_turn=False)
class convlab2.e2e.damd.multiwoz.damd_net.Copy(hidden_size, copy_weight=1.0)
Bases: torch.nn.modules.module.Module

forward(enc_out_hs, dec_hs)
Get the unnormalized copy score.
- Parameters
enc_out_hs – [B, Tenc, H]
dec_hs – [B, Tdec, H] (Tdec = 1 at test time)
- Returns
raw_cp_score for each position, size [B, Tdec, Tenc]
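A shape-level illustration of the documented contract; the plain dot product below is a stand-in, not necessarily how Copy scores positions (copy_weight is ignored in this sketch).

    import torch

    def raw_copy_score(enc_out_hs, dec_hs):
        # enc_out_hs: [B, Tenc, H]; dec_hs: [B, Tdec, H]
        # One unnormalized score per (decoder step, encoder position).
        return torch.bmm(dec_hs, enc_out_hs.transpose(1, 2))  # [B, Tdec, Tenc]

    print(raw_copy_score(torch.randn(2, 7, 16), torch.randn(2, 3, 16)).shape)
    # torch.Size([2, 3, 7])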
class convlab2.e2e.damd.multiwoz.damd_net.DAMD(reader)
Bases: torch.nn.modules.module.Module

RL_forward(inputs, decoded, hiddens_batch, decoded_batch)
- Parameters
hiddens_batch – [B, nbest, T, H]
decoded_batch – [B, nbest, T]

RL_train(inputs, hs, hiddens_batch, decoded_batch, first_turn)
- Parameters
hiddens_batch – [B, nbest, T, H]
decoded_batch – [B, nbest, T]

addActSelection()

aspn_selection(inputs, decoded, hiddens_batch, decoded_batch)
- Parameters
hiddens_batch – [B, nbest, T, H]
decoded_batch – [B, nbest, T]

beam_decode(name, init_hidden, first_turn, inputs, hidden_states, decoded)

forward(inputs, hidden_states, first_turn, mode)
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.

greedy_decode(name, init_hidden, first_turn, inputs, hidden_states, decoded)

sampling_decode(name, init_hidden, first_turn, inputs, hidden_states, decoded)

supervised_loss(inputs, probs)

test_forward(inputs, hs, first_turn)

train_forward(inputs, hidden_states, first_turn)
Compute the required outputs for a single dialogue turn. The turn state (dict) is updated on each call.
class convlab2.e2e.damd.multiwoz.damd_net.DomainSpanDecoder(embedding, vocab_size_oov, Wgen=None, dropout=0.0)
Bases: torch.nn.modules.module.Module

forward(inputs, hidden_states, dec_last_w, dec_last_h, first_turn, first_step, mode='train')
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.

get_probs(inputs, hidden_states, dec_hs, first_turn=False)
class convlab2.e2e.damd.multiwoz.damd_net.LayerNormalization(hidden_size, eps=0.001)
Bases: torch.nn.modules.module.Module
Layer normalization module.

forward(z)
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
class convlab2.e2e.damd.multiwoz.damd_net.MultiLayerGRUwithLN(input_size, hidden_size, layer_num=1, bidirec=False, layer_norm=False, skip_connect=False, dropout=0.0)
Bases: torch.nn.modules.module.Module
Multi-layer GRU with layer normalization.

forward(inputs, hidden=None)
- Parameters
inputs – tensor of size [B, T, H]
hidden – tensor of size [n_layer * bi-direc, B, H]
- Returns
in_l: tensor of size [B, T, H * bi-direc]; hs: tensor of size [n_layer * bi-direc, B, H]
class convlab2.e2e.damd.multiwoz.damd_net.ResponseDecoder(embedding, vocab_size_oov, Wgen=None, dropout=0.0)
Bases: torch.nn.modules.module.Module

forward(inputs, hidden_states, dec_last_w, dec_last_h, first_turn, first_step, mode='train')
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.

get_probs(inputs, hidden_states, dec_hs, first_turn=False)
- Parameters
dec_hs – decoder hidden states [B, Tdec, H]
dec_ws – word index [B, Tdec]
class convlab2.e2e.damd.multiwoz.damd_net.biGRUencoder(embedding)
Bases: torch.nn.modules.module.Module

forward(input_seqs, hidden=None)
Forward procedure. Inputs do not need to be sorted by length.
- Parameters
input_seqs – Variable of size [B, T]
hidden –
- Returns
outputs [B, T, H], hidden [n_layer * bi-direc, B, H]
convlab2.e2e.damd.multiwoz.damd_net.cuda_(var)
convlab2.e2e.damd.multiwoz.damd_net.get_final_scores(raw_scores, word_onehot_input, input_idx_oov, vocab_size_oov)
- Parameters
raw_scores – list of tensors of size [B, Tdec, V], [B, Tdec, Tenc1], [B, Tdec, Tenc1], …
word_onehot_input – list of np arrays of size [B, Tenci, V + Tenci]
input_idx_oov – list of np arrays of size [B, Tenc]
vocab_size_oov –
- Returns
tensor of size [B, Tdec, vocab_size_oov]
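A hedged sketch of the pointer/copy combination these shapes suggest: generation scores over the base vocabulary and copy scores over encoder positions are merged into one distribution over the OOV-extended vocabulary. The joint softmax, the single encoder source, and the scatter-add placement are assumptions, not a description of the actual implementation.

    import torch
    import torch.nn.functional as F

    def combine_scores(gen_scores, copy_scores, input_idx_oov, vocab_size_oov):
        # gen_scores:    [B, Tdec, V]     raw generation scores
        # copy_scores:   [B, Tdec, Tenc]  raw copy scores per encoder position
        # input_idx_oov: [B, Tenc]        source token ids in the extended vocab
        B, Tdec, V = gen_scores.shape
        Tenc = copy_scores.size(-1)
        probs = F.softmax(torch.cat([gen_scores, copy_scores], dim=-1), dim=-1)
        gen_p, copy_p = probs[..., :V], probs[..., V:]
        final = torch.zeros(B, Tdec, vocab_size_oov)
        final[..., :V] += gen_p                               # generation mass
        index = input_idx_oov.unsqueeze(1).expand(B, Tdec, Tenc)
        final.scatter_add_(2, index, copy_p)                  # copy mass onto source ids
        return final                                          # [B, Tdec, vocab_size_oov]

    out = combine_scores(torch.randn(2, 3, 10), torch.randn(2, 3, 4),
                         torch.randint(0, 12, (2, 4)), vocab_size_oov=12)
    print(out.shape)  # torch.Size([2, 3, 12])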
convlab2.e2e.damd.multiwoz.damd_net.get_one_hot_input(x_input_np)
Build a sparse one-hot representation of the input.
- Parameters
x_input_np – [B, Tenc]
- Returns
tensor of size [B, Tenc, V + Tenc]
convlab2.e2e.damd.multiwoz.damd_net.init_gru(gru)
convlab2.e2e.damd.multiwoz.damd_net.label_smoothing(labels, smoothing_rate, vocab_size_oov)
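The signature reads like standard label smoothing over the OOV-extended vocabulary; a generic sketch under that assumption (smoothed one-hot targets), not the function's actual code.

    import torch

    def smooth_labels(labels, smoothing_rate, vocab_size_oov):
        # labels: [B, T] integer targets -> smoothed targets [B, T, vocab_size_oov]
        one_hot = torch.zeros(*labels.shape, vocab_size_oov)
        one_hot.scatter_(2, labels.unsqueeze(-1), 1.0)
        uniform = torch.full_like(one_hot, 1.0 / vocab_size_oov)
        return (1.0 - smoothing_rate) * one_hot + smoothing_rate * uniform

    targets = smooth_labels(torch.tensor([[2, 0, 5]]), 0.1, vocab_size_oov=8)
    print(targets.sum(-1))  # each target row still sums to 1.0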
convlab2.e2e.damd.multiwoz.damd_net.update_input(name, inputs)
convlab2.e2e.damd.multiwoz.db_ops module
class convlab2.e2e.damd.multiwoz.db_ops.MultiWozDB(dir, db_paths)
Bases: object

addBookingPointer(constraint, domain, book_state)
Add information about the availability of the booking option.

addDBPointer(domain, match_num, return_num=False)
Create a database pointer for all related domains.

get_match_num(constraints, return_entry=False)
Create a database pointer for all related domains.

oneHotVector(domain, num)
Return the number of available entities for a particular domain.

pointerBack(vector, domain)

queryJsons(domain, constraints, exactly_match=True, return_name=False)
Return the list of entities for a given domain based on the annotated belief state. constraints: dict, e.g. {'pricerange': 'cheap', 'area': 'west'}.
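Usage sketch; the directory and the per-domain JSON mapping below are hypothetical (in the package they come from convlab2.e2e.damd.multiwoz.config).

    from convlab2.e2e.damd.multiwoz.db_ops import MultiWozDB

    # Hypothetical database layout: domain name -> database JSON file.
    db = MultiWozDB('db/', {'restaurant': 'restaurant_db.json',
                            'hotel': 'hotel_db.json'})

    # Entities matching the annotated belief-state constraints for a domain.
    matches = db.queryJsons('restaurant', {'pricerange': 'cheap', 'area': 'west'})
    print(len(matches))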
convlab2.e2e.damd.multiwoz.ontology module
convlab2.e2e.damd.multiwoz.reader module
class convlab2.e2e.damd.multiwoz.reader.MultiWozReader
Bases: object

aspan_to_act_list(aspan)

bspan_to_DBpointer(bspan, turn_domain)

bspan_to_constraint_dict(bspan, bspn_mode='bspn')

delex_by_valdict(text)

dspan_to_domain(dspan)

prepare_input_np(u, u_delex)

preprocess_utterance(user)

reset()

restore(resp, domain, constraint_dict)

wrap_result(result_dict, eos_syntax=None)
convlab2.e2e.damd.multiwoz.utils module
class convlab2.e2e.damd.multiwoz.utils.Vocab(vocab_size=0)
Bases: object

add_word(word)

construct()

decode(idx, indicate_oov=False)

encode(word, include_oov=True)

has_word(word)

load_vocab(vocab_path)

nl_decode(l, eos=None)

oov_idx_map(idx)

save_vocab(vocab_path)

sentence_decode(index_list, eos=None, indicate_oov=False)

sentence_encode(word_list)

sentence_oov_map(index_list)
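Usage sketch of the vocabulary helper; the vocabulary file path is hypothetical and must point to a vocabulary saved by save_vocab (or shipped with the DAMD data).

    from convlab2.e2e.damd.multiwoz.utils import Vocab

    vocab = Vocab(vocab_size=3000)
    vocab.load_vocab('data/multi-woz-processed/vocab')  # hypothetical path

    ids = vocab.sentence_encode(['i', 'need', 'a', 'cheap', 'restaurant'])
    print(ids)
    print(vocab.sentence_decode(ids))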
convlab2.e2e.damd.multiwoz.utils.f1_score(label_list, pred_list)
convlab2.e2e.damd.multiwoz.utils.get_glove_matrix(glove_path, vocab, initial_embedding_np)
Return a GloVe embedding matrix.
- Returns
np array of size [V, E]
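Usage sketch; the GloVe file path, the vocabulary path, and the embedding sizes are hypothetical, and the initial embedding is assumed to have shape [V, E] matching the vocabulary.

    import numpy as np
    from convlab2.e2e.damd.multiwoz.utils import Vocab, get_glove_matrix

    vocab = Vocab(vocab_size=3000)
    vocab.load_vocab('data/multi-woz-processed/vocab')  # hypothetical path

    init = np.random.normal(0, 0.1, size=(3000, 50)).astype('float32')
    emb = get_glove_matrix('data/glove/glove.6B.50d.txt', vocab, init)
    print(emb.shape)  # expected (V, E)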
convlab2.e2e.damd.multiwoz.utils.padSeqs(sequences, maxlen=None, truncated=False, pad_method='post', trunc_method='pre', dtype='int32', value=0.0)
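The keyword names suggest Keras-style padding; a usage sketch under that assumption (the exact output shape and truncation semantics are not documented here).

    from convlab2.e2e.damd.multiwoz.utils import padSeqs

    batch = [[4, 9, 2], [7, 1], [3, 5, 6, 8]]
    # Expected: a [3, 4] integer array, each sequence padded at the end with zeros
    # (assuming 'post' padding keeps the original tokens at the front).
    print(padSeqs(batch, pad_method='post', value=0))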
convlab2.e2e.damd.multiwoz.utils.position_encoding_init(self, n_position, d_pos_vec)
convlab2.e2e.damd.multiwoz.utils.py2np(list)
convlab2.e2e.damd.multiwoz.utils.write_dict(fn, dic)