tatk.nlg.template.multiwoz package¶
Submodules¶
tatk.nlg.template.multiwoz.evaluate module¶
Evaluate NLG models on utterances of the MultiWOZ test dataset.
Metrics: dataset-level BLEU-4 and slot error rate.
Usage: python evaluate.py [usr|sys|all]
tatk.nlg.template.multiwoz.evaluate.get_bleu4(dialog_acts, golden_utts, gen_utts)¶
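The module computes dataset-level BLEU-4 by pooling n-gram statistics over the whole test set rather than averaging per-sentence scores. The function below is a self-contained sketch of that metric (clipped n-gram matches pooled over the corpus, a brevity penalty, and a smoothed geometric mean of the four precisions); it is not the library's exact implementation, which may delegate to an external BLEU package.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def corpus_bleu4(references, hypotheses):
    """Dataset-level BLEU-4 over parallel lists of token lists:
    clipped n-gram matches (n = 1..4) are summed over the whole
    corpus before taking precisions, then multiplied by a brevity
    penalty."""
    match = [0] * 4
    total = [0] * 4
    ref_len = hyp_len = 0
    for ref, hyp in zip(references, hypotheses):
        ref_len += len(ref)
        hyp_len += len(hyp)
        for n in range(1, 5):
            ref_counts = Counter(ngrams(ref, n))
            hyp_counts = Counter(ngrams(hyp, n))
            # clip each hypothesis n-gram count by its reference count
            match[n - 1] += sum(min(c, ref_counts[g])
                                for g, c in hyp_counts.items())
            total[n - 1] += max(len(hyp) - n + 1, 0)
    # tiny epsilon smoothing avoids log(0) on short or disjoint corpora
    precisions = [(m + 1e-9) / (t + 1e-9) for m, t in zip(match, total)]
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / max(hyp_len, 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / 4)
```

An identical reference/hypothesis pair scores 1.0; the score drops toward 0 as the generated utterances diverge from the golden ones.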
tatk.nlg.template.multiwoz.nlg module¶
class tatk.nlg.template.multiwoz.nlg.TemplateNLG(is_user, mode='manual')¶
Bases: tatk.nlg.nlg.NLG
__init__(is_user, mode='manual')¶
- Args:
  - is_user: whether the dialog acts come from the user side or the system side.
  - mode:
    auto: templates extracted from the data without manual modification; may have no matching template for a dialog act.
    manual: manually edited templates; sometimes verbose.
    auto_manual: try the auto templates first; when no match is found, fall back to the manual templates.
    Both template sets are dicts: *_template[dialog_act][slot] is a list of templates.
generate(dialog_acts)¶
NLG for the MultiWOZ dataset.
- Args:
  - dialog_acts: {da1: [[slot1, value1], …], da2: …}
- Returns:
  the generated sentence
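To make the template-lookup idea concrete, here is a toy sketch of how generation from a *_template[dialog_act][slot] dict could work. The TEMPLATES dict and the generate function below are illustrative assumptions, not the actual tatk code or its real template data.

```python
# Hypothetical *_template[dialog_act][slot] structure: each slot of each
# dialog act maps to a list of template strings with a value placeholder.
TEMPLATES = {
    "Inform": {
        "area": ["It is in the {} part of town .", "The area is {} ."],
        "price": ["The price range is {} ."],
    }
}

def generate(dialog_acts, templates=TEMPLATES):
    """Toy NLG: dialog_acts has the form {da1: [[slot1, value1], ...], da2: ...}.
    For each (slot, value) pair, look up the template list for that dialog
    act and slot, fill the first template with the value, and join the
    resulting sentences."""
    sentences = []
    for da, slot_values in dialog_acts.items():
        for slot, value in slot_values:
            candidates = templates.get(da, {}).get(slot)
            if candidates:  # auto-extracted templates may have no match
                sentences.append(candidates[0].format(value))
    return " ".join(sentences)
```

With the dict above, generate({"Inform": [["area", "north"], ["price", "cheap"]]}) fills one template per slot and concatenates the sentences; the auto_manual mode described in __init__ would add a second lookup into a manual template dict when the first lookup fails.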
tatk.nlg.template.multiwoz.nlg.example()¶
tatk.nlg.template.multiwoz.nlg.read_json(filename)¶
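read_json is undocumented here; given its name and use for loading template dicts, it most likely just wraps json.load. The following is a guess at a minimal equivalent, not the confirmed tatk implementation:

```python
import json

def read_json(filename):
    """Load a JSON file (e.g. a template dict) into a Python object."""
    with open(filename, encoding="utf-8") as f:
        return json.load(f)
```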