Task-Oriented Dialogue as Dataflow Synthesis

License: MIT

This repository contains tools and instructions for reproducing the experiments in the paper Task-Oriented Dialogue as Dataflow Synthesis (TACL 2020). If you use any source code or data included in this toolkit in your work, please cite the following paper.

```bib
@article{SMDataflow2020,
  author = {{Semantic Machines} and Andreas, Jacob and Bufe, John and Burkett, David and Chen, Charles and Clausman, Josh and Crawford, Jean and Crim, Kate and DeLoach, Jordan and Dorner, Leah and Eisner, Jason and Fang, Hao and Guo, Alan and Hall, David and Hayes, Kristin and Hill, Kellie and Ho, Diana and Iwaszuk, Wendy and Jha, Smriti and Klein, Dan and Krishnamurthy, Jayant and Lanman, Theo and Liang, Percy and Lin, Christopher H. and Lintsbakh, Ilya and McGovern, Andy and Nisnevich, Aleksandr and Pauls, Adam and Petters, Dmitrij and Read, Brent and Roth, Dan and Roy, Subhro and Rusak, Jesse and Short, Beth and Slomin, Div and Snyder, Ben and Striplin, Stephon and Su, Yu and Tellman, Zachary and Thomson, Sam and Vorobev, Andrei and Witoszko, Izabela and Wolfe, Jason and Wray, Abby and Zhang, Yuchen and Zotov, Alexander},
  title = {Task-Oriented Dialogue as Dataflow Synthesis},
  journal = {Transactions of the Association for Computational Linguistics},
  volume = {8},
  pages = {556--571},
  year = {2020},
  month = sep,
  url = {https://doi.org/10.1162/tacl_a_00333},
  abstract = {We describe an approach to task-oriented dialogue in which dialogue state is represented as a dataflow graph. A dialogue agent maps each user utterance to a program that extends this graph. Programs include metacomputation operators for reference and revision that reuse dataflow fragments from previous turns. Our graph-based state enables the expression and manipulation of complex user intents, and explicit metacomputation makes these intents easier for learned models to predict. We introduce a new dataset, SMCalFlow, featuring complex dialogues about events, weather, places, and people. Experiments show that dataflow graphs and metacomputation substantially improve representability and predictability in these natural dialogues. Additional experiments on the MultiWOZ dataset show that our dataflow representation enables an otherwise off-the-shelf sequence-to-sequence model to match the best existing task-specific state tracking model. The SMCalFlow dataset, code for replicating experiments, and a public leaderboard are available at \url{https://www.microsoft.com/en-us/research/project/dataflow-based-dialogue-semantic-machines}.},
}
```

Understand SMCalFlow Programs

Please read this document to understand the syntax of SMCalFlow programs, and read this document to understand their semantics.

Install

```bash
# (Recommended) Create a virtual environment
virtualenv --python=python3 env
source env/bin/activate

# Install the sm-dataflow package and its core dependencies
pip install git+https://github.com/microsoft/task_oriented_dialogue_as_dataflow_synthesis.git

# Download the spaCy model for tokenization
python -m spacy download en_core_web_md-2.2.0 --direct

# Install OpenNMT-py and PyTorch for training and running the models
pip install OpenNMT-py==1.0.0 torch==1.4.0
```

* Our experiments used OpenNMT-py 1.0.0 with PyTorch 1.4.0; other versions have not been tested. You can skip these two packages if you don't need to train or run the models.
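
As a quick sanity check after installation, the following snippet (a minimal sketch, assuming the packages above installed cleanly) verifies that the core dependencies import and the spaCy model loads:

```python
# Optional installation sanity check.
import onmt    # provided by the OpenNMT-py package
import spacy
import torch

print("PyTorch:", torch.__version__)       # expected: 1.4.0
nlp = spacy.load("en_core_web_md")         # the spaCy model downloaded above
doc = nlp("Schedule a meeting with Alice tomorrow.")
print([token.text for token in doc])
```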

SMCalFlow Experiments

Follow the steps below to reproduce the results reported in the paper (Table 2).

NOTE: We highly recommend following the instructions for the leaderboard to report your results for consistency. If you use your own evaluation script, please pay attention to the notes in Step 2 and Step 7.

1. Download and unzip the SMCalFlow 1.0 dataset.

    ```bash
    dataflow_dialogues_dir="output/dataflow_dialogues"
    mkdir -p "${dataflow_dialogues_dir}"

    cd "${dataflow_dialogues_dir}"
    # Download the dataset: smcalflow.full.data.tgz or smcalflow.inlined.data.tgz.
    # PATH_TO_DATA_TGZ is the path to the tgz file of the corresponding dataset.
    tar -xvzf PATH_TO_DATA_TGZ
    ```

    * SMCalFlow 1.0 links
      * [smcalflow.full.data.tgz](https://smresearchstorage.blob.core.windows.net/smcalflow-public/smcalflow.full.data.tgz)
      * [smcalflow.inlined.data.tgz](https://smresearchstorage.blob.core.windows.net/smcalflow-public/smcalflow.inlined.data.tgz)
    * SMCalFlow 2.0 can be found under the [datasets](./datasets) folder.
    * The dataset is distributed under the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode) license.
2. Compute data statistics.

    ```bash
    dataflow_dialogues_stats_dir="output/dataflow_dialogues_stats"
    mkdir -p "${dataflow_dialogues_stats_dir}"
    python -m dataflow.analysis.compute_data_statistics \
        --dataflow_dialogues_dir ${dataflow_dialogues_dir} \
        --subset train valid \
        --outdir ${dataflow_dialogues_stats_dir}
    ```

    * Basic statistics

      |           | num_dialogues | num_turns | num_kept_turns | num_skipped_turns | num_refer_turns | num_revise_turns |
      | --------- | :-:    | :-:     | :-:     | :-:    | :-:    | :-:    |
      | **train** | 32,647 | 133,821 | 121,200 | 12,621 | 33,011 | 9,315  |
      | **valid** | 3,649  | 14,757  | 13,499  | 1,258  | 3,544  | 1,052  |
      | **test**  | 5,211  | 22,012  | 21,224  | 788    | 8,965  | 3,315  |
      | **all**   | 41,517 | 170,590 | 155,923 | 14,667 | 45,520 | 13,682 |

    * We currently do not release the test set, but we report its statistics here.
    * **NOTE**: A small number of turns (`num_skipped_turns` in the table) exist solely to establish dialogue context and should not be directly trained or tested on. The dataset statistics reported in the paper are based on non-skipped turns only.
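
    If you load the dialogues yourself instead of using the provided scripts, you will want to drop skipped turns before training or evaluation. A minimal sketch, assuming each line of `*.dataflow_dialogues.jsonl` is a dialogue object with a `turns` list whose entries carry a boolean `skip` flag (field names are assumptions; check the actual schema):

    ```python
    import json

    kept, skipped = 0, 0
    with open("output/dataflow_dialogues/train.dataflow_dialogues.jsonl") as f:
        for line in f:
            dialogue = json.loads(line)
            for turn in dialogue["turns"]:
                if turn.get("skip"):   # turns that only establish dialogue context
                    skipped += 1
                else:
                    kept += 1
    print(f"kept={kept} skipped={skipped}")  # should roughly match the table above
    ```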

3. Prepare text data for the OpenNMT toolkit.

    ```bash
    onmt_text_data_dir="output/onmt_text_data"
    mkdir -p "${onmt_text_data_dir}"
    for subset in "train" "valid"; do
        python -m dataflow.onmt_helpers.create_onmt_text_data \
            --dialogues_jsonl ${dataflow_dialogues_dir}/${subset}.dataflow_dialogues.jsonl \
            --num_context_turns 2 \
            --include_program \
            --include_described_entities \
            --onmt_text_data_outbase ${onmt_text_data_dir}/${subset}
    done
    ```
    * We use `--include_program` to add the gold program of the context turns.
    * We use `--include_described_entities` to add the entities (e.g., an email address) described in the generation outcome for the context turns. These entities can appear in the "inlined" programs for the current turn, so we include them in the source sequence so that the seq2seq model can produce such tokens via a copy mechanism (see the sketch after this list).
    * You can vary the number of context turns by changing `--num_context_turns`.
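
    For intuition only, here is a minimal sketch of how such a source sequence could be assembled from context turns, their gold programs, and described entities. The separator tokens, field names, and program text below are illustrative assumptions, not the exact format produced by `create_onmt_text_data`:

    ```python
    # Hypothetical illustration of a copy-friendly source sequence: context programs and
    # described entities are serialized next to the utterances so the seq2seq model can
    # copy entity tokens into the predicted program. Format details are assumptions.
    from typing import List


    def build_source_sequence(context_turns: List[dict], current_user_utterance: str) -> str:
        pieces = []
        for turn in context_turns:
            pieces.append("__User " + turn["user_utterance"])
            pieces.append("__Program " + turn["gold_program"])                   # --include_program
            pieces.append("__Entities " + " ".join(turn["described_entities"]))  # --include_described_entities
            pieces.append("__Agent " + turn["agent_utterance"])
        pieces.append("__User " + current_user_utterance)
        return " ".join(pieces)


    context = [{
        "user_utterance": "Who is attending the standup?",
        "gold_program": "( Yield ( Event.attendees ... ) )",
        "described_entities": ["entity_12345"],
        "agent_utterance": "Alice and Bob are attending.",
    }]
    print(build_source_sequence(context, "Invite the first one to lunch."))
    ```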
4. Compute statistics for the created OpenNMT text data.

    ```bash
    onmt_data_stats_dir="output/onmt_data_stats"
    mkdir -p "${onmt_data_stats_dir}"
    python -m dataflow.onmt_helpers.compute_onmt_data_stats \
        --text_data_dir ${onmt_text_data_dir} \
        --suffix src src_tok tgt \
        --subset train valid \
        --outdir ${onmt_data_stats_dir}
    ```
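
    The training step below extracts the maximum sequence lengths from these statistics files with `jq '."100"'`. A Python equivalent, under the assumption that each `*.ntokens_stats.json` maps percentile strings to token counts:

    ```python
    import json

    with open("output/onmt_data_stats/train.src_tok.ntokens_stats.json") as f:
        stats = json.load(f)

    # "100" is the 100th percentile, i.e., the longest training sequence; it is used
    # as the sequence-length cutoff when binarizing the data in the next step.
    src_tok_max_ntokens = stats["100"]
    print(src_tok_max_ntokens)
    ```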
    
5. Train OpenNMT models. You can also skip this step and instead download the trained model from the table below.

    ```bash
    onmt_binarized_data_dir="output/onmt_binarized_data"
    mkdir -p "${onmt_binarized_data_dir}"

    src_tok_max_ntokens=$(jq '."100"' ${onmt_data_stats_dir}/train.src_tok.ntokens_stats.json)
    tgt_max_ntokens=$(jq '."100"' ${onmt_data_stats_dir}/train.tgt.ntokens_stats.json)

    # create OpenNMT binarized data
    onmt_preprocess \
        --dynamic_dict \
        --train_src ${onmt_text_data_dir}/train.src_tok \
        --train_tgt ${onmt_text_data_dir}/train.tgt \
        --valid_src ${onmt_text_data_dir}/valid.src_tok \
        --valid_tgt ${onmt_text_data_dir}/valid.tgt \
        --src_seq_length ${src_tok_max_ntokens} \
        --tgt_seq_length ${tgt_max_ntokens} \
        --src_words_min_frequency 0 \
        --tgt_words_min_frequency 0 \
        --save_data ${onmt_binarized_data_dir}/data

    # extract pretrained GloVe 840B embeddings (https://nlp.stanford.edu/projects/glove/)
    glove_840b_dir="output/glove_840b"
    mkdir -p "${glove_840b_dir}"
    wget -O ${glove_840b_dir}/glove.840B.300d.zip http://nlp.stanford.edu/data/glove.840B.300d.zip
    unzip ${glove_840b_dir}/glove.840B.300d.zip -d ${glove_840b_dir}

    onmt_embeddings_dir="output/onmt_embeddings"
    mkdir -p "${onmt_embeddings_dir}"
    python -m dataflow.onmt_helpers.embeddings_to_torch \
        -emb_file_both ${glove_840b_dir}/glove.840B.300d.txt \
        -dict_file ${onmt_binarized_data_dir}/data.vocab.pt \
        -output_file ${onmt_embeddings_dir}/embeddings

    # train OpenNMT models
    onmt_models_dir="output/onmt_models"
    mkdir -p "${onmt_models_dir}"

    batch_size=64
    train_num_datapoints=$(jq '.train' ${onmt_data_stats_dir}/nexamples.json)
    # validate approximately at each epoch
    valid_steps=$(python3 -c "from math import ceil; print(ceil(${train_num_datapoints}/${batch_size}))")

    onmt_train \
        --encoder_type brnn \
        --decoder_type rnn \
        --rnn_type LSTM \
        --global_attention general \
        --global_attention_function softmax \
        --generator_function softmax \
        --copy_attn_type general \
        --copy_attn \
        --seed 1 \
        --optim adam \
        --learning_rate 0.001 \
        --early_stopping 2 \
        --batch_size ${batch_size} \
        --valid_batch_size 8 \
        --valid_steps ${valid_steps} \
        --save_checkpoint_steps ${valid_steps} \
        --data ${onmt_binarized_data_dir}/data \
        --pre_word_vecs_enc ${onmt_embeddings_dir}/embeddings.enc.pt \
        --pre_word_vecs_dec ${onmt_embeddings_dir}/embeddings.dec.pt \
        --word_vec_size 300 \
        --attention_dropout 0 \
        --dropout 0.5 \
        --layers ??? \
        --rnn_size ??? \
        --gpu_ranks 0 \
        --world_size 1 \
        --save_model ${onmt_models_dir}/checkpoint
    ```

    * Hyperparameters for models reported in Table 2 of the paper. Replace the `???` placeholders in the command above with the values for the model you want to train.

      |          | --layers | --rnn_size | model |
      | -------- | :-:      | :-:        | :-:   |
      | dataflow | 2        | 384        | link  |
      | inline   | 3        | 384        | link  |
6. Make predictions using a trained OpenNMT model. You need to replace `checkpoint_last.pt` in the following script with the final model you get from the previous step.

    ```bash
    onmt_translate_outdir="output/onmt_translate_output"
    mkdir -p "${onmt_translate_outdir}"

    onmt_model_pt="${onmt_models_dir}/checkpoint_last.pt"
    nbest=5
    tgt_max_ntokens=$(jq '."100"' ${onmt_data_stats_dir}/train.tgt.ntokens_stats.json)

    # predict programs using a trained OpenNMT model
    onmt_translate \
        --model ${onmt_model_pt} \
        --max_length ${tgt_max_ntokens} \
        --src ${onmt_text_data_dir}/valid.src_tok \
        --replace_unk \
        --n_best ${nbest} \
        --batch_size 8 \
        --beam_size 10 \
        --gpu 0 \
        --report_time \
        --output ${onmt_translate_outdir}/valid.nbest
    ```

7. Compute the exact-match accuracy (taking into account whether `program_execution_oracle.refer_are_correct` is `true`).

    ```bash
    evaluation_outdir="output/evaluation_output"
    mkdir -p "${evaluation_outdir}"

    # create the prediction report
    python -m dataflow.onmt_helpers.create_onmt_prediction_report \
        --dialogues_jsonl ${dataflow_dialogues_dir}/valid.dataflow_dialogues.jsonl \
        --datum_id_jsonl ${onmt_text_data_dir}/valid.datum_id \
        --src_txt ${onmt_text_data_dir}/valid.src_tok \
        --ref_txt ${onmt_text_data_dir}/valid.tgt \
        --nbest_txt ${onmt_translate_outdir}/valid.nbest \
        --nbest ${nbest} \
        --outbase ${evaluation_outdir}/valid

    # evaluate the predictions (all turns)
    python -m dataflow.onmt_helpers.evaluate_onmt_predictions \
        --prediction_report_tsv ${evaluation_outdir}/valid.prediction_report.tsv \
        --scores_json ${evaluation_outdir}/valid.all.scores.json

    # evaluate the predictions (refer turns)
    python -m dataflow.onmt_helpers.evaluate_onmt_predictions \
        --prediction_report_tsv ${evaluation_outdir}/valid.prediction_report.tsv \
        --datum_ids_json ${dataflow_dialogues_stats_dir}/valid.refer_turn_ids.jsonl \
        --scores_json ${evaluation_outdir}/valid.refer_turns.scores.json

    # evaluate the predictions (revise turns)
    python -m dataflow.onmt_helpers.evaluate_onmt_predictions \
        --prediction_report_tsv ${evaluation_outdir}/valid.prediction_report.tsv \
        --datum_ids_json ${dataflow_dialogues_stats_dir}/valid.revise_turn_ids.jsonl \
        --scores_json ${evaluation_outdir}/valid.revise_turns.scores.json
    ```

    * **NOTE**: The numbers reported using the scripts above should match those reported in Table 2 of the paper. The leaderboard has used a slightly different evaluation script that canonicalizes both the gold and predicted programs, and thus its accuracy is slightly higher (e.g., 0.665 vs. 0.668 on the test set). To obtain the leaderboard results, add `--use_leaderboard_metric` when running `python -m dataflow.onmt_helpers.create_onmt_prediction_report` to create the report.
8. Calculate the statistical significance for two different experiments.

    ```bash
    analysis_outdir="output/analysis_output"
    mkdir -p "${analysis_outdir}"
    python -m dataflow.analysis.calculate_statistical_significance \
        --exp0_prediction_report_tsv ${exp0_evaluation_outdir}/valid.prediction_report.tsv \
        --exp1_prediction_report_tsv ${exp1_evaluation_outdir}/valid.prediction_report.tsv \
        --scores_json ${analysis_outdir}/exp0_vs_exp1.valid.scores.json
    ```
    * The `exp0_evaluation_outdir` and `exp1_evaluation_outdir` are the `evaluation_outdir` in Step 7 for the corresponding experiments.
    * You can also provide `--datum_ids_jsonl` to carry out the significance test on a subset of turns.
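
    For intuition, significance testing here compares the per-turn correctness of two systems evaluated on the same turns. The sketch below implements a simple paired permutation test over binary correctness vectors; it is only an illustration and not necessarily the exact test used by `calculate_statistical_significance`:

    ```python
    import random


    def paired_permutation_test(correct0, correct1, num_samples=10000, seed=0):
        """Two-sided p-value for the accuracy difference between two systems
        scored on the same turns; correct0/correct1 are parallel lists of 0/1."""
        assert len(correct0) == len(correct1)
        rng = random.Random(seed)
        n = len(correct0)
        observed = abs(sum(correct0) - sum(correct1)) / n
        hits = 0
        for _ in range(num_samples):
            diff = 0
            for a, b in zip(correct0, correct1):
                if rng.random() < 0.5:  # randomly swap the paired outcomes
                    a, b = b, a
                diff += a - b
            if abs(diff) / n >= observed:
                hits += 1
        return hits / num_samples


    # toy usage with made-up correctness vectors
    print(paired_permutation_test([1, 1, 0, 1, 0, 1], [1, 0, 0, 1, 0, 0]))
    ```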

MultiWOZ Experiments

1. Download the MultiWOZ dataset and convert it to dataflow programs.

    ```bash
    # creates TRADE-processed dialogues
    raw_trade_dialogues_dir="output/trade_dialogues"
    mkdir -p "${raw_trade_dialogues_dir}"
    python -m dataflow.multiwoz.trade_dst.create_data \
        --use_multiwoz_2_1 \
        --output_dir ${raw_trade_dialogues_dir}

    # patch TRADE dialogues
    patched_trade_dialogues_dir="output/patched_trade_dialogues"
    mkdir -p "${patched_trade_dialogues_dir}"
    for subset in "train" "dev" "test"; do
        python -m dataflow.multiwoz.patch_trade_dialogues \
            --trade_data_file ${raw_trade_dialogues_dir}/${subset}_dials.json \
            --outbase ${patched_trade_dialogues_dir}/${subset}
    done
    ln -sr ${patched_trade_dialogues_dir}/dev_dials.json ${patched_trade_dialogues_dir}/valid_dials.json

    # create dataflow programs
    dataflow_dialogues_dir="output/dataflow_dialogues"
    mkdir -p "${dataflow_dialogues_dir}"
    for subset in "train" "valid" "test"; do
        python -m dataflow.multiwoz.create_programs \
            --trade_data_file ${patched_trade_dialogues_dir}/${subset}_dials.json \
            --outbase ${dataflow_dialogues_dir}/${subset}
    done
    ```

    * To create programs that inline `refer` calls, add `--no_refer` when running the `dataflow.multiwoz.create_programs` command.
    * To create programs that inline both `refer` and `revise` calls, add `--no_refer --no_revise`.
  2. Prepare text data for the OpenNMT toolkit.

    ```bash
    onmt_text_data_dir="output/onmt_text_data"
    mkdir -p "${onmt_text_data_dir}"
    for subset in "train" "valid" "test"; do
        python -m dataflow.onmt_helpers.create_onmt_text_data \
            --dialogues_jsonl ${dataflow_dialogues_dir}/${subset}.dataflow_dialogues.jsonl \
            --num_context_turns 2 \
            --include_agent_utterance \
            --onmt_text_data_outbase ${onmt_text_data_dir}/${subset}
    done
    ```
    * We use `--include_agent_utterance` following the setup in TRADE (Wu et al., 2019).
    * You can vary the number of context turns by changing `--num_context_turns`.
  3. Compute statistics for the created OpenNMT text data.

    ```bash
    onmt_data_stats_dir="output/onmt_data_stats"
    mkdir -p "${onmt_data_stats_dir}"
    python -m dataflow.onmt_helpers.compute_onmt_data_stats \
        --text_data_dir ${onmt_text_data_dir} \
        --suffix src src_tok tgt \
        --subset train valid test \
        --outdir ${onmt_data_stats_dir}
    ```
4. Train OpenNMT models. You can also skip this step and instead download the trained models from the table below.

    ```bash
    onmt_binarized_data_dir="output/onmt_binarized_data"
    mkdir -p "${onmt_binarized_data_dir}"

    # create OpenNMT binarized data
    src_tok_max_ntokens=$(jq '."100"' ${onmt_data_stats_dir}/train.src_tok.ntokens_stats.json)
    tgt_max_ntokens=$(jq '."100"' ${onmt_data_stats_dir}/train.tgt.ntokens_stats.json)

    onmt_preprocess \
        --dynamic_dict \
        --train_src ${onmt_text_data_dir}/train.src_tok \
        --train_tgt ${onmt_text_data_dir}/train.tgt \
        --valid_src ${onmt_text_data_dir}/valid.src_tok \
        --valid_tgt ${onmt_text_data_dir}/valid.tgt \
        --src_seq_length ${src_tok_max_ntokens} \
        --tgt_seq_length ${tgt_max_ntokens} \
        --src_words_min_frequency 0 \
        --tgt_words_min_frequency 0 \
        --save_data ${onmt_binarized_data_dir}/data

    # extract pretrained GloVe 6B embeddings
    glove_6b_dir="output/glove_6b"
    mkdir -p "${glove_6b_dir}"
    wget -O ${glove_6b_dir}/glove.6B.zip http://nlp.stanford.edu/data/glove.6B.zip
    unzip ${glove_6b_dir}/glove.6B.zip -d ${glove_6b_dir}

    onmt_embeddings_dir="output/onmt_embeddings"
    mkdir -p "${onmt_embeddings_dir}"
    python -m dataflow.onmt_helpers.embeddings_to_torch \
        -emb_file_both ${glove_6b_dir}/glove.6B.300d.txt \
        -dict_file ${onmt_binarized_data_dir}/data.vocab.pt \
        -output_file ${onmt_embeddings_dir}/embeddings

    # train OpenNMT models
    onmt_models_dir="output/onmt_models"
    mkdir -p "${onmt_models_dir}"

    batch_size=64
    train_num_datapoints=$(jq '.train' ${onmt_data_stats_dir}/nexamples.json)
    # validate approximately at each epoch
    valid_steps=$(python3 -c "from math import ceil; print(ceil(${train_num_datapoints}/${batch_size}))")

    onmt_train \
        --encoder_type brnn \
        --decoder_type rnn \
        --rnn_type LSTM \
        --global_attention general \
        --global_attention_function softmax \
        --generator_function softmax \
        --copy_attn_type general \
        --copy_attn \
        --seed 1 \
        --optim adam \
        --learning_rate 0.001 \
        --early_stopping 2 \
        --batch_size ${batch_size} \
        --valid_batch_size 8 \
        --valid_steps ${valid_steps} \
        --save_checkpoint_steps ${valid_steps} \
        --data ${onmt_binarized_data_dir}/data \
        --pre_word_vecs_enc ${onmt_embeddings_dir}/embeddings.enc.pt \
        --pre_word_vecs_dec ${onmt_embeddings_dir}/embeddings.dec.pt \
        --word_vec_size 300 \
        --attention_dropout 0 \
        --dropout ??? \
        --layers ??? \
        --rnn_size ??? \
        --gpu_ranks 0 \
        --world_size 1 \
        --save_model ${onmt_models_dir}/checkpoint
    ```

    * Hyperparameters for models reported in Table 3 of the paper. Replace the `???` placeholders in the command above with the values for the model you want to train.

      |                                         | --dropout | --layers | --rnn_size | model |
      | --------------------------------------- | :-: | :-: | :-: | :-:  |
      | dataflow (`--num_context_turns 2`)      | 0.7 | 2   | 384 | link |
      | inline refer (`--num_context_turns 4`)  | 0.3 | 3   | 320 | link |
      | inline both (`--num_context_turns 10`)  | 0.7 | 2   | 320 | link |
5. Make predictions using a trained OpenNMT model. You need to replace `checkpoint_last.pt` in the following script with the actual model you get from the previous step.

    ```bash
    onmt_translate_outdir="output/onmt_translate_output"
    mkdir -p "${onmt_translate_outdir}"

    onmt_model_pt="${onmt_models_dir}/checkpoint_last.pt"
    nbest=5
    tgt_max_ntokens=$(jq '."100"' ${onmt_data_stats_dir}/train.tgt.ntokens_stats.json)

    # predict programs on the test set using a trained OpenNMT model
    onmt_translate \
        --model ${onmt_model_pt} \
        --max_length ${tgt_max_ntokens} \
        --src ${onmt_text_data_dir}/test.src_tok \
        --replace_unk \
        --n_best ${nbest} \
        --batch_size 8 \
        --beam_size 10 \
        --gpu 0 \
        --report_time \
        --output ${onmt_translate_outdir}/test.nbest
    ```

6. Compute the exact-match accuracy of the program predictions.

    ```bash
    evaluation_outdir="output/evaluation_output"
    mkdir -p "${evaluation_outdir}"

    # create the prediction report
    python -m dataflow.onmt_helpers.create_onmt_prediction_report \
        --dialogues_jsonl ${dataflow_dialogues_dir}/test.dataflow_dialogues.jsonl \
        --datum_id_jsonl ${onmt_text_data_dir}/test.datum_id \
        --src_txt ${onmt_text_data_dir}/test.src_tok \
        --ref_txt ${onmt_text_data_dir}/test.tgt \
        --nbest_txt ${onmt_translate_outdir}/test.nbest \
        --nbest ${nbest} \
        --outbase ${evaluation_outdir}/test

    # evaluate the predictions
    python -m dataflow.onmt_helpers.evaluate_onmt_predictions \
        --prediction_report_tsv ${evaluation_outdir}/test.prediction_report.tsv \
        --scores_json ${evaluation_outdir}/test.scores.json
    ```

7. Evaluate the belief state predictions.

    ```bash
    belief_state_tracker_eval_dir="output/belief_state_tracker_eval"
    mkdir -p "${belief_state_tracker_eval_dir}"

    # creates the gold file from TRADE-preprocessed dialogues (after patch)
    python -m dataflow.multiwoz.create_belief_state_tracker_data \
        --trade_data_file ${patched_trade_dialogues_dir}/test_dials.json \
        --belief_state_tracker_data_file ${belief_state_tracker_eval_dir}/test.belief_state_tracker_data.jsonl

    # creates the hypo file from predicted programs
    python -m dataflow.multiwoz.execute_programs \
        --dialogues_file ${evaluation_outdir}/test.dataflow_dialogues.jsonl \
        --cheating_mode never \
        --outbase ${belief_state_tracker_eval_dir}/test.hypo

    python -m dataflow.multiwoz.create_belief_state_prediction_report \
        --input_data_file ${belief_state_tracker_eval_dir}/test.hypo.execution_results.jsonl \
        --format dataflow \
        --remove_none \
        --gold_data_file ${belief_state_tracker_eval_dir}/test.belief_state_tracker_data.jsonl \
        --outbase ${belief_state_tracker_eval_dir}/test

    # evaluates belief state predictions
    python -m dataflow.multiwoz.evaluate_belief_state_predictions \
        --prediction_report_jsonl ${belief_state_tracker_eval_dir}/test.prediction_report.jsonl \
        --outbase ${belief_state_tracker_eval_dir}/test
    ```

    * The scores are reported in `${belief_state_tracker_eval_dir}/test.scores.json`.
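
    MultiWOZ belief-state tracking is typically scored with joint goal accuracy: a turn counts as correct only if the full predicted slot-value set matches the gold set. A minimal sketch of that metric (the data structures below are assumptions; the script above computes the official numbers):

    ```python
    from typing import Dict, List


    def joint_goal_accuracy(gold_states: List[Dict[str, str]],
                            pred_states: List[Dict[str, str]]) -> float:
        """Fraction of turns whose predicted belief state exactly matches the gold state."""
        assert len(gold_states) == len(pred_states)
        correct = sum(1 for gold, pred in zip(gold_states, pred_states) if gold == pred)
        return correct / len(gold_states)


    # toy usage: each dict maps a "domain-slot" name to a value
    gold = [{"hotel-area": "east", "hotel-stars": "4"}, {"hotel-area": "east"}]
    pred = [{"hotel-area": "east", "hotel-stars": "4"}, {"hotel-area": "west"}]
    print(joint_goal_accuracy(gold, pred))  # 0.5
    ```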
  8. Calculate the statistical significance for two different experiments.

    ```bash
    analysis_outdir="output/analysis_output"
    mkdir -p "${analysis_outdir}"
    python -m dataflow.analysis.calculate_statistical_significance \
        --exp0_prediction_report_tsv ${exp0_evaluation_outdir}/test.prediction_report.tsv \
        --exp1_prediction_report_tsv ${exp1_evaluation_outdir}/test.prediction_report.tsv \
        --scores_json ${analysis_outdir}/exp0_vs_exp1.test.scores.json
    ```
    * The `exp0_evaluation_outdir` and `exp1_evaluation_outdir` are the `evaluation_outdir` in Step 6 for the corresponding experiments.
