|dc.description.abstract||The goal of this project is to use supervised learning to train a model capable
of creating logical forms, relying purely on shallow methods. For the supervised portion,
we used annotated data from the Redwoods Treebank as the source of the gold-standard
semantic representations. We then used the raw text of each sentence as input into
our shallow processing component and used the output to create our underspecified
semantic representations. We used the fully specified semantic representations to train a maximum entropy model
which then predicted which elements should be added to the underspecified representation.
This involved creating a maximum entropy model for each of the conditions
in question and then collating their predictions to produce (more) fully specified representations
from our underspecified ones.
The new representations were evaluated against the gold-standard Redwoods representations,
showing that the general approach yields reasonable results, though
there remains considerable room for improvement and future work.||en