Most research approaches to intelligent, speech-enabled devices today involve training neural networks on prohibitively large numbers of examples, with little direct control over what is actually learned. This approach often cannot be applied in real-world settings, where the required amount of data does not yet exist, or where expectations of the system's performance are so high that its mistakes must somehow be corrected indirectly. Instead, real applications often rely on very inflexible, application-specific grammars.