What are "frame-based feature vectors"?
I'm taking a comp ling course on NLP and the prof kept going on nonstop about "frame-based feature vectors" -- but I have no clue at all what these could mean. Can someone give an example suitable for a complete newbie? Thanks a lot!
This is (probably) referring to feature vectors that indicate the "subcategorization frame," or immediate dependents, of the verb. For example, in the sentence "I saw the cat," the verb frame would indicate that "I" was a dependent on the left of "saw" and "cat" was a dependent on the right side of "saw." The feature vectors may also encode the type of relationship (i.e. that "I" is a noun subject and "cat" is a noun object) and/or encode the part-of-speech of the dependents rather than the words themselves.
I should mention that this is an operationalization of "subcategorization frame" that is slightly different from what syntacticians mean, which is why people usually say "frame-based" or "verb frame" instead of "subcategorization frame." NLP applications that exploit verb frames usually have a relatively shallow notion of syntax that results in more than one verb frame for some subcategorization frames. The passive alternation is usually ignored, for example, and so a verb in a passive sentence would have a different verb frame from the same verb in an active sentence, while syntactic analyses typically abstract over the passive alternation and use the same subcategorization frame for both sentences.
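To make the first answer concrete, here's a minimal sketch of turning a verb's dependents into a sparse feature vector. The hand-written parse of "I saw the cat" and the feature names (`left_word=...`, `right_rel=...`, etc.) are illustrative inventions, not the output of any standard toolkit:

```python
def frame_features(verb, dependents):
    """Build a sparse feature dict for one verb token.

    dependents: list of (word, pos, relation, side) tuples describing
    the verb's immediate dependents.
    """
    feats = {}
    for word, pos, rel, side in dependents:
        feats[f"{side}_word={word}"] = 1  # lexicalised variant: the word itself
        feats[f"{side}_pos={pos}"] = 1    # POS-only variant of the same slot
        feats[f"{side}_rel={rel}"] = 1    # type of relationship to the verb
    return feats

# "I saw the cat": "I" is a left dependent (noun subject) of "saw",
# "cat" is a right dependent (noun object).
vec = frame_features("saw", [
    ("I", "PRON", "nsubj", "left"),
    ("cat", "NOUN", "obj", "right"),
])
```

Note that, as the answer says, a passive sentence like "the cat was seen" would produce a different feature dict ("cat" as a left dependent), even though a syntactician would assign both sentences the same subcategorization frame.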
You mean frame-based feature vectors in speech recognition, right? Once your audio signal has been converted into digital form (sampled, quantised, etc.), you slice the signal into short windows, or frames. For each frame you compute a vector based on a pre-determined set of features you've chosen to extract from the speech signal. Those feature vectors are the data that feeds the acoustic model (and, further downstream, the language model), from which the most likely phonemes are derived.
It's the framing process I highlighted. People usually take frames of around 10 ms of the audio signal and put that data into a feature vector representing that time window (frame).
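The framing step described above can be sketched like this. The 16 kHz sample rate, the random dummy signal, and the single log-energy feature per frame are all assumptions for illustration; real recognisers extract richer per-frame features such as MFCCs:

```python
import numpy as np

sample_rate = 16000                    # assumed: 16 kHz audio
frame_len = int(0.010 * sample_rate)   # 10 ms window -> 160 samples per frame

# One second of dummy "digitised audio" standing in for a real recording.
signal = np.random.randn(sample_rate)

# Chop the signal into non-overlapping 10 ms frames
# (real systems typically use overlapping, windowed frames).
n_frames = len(signal) // frame_len
frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)

# One feature vector per frame; here just a single log-energy value.
features = np.log(np.sum(frames ** 2, axis=1) + 1e-10)
```

Each row of `frames` is one 10 ms window, and each entry of `features` is that frame's feature vector (here a single number, but in practice a dozen or more coefficients).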