I'm not sure that "chain" is the right technical term, but I think the basic idea is reasonable. If we have a variable for everything that can be either a signifier or a signified (or both), we can indicate what the signs are by drawing a black square for each sign instance and then drawing lines between the square and the signified and signifier variables that participate in the sign instance.

So for example we could have a string of "phoneme" variables, a string of "word" variables, and a string of "phrase" variables. To indicate that /kæt/ signifies the word "cat", we draw a black square between them, and draw a line from the square to the "cat" word variable, and from the square to the /k/ /æ/ /t/ phoneme variables. Similarly, we could have a "the" word variable, and an "NP" syntactic variable, and indicate that "the cat" signifies a noun phrase by putting another square between them and then drawing lines from each of "NP", "the", and "cat" to that square. So we have one black square for each sign, and it connects to variables that are part of either the signified or the signifier of that sign.
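To make the bookkeeping concrete, here is a minimal sketch (all names and data structures are my own, purely illustrative) of the picture just described: variables for phonemes, words, and phrases, plus one "black square" per sign listing the variables it connects.

```python
# Hypothetical encoding of the drawing described above.
phonemes = ["k", "ae", "t"]   # phoneme variables for /kaet/
words = ["the", "cat"]        # word variables
phrases = ["NP"]              # phrase variables

# Each sign is one black square: its signifier and signified are lists of
# (variable-type, index) references into the variable strings above.
signs = [
    # /k ae t/ (signifier) <-> the word "cat" (signified)
    {"signifier": [("phoneme", 0), ("phoneme", 1), ("phoneme", 2)],
     "signified": [("word", 1)]},
    # "the cat" (signifier) <-> the syntactic category NP (signified)
    {"signifier": [("word", 0), ("word", 1)],
     "signified": [("phrase", 0)]},
]

def edges(sign):
    """The lines drawn from this square: one per connected variable."""
    return sign["signifier"] + sign["signified"]

for i, s in enumerate(signs):
    print("sign", i, "connects to", edges(s))
```

Nothing here commits us to a particular set of variable types; semantic or social variables would just be more strings of variables, and more squares.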

If we do this, we'll be drawing a factor graph for the syntax and word segmentation of the phoneme string. Factor graphs are used in information theory to devise near-optimal codes, and are used in probability theory to define structured probability distributions over a given set of variables (in fact, those end up being the same thing). And there's no reason, in principle, not to include semantic, pragmatic, social, or other variables. And if you decide you want to explore some non-modular correspondence, such as an influence of social variables encoding interlocutor identity on syntactic ambiguity resolution, you can just draw a line between the relevant indexical variables and one or more "sign" squares whose signified is syntactic.
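Here is a minimal sketch, with made-up scores, of the probabilistic reading: each square doubles as a factor that scores its local configuration, and the joint (unnormalized) probability of a full analysis is the product over all factors.

```python
# Hypothetical sign factors: each scores one local signifier/signified match.
factors = [
    # do the phonemes /k ae t/ signify the word "cat"?
    lambda a: 1.0 if a["phonemes"] == ("k", "ae", "t") and a["word1"] == "cat" else 0.1,
    # does "the cat" signify an NP?
    lambda a: 1.0 if (a["word0"], a["word1"]) == ("the", "cat") and a["cat0"] == "NP" else 0.1,
]

def score(assignment):
    """Unnormalized probability of a full variable assignment:
    the product of the factor values."""
    p = 1.0
    for f in factors:
        p *= f(assignment)
    return p

good = {"phonemes": ("k", "ae", "t"), "word0": "the", "word1": "cat", "cat0": "NP"}
bad = dict(good, word1="dog")
print(score(good))  # 1.0
print(score(bad))   # smaller: both factors penalize the mismatched word
```

A cross-module correspondence is then just one more factor whose argument list mixes variable types (say, an indexical variable and a syntactic one); the scoring machinery does not change.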

So probabilistic models of language structure embody your intuition that there are series of signs that "feed into" each other, and provide a natural framework for formulating and evaluating hypotheses about what the signs are. Of course, actually implementing and running these models takes a lot of effort, especially if you want to include lots of "cross-module" correspondences.