I'm not aware of any, and I wouldn't trust it if there were. Counting "words" is an incredibly complicated topic, because it's not clear how to define them. Chinese and similar isolating languages (with few or no affixes) probably have the fewest distinct word forms, but those words still combine to form complex phrases. English is somewhere in the middle. At the other extreme, there are languages with thousands of word forms per lexeme (even hundreds of thousands, when many affixes can combine freely), and even some languages with recursive (repeating) morphology and therefore, technically, infinitely many word forms.
This means "how much vocabulary someone knows" is a tricky question even within a single language, and much more so across languages, where the rules of the game differ. Compare Chinese, with many small words (including many grammatical particles), to a language with highly complex morphology, where a single very long word can form a whole sentence: in the end, on average, the meaningful parts come out about the same, whether or not they count as "words".
Furthermore, studies or statistics purportedly measuring vocabulary size are almost always hopelessly biased toward "dictionary words", the sort of thing that would be found on an academic test. And that's simply not a representative way to measure how many words we know, because we know so many other things: technical jargon and slang are really the same thing, just in different domains; and consider how many proper names (people, places, brands, etc.) we know. Those things just aren't on the test. A reformulation of your question that might make sense would be to ask whether someone who knows more specifically relevant technical jargon is more productive in the workplace (for example, how many words do they know related to photocopiers?), but that seems superficial to me, at best. Regardless, anyone who needs a word for something either learns it (it's easy to learn a specific word or a specific set of words through usage, regardless of overall vocabulary size) or makes up a word or expression in its place. We use many such tentative circumlocutions every day when we don't have exact terms for things, and sometimes they later become the standard terms anyway.
More abstractly, it is unclear whether we can even define "word" cross-linguistically in any meaningful way. See Haspelmath 2011: https://doi.org/10.1515/flin.2011.002 (draft: https://www.eva.mpg.de/fileadmin/content_files/staff/haspelmt/pdf/WordSegmentation.pdf)
To get around this, you might ask something else, like how many memorized units (whether independent words or affixes) a language has. But that gets into another problem: there are many good arguments for treating some phrases as memorized units too (idioms, as well as the foundational arguments for Construction Grammar). So that doesn't really solve it either.
There may be a useful way to operationalize this question, but on the surface it isn't very meaningful. And even if you found a correlation, deeper issues of linguistic complexity arise: is it more efficient to have fewer forms, or more? It's a trade-off between speaker and listener, between being more and less explicit. Complexity is complex, and currently a controversial topic. Still, you might be interested in some of the recent literature on sociolinguistic correlates of linguistic complexity, such as the idea that having many adult second-language learners (as a result of intense contact) tends to simplify a language, so that relatively small, relatively isolated languages tend to have more "complex" grammar, by some definition or metric of "complex". McWhorter, Trudgill and others have discussed these issues in detail.