Let's see if we can get just a little deeper into this. I'm not so sure we actually disagree.
This is how they describe the fallacy: if system X and system Y are given the same input and return the same output, and we don't know how either system processed the information, then we know absolutely nothing about their internal workings, including whether X and Y are identical or built on fundamentally different mechanisms.
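Here's a minimal sketch of that point in code (my own toy example, not theirs): two functions that are indistinguishable from the outside but work nothing alike inside.

    def sum_to_n_loop(n):
        # add the numbers 1..n one at a time
        total = 0
        for i in range(1, n + 1):
            total += i
        return total

    def sum_to_n_formula(n):
        # use the closed form n(n+1)/2; no iteration at all
        return n * (n + 1) // 2

For every non-negative n the two agree exactly, so no amount of input-output testing can tell them apart; only looking inside does.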
Sounds like the Chinese Room scenario. The problem with that-- I actually find it laughable; I remember chuckling when I read about it a year or two ago-- is the premise that there could be something that would so perfectly mirror linguistic processing. It seems like an absurd scenario-- what if, indeed, pigs could fly. But, setting that aside, yes, I think the Chinese Room is necessarily equivalent in the relevant way to linguistic processing. The assumption I take issue with is that such a room could exist, even just in theory. That's where, I think, the thought experiment tricks people. (If such a room did exist, it would indeed be a speaking room. The shock of the thought experiment comes from the fact that such a room really couldn't ever exist, not from the idea that, if it did, we'd consider it to speak.)
And that's where AI has left us for now: the fact that a computer performs well on one task (for example, chess) is evidence of nothing at all. In fact, one of the only things we can be sure of is that it isn't doing that particular thing in any way remotely like how humans do it, or in a way similar to what a real AI would do, because it is an isolated system without the proper interaction with the rest of the more complex system that would be a full "brain". You can't isolate bits and pieces of human functionality and claim any bits and pieces of true AI. For example, I could make a computer that can open a door for me (actually, I have often used garage door openers), but I wouldn't claim that it's intelligent. That would be absurd. I argue the same is true of a computer that can play chess and nothing else.

Where the line gets blurry is when a system does everything the whole brain does-- chess, opening doors, speaking, writing Lord of the Rings fan fiction, debating religion on the internet, and wondering about the meaning of life. Then, and only then, should we reasonably ask whether it is a good model of the human brain. And the insight of the connectionists is just that. It doesn't mean the brain isn't a computer. It just means it isn't a randomly assembled mix of parts that do different tasks. There's no chess module in the brain, yet we can play chess. A computer that plays chess well is one that has a chess module. Not AI (not humanlike, anyway). But one that uses a connectionist model, with no special parts for chess, and still plays well? Now that's getting closer. And if it happens to fit well within a system that can do all of the other things, then that's where it starts to really make sense.
In short, I think the big problem is the implication that performing a subset of tasks is somehow a part of the whole of performing all tasks, and therefore a good representation of part of a brain. No programmer would take seriously the idea that part of a program is equivalent to part of a different program if the two programs weren't using the same algorithm. They might accomplish the same tasks, but as parts they'd say nothing about each other. And that's where the metaphor falls flat. But, again, the whole brain? That's a potentially different story.
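To make the "parts" point concrete, another toy sketch of my own: two programs that compute the same average, where a piece of one has no counterpart in the other.

    def average_batch(xs):
        # Program A: add everything up, then divide once
        return sum(xs) / len(xs)

    def average_stream(xs):
        # Program B: maintain a running mean, one item at a time
        mean = 0.0
        for count, x in enumerate(xs, start=1):
            mean += (x - mean) / count   # this update step has no counterpart in Program A
        return mean

Both return the same result (up to floating-point rounding), but studying the incremental update in B tells you nothing about how A is put together, and vice versa.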
In the end, I still see nothing other than computation as what the brain does, in a very literal sense: it processes input and creates output. That process, in full, could be done by computer parts. And the whole would be equivalent to a brain. Therefore, the brain is equivalent to a computer.
I'm very open to an alternative, but such an alternative seems incoherent-- let me know if I'm missing something though.
Again, I think it's about input and output. A human brain plays chess in the context of the whole brain-- motivation, memories, visual input, a desire to win, rules of the game, strategies, etc. It doesn't seem inconceivable to isolate the relevant neurons and simulate the input they'd have in that situation, creating a "module", but that also seems silly, when it would be easier to just recreate the whole brain.
So...
1) The brain is not a set of legos.
2) The brain's function depends on the input it receives, and it is judged functional by the output it gives.
Current AI:
1) Doesn't approach the brain as a whole (though connectionist models are toy examples that seem to do pretty well, suggesting that future, larger models may actually work).
2) Doesn't tackle the input-output problem, as far as I'm aware-- instead we do silly things like make a "language module" and have its input and output be text in a chat box.
Can we model the brain using computers?
We can model anything, but to what end? Your plural "computers" suggests we'd have several computers doing different things. So this suggests, as people currently do, that we can simulate brain-like things with computers. Sure.
But can we recreate in every relevant sense the brain as a computer? Not yet. Maybe in the future.
Can aspects of human cognition be informed by computational procedures?
I'd actually argue no, because we need to have the whole context.
I remember, a while ago, getting into an argument with a professor of morphology because the morphological theory wasn't designed to fit with current (or any) syntactic theories; it didn't all fit together, so it was unclear what the point was. I get the practicality of doing one part at a time, but it's a waste of time if there isn't a general intention to make everything fit together in the end.
Can neuroscience and computer science inform each other to teach us more about the brain and methods of artificially recreating human cognition?
Lots of metaphors!
In the case of something like language, I think computational methods are useful (potentially-- they still need some work, and better theories to work with from the theorists). But that's because language itself is a kind of symbol manipulation (what kind exactly is a different question, of course). It's not that such methods exactly imitate the brain, but rather that understanding the properties of the symbols (what the brain must handle) and one way to manipulate them gives us a great metaphor for what the brain is doing. If it's possible to do, then we can think about how the brain may do it. At the moment, though, there's absolutely no evidence that speaking a human language is possible, except for the one particular detail that humans do it. If the problem were described in isolation, I imagine a number of scientists and philosophers would throw their hands in the air and claim it can't be done. "Who could do such a thing as speak English? It's just too complicated." But the paradox is that we linguists talk about understanding how we talk all the time, and we just aren't there yet. If we could get a computer that actually uses language in the relevant ways (it's hard to figure out exactly what that means), we could really get somewhere in understanding how to understand language.
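To show the kind of thing I mean by symbol manipulation, here's a deliberately toy sketch (the rules and words are my own placeholders, not a serious grammar, and certainly not a claim about how the brain does it): symbols rewritten by explicit rules.

    import random

    # a toy rewrite system: purely formal symbol manipulation
    rules = {
        "S":  [["NP", "VP"]],
        "NP": [["the", "N"]],
        "VP": [["V", "NP"]],
        "N":  [["linguist"], ["computer"], ["room"]],
        "V":  [["describes"], ["simulates"]],
    }

    def expand(symbol):
        # rewrite a symbol by recursively applying a randomly chosen rule
        if symbol not in rules:
            return [symbol]  # terminal: an actual word
        choice = random.choice(rules[symbol])
        return [word for part in choice for word in expand(part)]

    print(" ".join(expand("S")))  # e.g. "the linguist describes the computer"

Knowing that one way to manipulate the symbols exists doesn't tell us the brain does it this way; it just gives us a precise vocabulary for asking what the brain must be doing.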
I think we're each taking our own interpretation of what it means to equate two concepts, and we apply it with slightly different levels of strictness, but I don't really see that we fundamentally disagree. I might be wrong, though.
I like Marr's three levels. Which level do you disagree with (existentially), or at what level (if you accept them) do you find this parallel breaks down?
For me:
1) computational: assumed to be the same in the brain and a computer
2) algorithmic:

3) implementation: different
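To make the comparison concrete, a toy illustration of how I read the three levels (my own trivial example, nothing to do with brains):

    # Computational level: WHAT is computed -- the largest number in a list.
    # Algorithmic level: HOW, in terms of representations and steps -- two different ways below.
    # Implementation level: WHAT the physical substrate is -- here, silicon running Python;
    # in a brain, neurons.

    def max_by_scanning(xs):
        # algorithm 1: keep a running best while scanning left to right
        best = xs[0]
        for x in xs[1:]:
            if x > best:
                best = x
        return best

    def max_by_sorting(xs):
        # algorithm 2: sort a copy and take the last element
        return sorted(xs)[-1]

The computational-level description is shared, the algorithmic level already differs between these two functions, and nothing at either level fixes the implementation: the same algorithm could, in principle, run on silicon or on neurons.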