Author Topic: The mind is not a computer  (Read 4512 times)

Offline Guijarro

  • Forum Regulars
  • Linguist
  • *
  • Posts: 97
  • Country: es
    • Spanish
    • Elucubraciones de José Luis Guijarro
The mind is not a computer
« on: December 30, 2014, 06:10:30 AM »
Would you debate whether it is possible or impossible for next Thursday to be the first day of 2015?

I get the same feeling of boredom from debates on topics like this one:

http://sociologicalimagination.org/archives/16468

A computer is OBVIOUSLY not a human mind. So what?

The question, as Turing (I think) put it, is not so much whether a mind is a machine, but whether machines can help us understand some functions of the mind in a causal, materialistic way, and so make us shun "spiritual" descriptions as sheer nonsense.

If Gödel's theorem can be applied metaphorically to our mental world (it seems doubtful to me, but oh... well!), a human mind will never understand the human mind completely and accurately; but this doesn't prevent computers from helping us understand and explain some mental functions in a new way.

Or am I totally wrong?

Offline Daniel

  • Administrator
  • Experienced Linguist
  • *****
  • Posts: 1576
  • Country: us
    • English
Re: The mind is not a computer
« Reply #1 on: December 30, 2014, 06:47:17 AM »
It depends on how you phrase the question, but there is a possibility that the human mind really is a particular type of computer. The reverse does not hold: few people would claim that a wristwatch with basic circuitry is a human mind. But the claim that the human mind computes using its organic hardware is not so unreasonable.
As one of my instructors explained it: anything knowable (computable) can be done on a computer. If something can't be computed, then we (as humans) can't know it either; we just make good guesses, and those guesses are themselves the product of knowable processes, given the right heuristics and data.

What is missing from a computer is essentially embodiment and motivation, or what some might call "consciousness" or even a "soul"-- a machine is not human because it doesn't act like we do. But that says nothing about the computational processes in our minds.

I'm not entirely convinced by that position, but I can't find any conclusive way to deny it either.

The only tangible difference (aside from organization and motivation), and it might be a crucial one, is that the brain could be gradient rather than binary, with some neurons activated more strongly than others, rather than just a bunch of 1s and 0s floating around. But even that relies on certain physical/chemical processes that might be effectively reduced to 1s and 0s. I'm not sure...

I think the reason "Philosophers have long since given up on the notion that the mind could be understood as a computer" may be that we don't understand the mind well enough as a whole for any particular analysis to be effective. That doesn't mean that, in theory, the mind isn't a computer. It means that we don't yet understand it.

I don't have 30 minutes at the moment to watch all of that-- any parts in particular I should check out?
Welcome to Linguist Forum! If you have any questions, please ask.

Offline freknu

  • Forum Regulars
  • Serious Linguist
  • *
  • Posts: 397
  • Country: fi
    • Ostrobothnian (Norse)
Re: The mind is not a computer
« Reply #2 on: December 30, 2014, 06:53:02 AM »
*cough* Well, the human mind is a computer, just not the kind that we can build with semiconductors ;)

So I guess the more accurate question that people seem to ignore or be ignorant of is: can we emulate — or at least simulate to some degree — our biological computer using semiconductor computers?

I'm no expert in neurobiology or IT engineering -- or in their merger into some strange cybernetic neuroengineering -- but aren't neural networks quite efficient, and capable of simulating some functions of the brain?

Offline Guijarro

  • Forum Regulars
  • Linguist
  • *
  • Posts: 97
  • Country: es
    • Spanish
    • Elucubraciones de José Luis Guijarro
Re: The mind is not a computer
« Reply #3 on: December 30, 2014, 03:25:02 PM »
I think that on this issue, at least, we are all agreed.

This is a good way to start 2015 peacefully...


Offline jkpate

  • Forum Regulars
  • Linguist
  • *
  • Posts: 130
  • Country: us
    • American English
    • jkpate.net
Re: The mind is not a computer
« Reply #4 on: December 30, 2014, 04:48:52 PM »
If the human mind is not a computer, what else could it possibly be? Computation is the only game in town here. I'll see if I can download the video and watch it on my flight tomorrow.

Quote
The only tangible difference (aside from organization and motivation), and it might be a crucial one, is that the brain could be gradient rather than binary, with some neurons activated more strongly than others, rather than just a bunch of 1s and 0s floating around. But even that relies on certain physical/chemical processes that might be effectively reduced to 1s and 0s. I'm not sure...

The cool thing about neural networks is, given enough neurons and at least one hidden layer or recurrent connections, they can compute any Turing-computable function (Siegelmann and Sontag 1991, Siegelmann and Sontag 1995). Indeed, if the connection weights are truly continuous, then neural nets are "hypercomputers" that can compute functions beyond the capacity of a universal Turing machine (Siegelmann and Sontag 1993, Siegelmann 1995, Stannett 2006, Ord 2006; although see Davis 2004 for a criticism of hypercomputation).
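To make that less abstract, here's a toy in Python (mine, not the construction from those papers): a two-unit recurrent net with hand-set rational weights that recognizes balanced brackets -- a simple non-regular language -- using one unit as a nesting counter and a second as an underflow flag. The point is just to show discrete, Turing-style computation living inside a recurrent update.

Code:
def relu(x):
    # rectified-linear activation; all weights below are rational (in fact +/-1)
    return max(0, x)

def accepts_balanced(s):
    count, flag = 0, 0                  # two hidden units, initialized to zero
    for ch in s:
        x = 1 if ch == "(" else -1      # input encoding: '(' -> +1, ')' -> -1
        pre = count + x                 # pre-activation of the counter unit
        flag = relu(flag + relu(-pre))  # flag unit latches any underflow
        count = relu(pre)               # counter unit tracks nesting depth
    return count == 0 and flag == 0     # accept iff balanced, no underflow

for s in ["", "()", "(())()", "())", ")(", "(()"]:
    print(repr(s), accepts_balanced(s))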

« Last Edit: December 30, 2014, 05:12:04 PM by jkpate »
All models are wrong, but some are useful - George E P Box

wisewill

  • Guest
Re: The mind is not a computer
« Reply #5 on: December 30, 2014, 05:58:07 PM »
"Is," "is," "is" -- the idiocy of the word haunts me. If it were abolished, human thought might begin to make sense. I don't know what anything "is"; I only know how it seems to me at this moment.

-Robert Anton Wilson

Offline jkpate

  • Forum Regulars
  • Linguist
  • *
  • Posts: 130
  • Country: us
    • American English
    • jkpate.net
Re: The mind is not a computer
« Reply #6 on: January 01, 2015, 01:28:44 AM »
I just watched it, thanks for the link. I thought Margaret Boden had a good point about the others bringing up additional questions for, rather than alternatives to, computation as a framework for understanding the mind. The specific example was intuition: Hubert Dreyfus said that people don't make decisions by following rules but by using intuition, and Boden pointed out that intuition could itself proceed by a computational process. In general, the two panelists other than Boden appeared to hold a puzzling view of computation. At one point, Dreyfus seemed to say that a computer could perform in the same way as a human after all, by simulating a neural network. However, as the Siegelmann and Sontag (1991) paper I linked to above showed, a traditional computer can simulate exactly a neural net whose weights are rational (ratios of integers), and such a neural net is in turn Turing-complete (and so could simulate exactly the computer in which it was contained). There are equivalences between these different views of computation, so there is no in-principle difference of the kind Dreyfus needs.

Paul Dolan and Dreyfus both referred to the exploding state spaces that arise in any remotely realistic application, with Dolan in particular emphasizing the importance of context. Boden correctly pointed out that actual systems don't follow their cartoon image of systematically visiting each possibility. Two recent NLP papers illustrate the point very nicely in a linguisticky setting. Lei et al. (2014) address dependency parsing (and won best student paper at ACL 2014); Parikh et al. (2014) address n-gram language modeling (and were runner-up for best paper at EMNLP 2014). In both papers, there is a relatively small space of basic features: for dependency parsing, the identity of the head word, the dependent word, their suffixes, the grammatical dependency type (subject, object, etc.), and so on; for n-gram language modeling, the identity of the current word, the previous word, the word before that, and so on. While the space of basic features can be handled easily, the important information lies in combinations of these features (e.g. we need the plurality of the head verb's suffix to match the plurality of its subject noun dependent's suffix), and the space of all combinations of the basic features is astronomically large. The portion of the space that is actually occupied by the data, however, is much smaller, because the features are correlated with each other. These papers show how tensor factorization techniques allow that small occupied portion of the space to be identified and used efficiently.
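For the curious, here's the shape of the trick in a few lines (a toy of my own with made-up sizes, using a rank-r matrix where the papers use higher-order tensors): instead of storing one weight per feature combination, you store low-dimensional embeddings whose inner products implicitly score every combination.

Code:
import numpy as np

rng = np.random.default_rng(0)
n_head, n_dep, r = 10_000, 10_000, 32   # hypothetical vocabulary sizes and rank

# Explicitly parameterizing every (head word, dependent word) combination
# would need n_head * n_dep = 10^8 weights. The low-rank version needs only
# (n_head + n_dep) * r = 640,000 -- and can still score any combination.
U = 0.01 * rng.normal(size=(n_head, r))  # head-word embeddings
V = 0.01 * rng.normal(size=(n_dep, r))   # dependent-word embeddings

def combination_score(head_id, dep_id):
    # An entry of the never-materialized score matrix S = U @ V.T
    return U[head_id] @ V[dep_id]

print(combination_score(42, 7))

Fitting U and V to data is exactly where the correlations between features get exploited: only the occupied portion of the combination space has to be fit.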
« Last Edit: January 01, 2015, 01:56:28 AM by jkpate »
All models are wrong, but some are useful - George E P Box

Offline lx

  • Global Moderator
  • Linguist
  • *****
  • Posts: 164
Re: The mind is not a computer
« Reply #7 on: January 04, 2015, 08:52:25 AM »
Edited disclaimer: I haven't watched the original video.

It's totally pointless to have this discussion while such vastly different interpretations of the word "computer" can be assumed and used to argue both sides of the point. Many people have a modern understanding of what a computer is and apply that meaning to the analogy of the brain being a computer. Other people, however, take a more etymological perspective and treat the word as signifying any entity that performs computation. People need to stick to one definition before embarking on an explanation of their argument.

Who can deny that the brain, at some metaphysical level, implements procedures that can be described as a sort of computation? Not very many, my instincts lead me to believe. A reductionist position, therefore, is that since we've equated computation with the processes in the brain, the brain must be some sort of computer (I see jkpate took this line of reasoning in an earlier post).

But once you factor in more encompassing and less abstract details of what makes a computer a computer in the modern day, it becomes easier and easier to argue the opposite point. Computers contain discrete subsystems, while the brain shows far more interdependency among its parts, and no serious (modern) neuroscientist would opt for the modular approach (well, they're out there -- but in the vast minority, in my experience, and usually proponents based in a non-neuroscientific field where the modular approach helps them in their, let's say, psychological, philosophical or even linguistic frameworks).

The brain has been equated with the leading technology of the day for as long as serious study of the brain/mind has had a potential candidate in the realm of technology. Freud equated the brain with a steam engine, telephone systems were equated with the brain before the widespread adoption of computers, and now the modern computer is the analogy of choice. I'd bet that in 50 years, whatever new quantum device we're all familiar with will be the technological counterpart to the human brain instead. If anything, I think the notion of the brain being like the internet is a better analogy: at least that way we can factor in disruptions and show how information can, in many cases, still find alternate paths to keep flowing -- unless the damage is so severe that it actually blocks out whole ISP services and not just individual nodes.

If you apply the definition of "computer" to this argument the same way I do, then hopefully I can convince you that the answer should be at least a "maybe", if not a full "no".

Computers implement serial processing and are restricted by that fact at the most fundamental level, while brains are fundamentally parallel processors of information. Though speed might give the impression that computers process information in parallel, that is just an extremely advanced implementation of shared streams of serial processing. In computers, information is fed upwards and processed by a higher-order device: the processor is king. In brains, the neuron is king, in the sense that the pattern of firing enacts the representation of information. Computers are fundamentally digital, while brains cross the analog/digital boundary via the electro-chemical ways information is sent through the system (computers are wholly electrical). There is an all-or-nothing representation in the binary system that underlies computers, while the way the brain works is fundamentally probabilistic: neurons fire stochastically, and current affects the thresholds that determine whether an action potential will fire.
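To caricature that last contrast in a few lines of Python (a cartoon of my own, not a biophysical model -- the logistic "threshold jitter" is just something I picked for illustration):

Code:
import math, random

def logic_gate(current, threshold=1.0):
    # All-or-nothing: the same input always yields the same bit.
    return 1 if current >= threshold else 0

def noisy_neuron(current, threshold=1.0, jitter=0.2):
    # Probabilistic: firing probability rises smoothly as the current
    # approaches the threshold (a logistic cartoon of threshold noise).
    p_fire = 1.0 / (1.0 + math.exp(-(current - threshold) / jitter))
    return 1 if random.random() < p_fire else 0

print([logic_gate(0.95) for _ in range(10)])    # always the same
print([noisy_neuron(0.95) for _ in range(10)])  # varies from trial to trial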

The whole underlying structure of computers is incredibly uniform in how information is sent through at the basic level, while the chemical structures and individual basic elements of the brain show much more complexity. There are hundreds of types of neurons, and the giant concoction of neurotransmitters that sends information around the brain is just fundamentally more complicated than the basic elements of a computer (the so-called von Neumann architecture). Yes, this might be a physical difference I'm pointing out, which many people would see as obvious, but that also highlights the fact that they would be relying on a functional definition, and what we know about equating functionality is that it can be done without licensing the inference that X is the same as Y, even though both perform some functions at the same level. All a computer is, at the fundamental level, is a way to fetch information, apply predefined operations to that data, and then store the result somewhere else.
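That last sentence is so bare it fits in a few lines. A minimal toy interpreter (my own sketch, not any real instruction set) of the fetch/operate/store loop:

Code:
def run(program, memory):
    # A minimal fetch-decode-execute loop: fetch an instruction, apply a
    # predefined operation to data from memory, store the result back.
    pc = 0                                   # program counter
    while pc < len(program):
        op, dst, a, b = program[pc]          # fetch
        if op == "add":                      # decode and execute
            memory[dst] = memory[a] + memory[b]
        elif op == "mul":
            memory[dst] = memory[a] * memory[b]
        pc += 1                              # move to the next instruction
    return memory

# Compute (2 + 3) * 2 purely as fetch/operate/store steps:
print(run([("add", 2, 0, 1), ("mul", 3, 2, 0)], [2, 3, 0, 0]))  # [2, 3, 5, 10]

Everything a von Neumann machine does is an elaboration of that loop, which is exactly why the uniformity contrast with the brain's zoo of neuron types and transmitters is so stark.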

We all know computers can perform calculations at speeds and levels of complexity that are beyond, in all senses of the term, anything that a human brain can manage. We know and accept this, yet it doesn't seem to strike many people as evidence of the fundamentally different processes that the two 'entities' exhibit. Brains are susceptible to all sorts of fallacies and tricks that we know computers are not susceptible to; that's actually a huge argument for using computers in many aspects of our society -- precisely because they behave qualitatively differently from human thought processes. Computers are designed to be effective, ingenious inventions of modern engineering, while brains carry our evolutionary heritage in multiple layers and highly-specialised functional areas that are massively parallel, bottom-up processing units.

It's no secret that the processing of the brain has influenced many disciplines such as AI and machine learning, but even with recurrent neural nets, the fundamentals are still wholly different and unworthy of any 'serious' analogy. The concepts are the same, naturally, because one was based on the other, but you'd struggle to find anyone who is even slightly convinced that even the most pioneering deep-learning neural nets can in any convincing way be considered accurate representations of human learning. That being said, such learning mechanisms are promising, and it's mainly the lack of understanding in the field of neuroscience that has kept efforts in computing from approximating neuronal behaviour even more closely. But here we're straying from the main topic and talking about replication of learning, while the initial question was set more in terms of an equation between the two concepts on a much deeper level. Even though such learning is promising, it's well known that computers are just awful when it comes to pattern recognition, as was discovered back in the '60s. Yes, a lot of progress has been made, but the design of computers hasn't changed since then; we've just changed how we apply higher-level processing, which in theory shouldn't change what it fundamentally means to be a computer and to be a brain. There is no evidence whatsoever that the brain implements backprop the way neural nets do, and why should there be? It just further highlights that having a similarly-inspired architecture doesn't really bridge any fundamental gaps in the comparison of brains with computers.

We can do the exact same calculations that computers can do, but being able to approximate the same results says absolutely nothing about the similarity of the processes used to arrive at them -- something I hinted at earlier in regard to equating functionality in two different systems. And calculation ability goes way beyond the limits of our brain, just as our brains go way beyond even the most state-of-the-art computer in pattern recognition.

As you can tell, I am very much in the "No" camp when it comes to the "Are brains like computers?" question. Then again, that's because my interpretation of the word "computer" is a lot more specific when applied to this question. If that definition were taken away and all I had was the line "anything that can be thought of as exhibiting a computation (in any sense)", then I would have no choice but to be in the "Yes" camp.

Why do we seem destined in academia to constantly have debates that roar on because people don't see how two radically different definitions can completely change the fundamentals of a specific argument? I think we'd save so much time if, before detailing an argument, we were forced to write out the definitions of the words we were using. Then nobody would need to read a whole paper in order to figure out that the author was using a considerably different definition of the specific concept under consideration.
« Last Edit: January 04, 2015, 09:01:16 AM by lx »

Offline Daniel

  • Administrator
  • Experienced Linguist
  • *****
  • Posts: 1576
  • Country: us
    • English
Re: The mind is not a computer
« Reply #8 on: January 04, 2015, 10:33:05 AM »
I absolutely agree with you about definitions. They're one of my biggest pet peeves at the moment. (Asking whether a language "has" a "[feature]" or not is problematic, as is asking whether the "brain" "is" a "computer".)

But there's a little more to this than your post suggests. It's not just binary. Yes, the brain computes, and, no, my laptop is not a brain. But the intermediate position is the scientifically interesting one, where we imagine a machine roughly comparable to a modern computer and ask whether that machine could parallel the human brain in some relevant way.

There are some ways that a computer will (probably) never be like the brain: for example, the brain is powered by blood (in an oversimplified sense anyway), and a computer is powered by electricity from a power line. But those ways seem trivial.

Then the question is whether the neural circuits we have are doing computations. If they are, then it is not clear why, in theory, some computer (roughly like my laptop, but with some major differences, or potentially any implementation of a Turing machine) could not also do it.

This page describes some of this:
http://www.iep.utm.edu/compmind/
Quote
In particular, Marr argued that the complete explanation of a computational system should feature the following levels: (1) The computational level; (2) the level of representation and algorithm; and (3) the level of hardware implementation.
...
Marr, David. 1982. Vision. A Computational Investigation into the Human Representation and Processing of Visual Information. New York: W. H. Freeman and Company.

Regarding (1), it seems very difficult to say that the mind does anything other than compute. Therefore, we could use a different algorithm in a different implementation and approximate the brain in function.
Regarding (2), we don't really know what algorithms are used by the brain, and we don't know how we'd implement those in particular with a computer. This is probably where the most work would need to be done in changing modern computers into something where human-like processing is really possible, if it is at all. An interesting question here is whether or not the brain uses an algorithm and one that could be copied on a computer. But it's again hard to imagine how not.
Then as for (3) it's clear that the implementation would be very different. Can metal implement the same properties as neurons? If so, then the algorithm and computation could be equivalent while using a different implementation. Everything about computational theory suggests it doesn't matter what the medium is-- it would be a very interesting result to find that there's something special about neurons, but I think that's unlikely.

A major problem with actually making any of this work is, I assume, simplicity: there's no reason to assume that our modern computers can implement the mind in an easy way or that the mind is simple. It isn't that the mind is running some basic algorithm we can easily copy in a few lines of code. This is the most likely reason why current attempts have failed. That does not mean that in theory such a computer could not be built nor that the mind isn't a computer.


So back to the question: "Is the mind a computer?" -- I choose to interpret this in the only interesting way, which is to ask whether the in-out processing of the brain works in a way that is equivalent to the types of logic used in computing. Could all of the behavior of the mind be isomorphically captured in 1s and 0s (or scalar values of the same sort)? If so, it is a computer, and that is interesting.

It's ridiculous to think that we fully understand it, or that your computer is a brain; in my opinion, things like modularity fall under this. But we're also far from ruling out the possibility that the brain is a computer in the logical sense.

In an earlier thread I raised the question of whether something like embodiment and mortality would be necessary to implement something isomorphic to the human brain: could a computer that exists only in circuitry and has no sense of life or death ever function in a way that is equivalent to a human? Could we (as seen in recent movies) upload ourselves into the internet?

Personally, I think the bigger problem for implementing a computer isomorphic to the human brain is the input-output systems rather than the brain itself. I see no reason we couldn't make an exact replica of a human brain, including circuits equivalent to neurons, with the exact connections and numbers of neurons as in that brain.* But like a dead body, it wouldn't be alive until it was "plugged in", so to speak, and that's where the hypothetical begins to fall apart. What input would you give a computer to test whether it is a human? That seems like an absurd question, and it is. And what output would you expect it to make?



*Even if they don't themselves represent 1s and 0s, given that neurons are made of physical components, there must be some way to simulate their analog functions to a reasonable degree -- would 100 decimal places be enough? Or 1,000,000? At some point, surely, anything that small in our brains would be swamped by statistical error anyway. So we could simulate it with a computer. It would, potentially, be overly complex, and perhaps there's some easier way to build one, but in short I don't see why this is theoretically impossible. A remaining question, and one that intrigues me a lot, is why the brain is so efficient -- for example, how we're so good at speaking languages: it's not just that we manage, but that we do so almost effortlessly. That's why whatever complicated grammatical theories we come up with seem off: they don't seem simple at all, while our brains implement all of that with ease. But that's another topic entirely. In terms of whether the brain is a computer, I say: why not, if we could potentially duplicate it using a computer. And I see no reason why we could not do so, given enough time, raw materials, and a detailed map of a brain.
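To make the "how many decimal places" question concrete, here's a throwaway sketch -- the update rule is a made-up leaky integrator, not real neuron dynamics -- that runs the same analog-style update at several working precisions:

Code:
from decimal import Decimal, getcontext

def trajectory(digits, steps=1000):
    # Iterate a toy leaky-integrator update x -> x + dt*(I - x/tau),
    # carrying the given number of significant decimal digits.
    getcontext().prec = digits
    x = Decimal(0)
    dt, tau, I = Decimal("0.1"), Decimal("7"), Decimal("1.3")
    for _ in range(steps):
        x = x + dt * (I - x / tau)
    return x

for digits in (5, 25, 100):
    print(digits, trajectory(digits))

On this toy, the low-precision runs agree with the high-precision one to roughly their own number of digits, which is the intuition behind saying that past some precision the simulation error is smaller than the brain's own physical noise.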

Therefore, I think this all circles back to the question of algorithm: that's what we should be trying to understand. Is there an algorithm for the mind? And if so, what is it? And as a bonus, what's the input and output for it?

I think that's fairly similar to what you were getting to here, lx:
Quote
We can do the exact same calculations that computers can do, but being able to approximate the same results says absolutely nothing about the similarity of the processes used to arrive at them -- something I hinted at earlier in regard to equating functionality in two different systems. And calculation ability goes way beyond the limits of our brain, just as our brains go way beyond even the most state-of-the-art computer in pattern recognition.

For the record, this is also one reason I'm puzzled by the fascination syntacticians have with Competence. Even if there is in fact a way to completely separate competence and performance, the real mystery is how we implement competence (that is, through performance). Maybe linguists are right to wait on this one because it's a hard one, though. And the thing is, grammatical theory actually is bound by performance to some degree: a completely reasonable competence model is an infinite list (set) of sentences for a given language. That's mathematically equivalent to the language, but it doesn't work in the brain. So the question again becomes: how do we implement language, whatever its properties may be?
« Last Edit: January 04, 2015, 10:37:42 AM by djr33 »
Welcome to Linguist Forum! If you have any questions, please ask.

Offline lx

  • Global Moderator
  • Linguist
  • *****
  • Posts: 164
Re: The mind is not a computer
« Reply #9 on: January 04, 2015, 11:11:34 AM »
The Dartmouth Conference was a huge turning point in the field of AI, and it's there that a division arose in the way people made their fundamental interpretations of intelligent machines. The two camps were GOFAI (Good Old-Fashioned Artificial Intelligence) and connectionism. The first group split off and dealt with symbol manipulation and the computational proving of theorems, geometric proofs, propositional logic/reasoning, and so on. They didn't have much regard for trying to mirror the processes of the brain, as they took a more goal-orientated approach. The connectionists, on the other hand, believe that the only intelligent machine is the brain, and that studies into computational intelligence and implementations of machines should therefore be guided by, and help inform, the study of the brain. In contrast to the goal-orientated approach, you can think of these guys as more journey-orientated (though of course the goal still matters).

Before these two groups split, there wasn't much appreciation of the fundamental difference between the approaches, and some took the fact that machines could do things that humans can also do as an indication of a good model of the human mind. Russell & Norvig's book on Artificial Intelligence mentions this in the opening pages:
Quote
In the early days of AI there was often confusion between the approaches: an author would argue that an algorithm performs well on a task and that it is therefore a good model of human performance, or vice-versa. Modern authors separate the two kinds of claims: this distinction has allowed both AI and cognitive science to develop more rapidly.
This is how they describe the fallacy. If system X and system Y are given the same input and return the same output, and we don't know how either system processed the information, then we know absolutely nothing about the internal workings -- whether X and Y are identical or built on fundamentally different principles. Just as in the early days of AI, people used to equate a system's good results with its being a good model of human performance. The trouble is, without looking at the steps in between, such a system tells us absolutely nothing about human performance, only that the output is the same.
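The fallacy is easy to make concrete (a toy of my own, and it anticipates the calculator point below): two multipliers that agree on every input while sharing nothing internally.

Code:
def multiply_machine_style(a, b):
    # One step: defer to the arithmetic hardware.
    return a * b

def multiply_schoolroom_style(a, b):
    # Repeated addition: same answers, completely different internal process.
    total = 0
    for _ in range(b):
        total += a
    return total

# Identical input-output behaviour licenses no conclusion about internals.
assert all(multiply_machine_style(a, b) == multiply_schoolroom_style(a, b)
           for a in range(20) for b in range(20))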

I can do basic multiplication in my head, but a calculator does the process in a different way, yet we get the same results. A calculator is not a good model of human performance; we do, however, get the same results. Having made that point, I wanted to reference some of the things you said in your response, Daniel:
Quote
ask whether that machine could parallel the human brain in some relevant way.
Quote
If they are, then it is not clear why, in theory, some computer (roughly like my laptop, but with some major differences, or potentially any implementation of a Turing machine) could not also do it.
Quote
Therefore, we could use a different algorithm in a different implementation and approximate the brain in function.
Quote
An interesting question here is whether or not the brain uses an algorithm and one that could be copied on a computer.
Quote
If so, then the algorithm and computation could be equivalent while using a different implementation
Quote
I choose to interpret this in the only interesting way, which is to ask whether the in-out processing of the brain works in a way that is equivalent to the types of logic used in computing. Could all of the behavior of the mind be isomorphically captured in 1s and 0s (or scalar values of the same sort)? If so, it is a computer, and that is interesting.
This is where I think we get to the heart of the disagreement (and it's nice when it comes down to an interpretational difference rather than something much deeper). I can totally understand and appreciate everything you said; it's just that what you're pointing to links strongly to artificially recreating brain processes, and my belief is that artificial recreation is itself an argument against the premise that the brain is a computer. Replicate, capture, parallel, copy, and approximate as much as possible -- but the fact that such things can be done provides, in my opinion, no evidence that the brain is a computer.

Those are answers to the questions:

Can we model the brain using computers?
Can aspects of human cognition be informed by computational procedures?
Can neuroscience and computer science inform each other to teach us more about the brain and methods of artificially recreating human cognition?

If those were the questions being asked, I would hold up a 60ft sign with "YES" in huge red capital letters. I just don't get the sense that this is what people mean when the question is asked. I get the impression that what people mean is: on a much more fundamental level, are these two systems the same? For me it's a categorical no, based on how I apply my definition to the problem; i.e., I define a system by how it goes about getting an answer, its fundamental physical parts, AND the result it returns for a given input. If X and Y giving the same output on the same input, regardless of how the answer was worked out, is someone's definition of "X is Y", then I can't argue with that. All I can say is that, for me, "X is Y" requires a lot more fundamental similarity.

I think we're just taking our own interpretations of what it means to equate two concepts, with slightly different strictness levels in how we apply them. I don't really see that we would fundamentally disagree, but I might be wrong.
« Last Edit: January 04, 2015, 11:23:59 AM by lx »

Offline Daniel

  • Administrator
  • Experienced Linguist
  • *****
  • Posts: 1576
  • Country: us
    • English
Re: The mind is not a computer
« Reply #10 on: January 04, 2015, 01:43:48 PM »
Let's see if we can get just a little deeper in this. I'm not so sure we actually disagree.

Quote
This is how they describe the fallacy. If system X and system Y are given the same input and return the same output, and we don't know how either system processed the information, then we know absolutely nothing about the internal workings -- whether X and Y are identical or built on fundamentally different principles.
Sounds like the Chinese Room scenario. The problem with that -- I actually find it laughable; I remember chuckling when reading about it a year or two ago -- is the premise that there could be something that would so perfectly mirror linguistic processing. It seems like an absurd scenario: what if, indeed, pigs could fly. But, setting that aside, yes, I think the Chinese room is necessarily equivalent in a relevant way to linguistic processing. It makes the assumption, which I hold to be the problem, that it is possible for such a room to exist, even just in theory. That's where, I think, the thought experiment tricks people. (If such a room did exist, it would indeed be a speaking room. The shock of the thought experiment is due to the fact that such a room really couldn't ever exist, not that it's shocking that if it did we'd consider it to speak.)

And that's where AI has left us for now: the fact that a computer performs well on one task (for example chess) is evidence of nothing at all. In fact, one of the only things we can be sure of is that it isn't doing that particular thing in any way remotely like how humans do it, or in a way similar to what a real AI would do, because it is an isolated system without the proper interaction with the rest of the more complex system that would be a full "brain". You can't isolate bits and pieces of human functionality and claim any bits and pieces of true AI. For example, I could make a computer that can open a door for me (actually I have often used garage door openers), but I wouldn't claim that it's intelligent. That would be absurd. I argue the same is true of a computer that can play chess and nothing else.

Where this line gets blurry is when the whole brain is involved -- chess, opening doors, speaking, writing Lord of the Rings fan fiction, debating religion on the internet, and wondering about the meaning of life. Then, and only then, should we reasonably ask whether something is a good model of the human brain. And the insight of the connectionists is just that. It doesn't mean the brain isn't a computer. It just means it isn't a randomly assembled mix of parts that do different tasks. There's no chess module in the brain, yet we can play chess. A computer that plays chess well is one that has a chess module: not AI (not humanlike, anyway). But one that uses a connectionist model without special parts for chess and still plays well? Now that's getting closer. And if chess-playing happens to fit well within a system that can do all the other things, that's where it starts to really make sense.

In short, I think the big problem is the implication that performing a subset of tasks is somehow a part of the whole of performing all tasks, and therefore a good representation of part of a brain. No programmer would take seriously the idea that part of one program is equivalent to part of a different program if they weren't using the same algorithm. They might accomplish the same tasks, but as parts they'd say nothing about each other. That's where the metaphor falls flat. But, again, the whole brain? That's a potentially different story.

In the end, I still see nothing other than computation as what the brain does, in a very literal sense: it processes input and creates output. That process, in full, could be done by computer parts, and the whole would be equivalent to a brain. Therefore, the brain is equivalent to a computer.
I'm very open to an alternative, but such an alternative seems incoherent-- let me know if I'm missing something though.


Again, I think it's about input and output. A human brain plays chess in the context of the whole brain-- motivation, memories, visual input, a desire to win, rules of the game, strategies, etc. It doesn't seem inconceivable to isolate the relevant neurons and simulate the input they'd have in that situation, creating a "module", but that also seems silly, when it would be easier to just recreate the whole brain.

So...
1) The brain is not a set of legos.
2) The brain depends for its function on the input it receives and is judged functional by the output it gives.

Current AI:
1) Doesn't approach it as a whole (though connectionist models are toy examples that seem to do pretty well, suggesting future, larger models may actually work)
2) Doesn't tackle the input-output problem as far as I'm aware -- instead we do silly things like make a "language module" and have its input and output be text in a chat box.


Quote
Can we model the brain using computers?
We can model anything, but to what end? Your plural "computers" suggests we'd have several computers doing different things. So this suggests, as people currently do, that we can simulate brain-like things with computers. Sure.
But can we recreate in every relevant sense the brain as a computer? Not yet. Maybe in the future.
Quote
Can aspects of human cognition be informed by computational procedures?
I'd actually argue no, because we need to have the whole context.
I remember a while ago getting into an argument with a professor of morphology because the morphological theory wasn't designed to fit in with current (or any) syntactic theories; it didn't all fit together, so it was very confusing what the point was. I get the practicality of doing one part at a time, but it's a waste of time if there isn't a general intention to make everything fit together in the end.
Quote
Can neuroscience and computer science inform each other to teach us more about the brain and methods of artificially recreating human cognition?
Lots of metaphors!

In the case of something like language, I think computational methods are useful (potentially-- they still need some work, and better theories to work with from the theorists). But that's because language itself is a kind of symbol manipulation (what kind exactly is a different question, of course), not because doing so exactly imitates the brain, but rather because understanding the properties of the symbols (what the brain must do) and one way to manipulate them gives us a great metaphor for what the brain is doing. If it's possible to do, then we can think about how the brain may do it. At the moment, though, there's absolutely no evidence that speaking human languages is possible, except for the one particular detail that humans do it. If the problem were described in isolation I imagine a number of scientists/philosophers would throw their hands in the air and claim it can't be done. "Who could do such a thing as to speak English? It's just too complicated." But the paradox is that we linguists talk about understanding how to talk all the time, and we just aren't there yet. If we could get a computer that actually uses language in the relevant ways (hard to figure out exactly what that means) we could really get somewhere understanding how to understand language.


Quote
I think we're just taking our own interpretations of what it means to equate two concepts, with slightly different strictness levels in how we apply them. I don't really see that we would fundamentally disagree, but I might be wrong.
I like Marr's three levels. Which level do you disagree with (existentially), or at what level (if you accept them) do you find this parallel breaks down?

For me:
1) computational: assumed to be the same in the brain and a computer
2) algorithmic: ???
3) implementation: different
Welcome to Linguist Forum! If you have any questions, please ask.

Offline jkpate

  • Forum Regulars
  • Linguist
  • *
  • Posts: 130
  • Country: us
    • American English
    • jkpate.net
Re: The mind is not a computer
« Reply #11 on: January 04, 2015, 06:09:06 PM »
Quote
The whole underlying structure of computers is incredibly uniform in how information is sent through at the basic level, while the chemical structures and individual basic elements of the brain show much more complexity. There are hundreds of types of neurons, and the giant concoction of neurotransmitters that sends information around the brain is just fundamentally more complicated than the basic elements of a computer (the so-called von Neumann architecture). Yes, this might be a physical difference I'm pointing out, which many people would see as obvious, but that also highlights the fact that they would be relying on a functional definition, and what we know about equating functionality is that it can be done without licensing the inference that X is the same as Y, even though both perform some functions at the same level. All a computer is, at the fundamental level, is a way to fetch information, apply predefined operations to that data, and then store the result somewhere else.

While you are quite right that it is obvious that the brain is not a von Neumann machine, it seems strange to me to limit our definition of "computer" to those machines that happen to be cheap and easy to manufacture with 20th-century material science. What is the motivation for choosing a definition that is on its face so arbitrary?

Quote
It's no secret that the processing of the brain has influenced many disciplines such as AI and machine learning, but even with recurrent neural nets, the fundamentals are still wholly different and unworthy of any 'serious' analogy. The concepts are the same, naturally, because one was based on the other, but you'd struggle to find anyone who is even slightly convinced that even the most pioneering deep-learning neural nets can in any convincing way be considered accurate representations of human learning. That being said, such learning mechanisms are promising, and it's mainly the lack of understanding in the field of neuroscience that has kept efforts in computing from approximating neuronal behaviour even more closely. But here we're straying from the main topic and talking about replication of learning, while the initial question was set more in terms of an equation between the two concepts on a much deeper level. Even though such learning is promising, it's well known that computers are just awful when it comes to pattern recognition, as was discovered back in the '60s. Yes, a lot of progress has been made, but the design of computers hasn't changed since then; we've just changed how we apply higher-level processing, which in theory shouldn't change what it fundamentally means to be a computer and to be a brain. There is no evidence whatsoever that the brain implements backprop the way neural nets do, and why should there be? It just further highlights that having a similarly-inspired architecture doesn't really bridge any fundamental gaps in the comparison of brains with computers.

OK, first, we need to observe that "neural nets" and "connectionist models" are really sociological categories rather than natural computational classes. Kohonen self-organizing maps, multilayer perceptrons, and recurrent nets are all introduced under the "connectionist" or "neural network" umbrella, but are algorithmically and computationally quite different -- I can't see a mathematical class that includes these three classes of models but excludes Markov random fields, for example, even though MRFs are not generally considered connectionist.

Having made this observation, we need to take care to distinguish the two issues you bring up in your paragraph. First, we should consider the representational capacity of Turing machines and neural nets: the set of functions they can compute, for some rule table or for some set of connection weights, respectively. As the papers I've provided show, recurrent nets and Turing machines can compute the same set of functions: for any rule table, there will be a set of connection weights that produces the same output set, and vice versa. Some of the comments so far have worried over the difference between continuous and discrete weights. However, the Siegelmann and Sontag (1991) paper only assumes rational weights, and a Turing machine can easily work with rational weights by representing the integer numerators and denominators exactly.
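The rational-weight point is easy to see concretely: a digital machine can carry rational-valued state exactly, with no rounding at all. A sketch of the principle (using Python's standard fractions module, not the papers' construction):

Code:
from fractions import Fraction

# One recurrent unit with rational weights, simulated exactly: the state is
# always an exact ratio of integers, so no floating-point error ever enters.
w, b = Fraction(1, 3), Fraction(2, 7)
h = Fraction(0)
for x in [1, 0, 1, 1, 0]:          # a binary input sequence
    h = w * h + b * x              # exact rational arithmetic throughout
print(h)                           # -> 74/567, exactly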

The second issue that you bring up is one of learning: how do we obtain the set of connection weights or the rule table in the first place?  I agree that this is a much harder and more interesting problem. However, backpropagation is not the only story for training neural net-like models. Broadly, the strategy in machine learning has been to recast learning problems as optimization problems, where the objective function is some trade-off between an error-like measure of predicting the observed data and a prior. Learning takes place by minimizing this error-like measure (which may be the difference between the predicted value and the observed value, or the negative log probability of the observed data) and the distance from the prior. To do this, in turn, we need to take the gradient (a multidimensional derivative) of the objective function (the error and the prior) with respect to the parameters, which in this case are connection weights, and then adjust the parameters in the direction opposite the gradient (i.e. "down hill"). The gradient pretty much always involves a difference between the predicted value and the observed value.
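That recipe is short enough to write out as a runnable toy. My own choice of error and prior, purely for illustration: squared error plus a quadratic prior centered at zero, i.e. a ridge-style linear regression:

Code:
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                  # observed inputs
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=100)    # observed outputs

w, lam, lr = np.zeros(3), 0.1, 0.1             # parameters, prior strength, step size
for step in range(500):
    predicted = X @ w
    # Gradient of (squared error + quadratic prior) w.r.t. the parameters;
    # note that it involves exactly the predicted-minus-observed difference.
    gradient = X.T @ (predicted - y) / len(y) + lam * w
    w = w - lr * gradient                      # step "down hill"
print(w)                                       # close to w_true, shrunk toward 0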

Backpropagation tries to squeeze as much as possible out of every data point by updating all the weights in the network with each comparison, but it's not the only approach to training neural nets. Modern neural nets are largely the same as the nets from the 1980s, except that they are bigger, and most or all of the training happens in a phase called "pre-training". In pre-training, each layer is trained in isolation to reproduce the patterns of outputs of the previous layer (Hinton and Salakhutdinov 2006). This training scheme relies on a cheap, local approximation to the true gradient, and nowadays backpropagation is considered a "fine-tuning" of this learning. There are a variety of other cheap, local approximations to the true gradient as well. While backpropagation itself is biologically implausible, approximations to its object of computation (objective-function gradients with respect to connection weights) may not be.
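Here's the shape of that greedy scheme in code -- a cartoon of my own with tiny tied-weight autoencoders standing in for the restricted Boltzmann machines that Hinton and Salakhutdinov actually used, so the details differ from the paper, but the layer-by-layer structure is the point:

Code:
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pretrain_layer(data, n_hidden, lr=0.5, epochs=300):
    # Train one layer in isolation to reproduce its own input: a tiny
    # tied-weight autoencoder fit by gradient descent on squared error.
    W = rng.normal(scale=0.1, size=(data.shape[1], n_hidden))
    for _ in range(epochs):
        h = sigmoid(data @ W)                  # encode
        recon = sigmoid(h @ W.T)               # decode with the same weights
        d_recon = (recon - data) * recon * (1 - recon)
        d_h = (d_recon @ W) * h * (1 - h)
        W -= lr * (data.T @ d_h + d_recon.T @ h) / len(data)
    return W

X = rng.random((64, 16))                       # stand-in data
W1 = pretrain_layer(X, 8)                      # layer one: reproduce the input
H1 = sigmoid(X @ W1)                           # its outputs become...
W2 = pretrain_layer(H1, 4)                     # ...training data for layer two
# Backpropagation through the whole stack would only "fine-tune" from here.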
« Last Edit: January 04, 2015, 06:28:40 PM by jkpate »
All models are wrong, but some are useful - George E P Box

Offline Guijarro

  • Forum Regulars
  • Linguist
  • *
  • Posts: 97
  • Country: es
    • Spanish
    • Elucubraciones de José Luis Guijarro
Re: The mind is not a computer
« Reply #12 on: January 06, 2015, 04:33:44 AM »
When facing a debate such as this one, I am always reminded of another question frequently asked by linguists: do all languages have a grammar?

Grammar is the human linguist's way of describing a certain set of mental representations. Maybe a Martian would describe them within a totally different frame, one that could not be called "grammar".

The same goes for the question in this thread. Until Turing appeared and found a way to simulate brain functions, all our mental achievements were described (?) with "spiritual" metaphors. The mental belonged to the HUMANITIES, and this meant that it did not (and could not, by its very nature) belong to the SCIENCES.

Turing started a new way of looking at these matters, one which has had tremendous importance in the development of the modern world. Granted, he could not have been 100% right, but his shortcomings have become understandable and may some day be overcome. Research in these topics is still in its infancy, and a lot of homework has to be done to clear away misunderstandings and shortcomings.

However, no serious study of the mind can be carried out today if the mind is not conceived of as performing computations. Complex computations, it is true -- computations which are difficult to imagine with our present knowledge -- but computations all the same.

It is nonsensical to compare human minds with present-day computers, although the materialistic description of some mental functions may help us look ahead to more problems to solve and better solutions to achieve.

So I really think that, in this thread, we are all agreed. Just as grammar is our way of describing languages, when we talk about the mind nowadays we are talking about its computations.

And, believe it or not, computations are what computers are supposed to do.

(I am sorry for my pedestrian contribution to this thread. I am not a good AI researcher, as some of you seem to be! I hope, however, that I make some sense in spite of it.)

Offline Daniel

  • Administrator
  • Experienced Linguist
  • *****
  • Posts: 1576
  • Country: us
    • English
Re: The mind is not a computer
« Reply #13 on: January 06, 2015, 08:09:13 AM »
I think an interesting point to be made is that the parallels between computer science and psychology may be more convincing than those between computer science and linguistics. The reason is that it is much easier to assert an overly abstract and simplistic model for language processing, because we see the data as some sort of code. If we turn to a topic like emotions, though, finding parallels in computation is much more difficult, and the result is probably a better-thought-out and more accurate picture of how "emotions" are "computed". All of the problems with the comparisons are due, as I've said above, to attempting to overly simplify the problem and solve it with a simple program. But if we take the opposite extreme -- that the human mind is a very, very complex program -- it's much harder to argue against.

You said it well, Guijarro, here:
Quote
It is nonsensical to compare human minds with present-day computers...
...or to think that we might be able to "solve" this at the moment.
Welcome to Linguist Forum! If you have any questions, please ask.

Offline Guijarro

  • Forum Regulars
  • Linguist
  • *
  • Posts: 97
  • Country: es
    • Spanish
    • Elucubraciones de José Luis Guijarro
Re: The mind is not a computer
« Reply #14 on: January 06, 2015, 10:52:28 AM »
I think I read Steven Pinker somewhere giving a materialistic view of emotions as the way devices are programmed to handle difficult or unknown situations. It was not a formalised account of emotions, just a kind of blueprint signalling one possible way to handle this ever-recurring, so-called dead end in computation theory. It was probably in this book of his: http://stevenpinker.com/publications/how-mind-works

This is my point. Instead of repeating endlessly that emotions are not computable, let us say that they are not computable YET. Ideas such as Pinker's may open up a way past this apparent "impossibility". Right now we do not understand what emotions are, so we cannot design working computations for them. But does that mean that emotions will never be understood in computational terms?

My problem lies with the metaphorical application of Gödel's theorem to the human mind. If we may use that theorem -- which, I believe (I am not a mathematician!), was meant for man-made formal systems -- for our understanding of parts of nature, then, if I am rendering Gödel's idea right, a human mind may never ever understand the human mind completely. But that does not prevent it from understanding some of the mind's functions. So, as we will never have perfect knowledge of our minds, we may indulge in this field forever, finding new and exciting ways to describe mental functions.

A magnificent field, open to present and future (!!) researchers, that will never exhaust itself!