Generally agree, but one distinction is that a human in a room is a prisoner, whereas a brain in a skull is not. Makes some kind of difference, surely?
The human in the room is not like a person (or brain) - except himself. The human in the room is not analogous to the human AND the room. This is a core point.
The human AND the room is like a person. In this particular case, the human-room system thinks it's the human, so in that sense it is a prisoner. But it might as well have been a free human, if Searle + room simulated Searle in some previous state before he entered the room.
So I'm not sure what you're getting at. Maybe I'm misunderstanding you? Anyway, thanks!
Thanks for the response! Mechanistically I think you are right, but the fact that a human in a room would not like this state of affairs indefinitely, whereas a brain in a skull would, means that the human in the room is actually quite different from another person.
This gets at the distinction between explaining things and predicting things. Instrumentalism vs theory if you like.
It's perhaps a tangent on your point, but the pedant in me had to say this! I've thought about the Chinese room experiment in the past :)
Thanks to you too!
Again, I'm not comparing the human in the room to a brain. I'm comparing the human-room system to a human. The human-room system is similar to a person, the human in the room is not.
A brain can neither like nor dislike sitting in a skull. Brains don't have preferences about that. Only individuals have preferences.
Yeah, maybe it's tangential and maybe I'm still misreading you. Anyway, thanks!
No argument from me.
Searle's Chinese Room is one of those arguments that struck me as silly the first time I met it.
The challenge is working out why people believe it.
Hi Mark. I don’t think Searle truly grasped the magic of functional computationalism. Therefore I don’t think his Chinese room thought experiment was designed well enough to truly hit home. I’d love for you to consider an argument of my own however:
When your thumb gets whacked, according to modern neuroscience, information about this event gets neurally sent to your brain. Furthermore it’s thought that your brain does various things with this information to ultimately cause you to feel thumb pain. The question is, what does it do?
Functional computationalists maintain that your brain properly processes it into new information, and it’s this processing from the first information to the second that creates what you thus feel. Therefore if there were marks on paper that were correlated well enough with the information that your whacked thumb sends your brain, and it were scanned into a computer that processes it to then print out paper with marks on it that correlate well enough with the information that resulted from your actual brain’s processing, then something in this paper to paper conversion would thus feel what you do when your thumb gets whacked.
So many believers have told me that there’s nothing silly about this reductio that I don’t rest my case yet. The reason I consider the inspiring position magical is that actual computers do something more than just process information. Instead their processed information goes on to inform causally appropriate instruments like computer screens, memory banks, and so on. Therefore I’m not saying that functional computationalism is entirely false, but rather that it stops one step short of full causality.
At this point we can ask: what might processed brain information inform so as to exist as an experiencer of thumb pain? This question puzzled me for several years, that is, until I came across Johnjoe McFadden’s electromagnetic consciousness proposal. There are so many questions that this explanation makes sense of that I think future people will smile about how long humanity remained confused here.
Hi Eric, and thank you! Your name is very Swedish sounding, if I may say so (although Erik is a more common spelling). Borg is a common name here, and it means fortress more or less.
"Furthermore it’s thought that your brain does various things with this information to ultimately cause you to feel thumb pain. The question is, what does it do?"
I disagree entirely. I don't think neuroscience is telling us that. Computational functionalism certainly isn't saying that. There may be variants that do, but not as I understand it.
The mistake you're making here, according to us, is that you're viewing pain as a phenomenon that "ultimately" comes online by some means. If you think about it like that, of course it's going to puzzle you. There is no "step" when the pain "happens". That would certainly be magical.
You're conceptualising pain as a mental object that you, as a mental subject, are observing or experiencing. That's a natural way to conceptualise it. Just as we model ourselves as subjects in a physical environment, we also model ourselves as mental subjects in an environment of mental thoughts, feelings, pain, hunger, etc. We're "objectifying" them. This is highly useful for cognition and communication. Talking about the realness and depth of one's love is a lot more successful for procreation than talking about how one's love is an illusion, not really love. That hasn't been a path to success. This way of modeling ourselves and our inner life is, however, no good for doing philosophy of mind.
The crucial mistake is to think that neuroscience must explain where or when the pain happens. That has not succeeded, nor will it ever. The pain does not live anywhere in the brain, nor does it live in a computation as a specific, well-definable part of that computation. It is not a mental object, nor is it a well-defined computational object.
What neuroscience must explain is how we model ourselves so as to behave as if we are in pain. That includes all behaviour, from avoidance to screaming to speech acts to introspective reports to deep philosophical analysis. THAT is what needs explaining, and then we are done. There is no extra pain-ness to explain. There's no "essence", no matter how much our self-modeling makes us insist that there is. It is the insisting we need to explain! This is a problem for neuropsychology, neuroscience and evolutionary psychology to explain. Not for physicists or armchair philosophers.
Try to conceive of pain that does not make you scream, cringe, or say that you are in pain, and that does not impinge on your thought process. It has intrinsic awful pain-ness to it, but it is not doing anything. You sure that makes sense?
I'll also point out that it is not a coincidence that redness and pain come up all the time in these qualia discussions. It's because we've evolved to react strongly to these stimuli. Almost nobody seems mystified by the greyness of grey or the feel of a barely noticeable touch. If you're more mystified by redness than greyness, that is because of what red stimuli DO to you, their functionality.
McFadden seems to me to 1. propose a solution to a problem that does not exist and 2. not have a theory of consciousness at all.
I'm not saying electromagnetic fields are irrelevant to the brain's workings. I'm no physicist, but it seems to me electromagnetic fields are everywhere, and that they are crucial in the behaviour of ions fluxing in and out of neurons. But that's just physical mechanics. To me it doesn't matter if he's right that electromagnetic fields are significant in ways that others have not anticipated. That still doesn't bring us any closer to explaining consciousness in some non-computational, non-functional way.
If his theory is correct, and it is something over and above what ordinary physics predicts, then it is in principle possible to detect that ordinary physics is being violated in brains. In such a case, I still see no proposal of what consciousness actually IS, and how it gets translated into information processing and expressed in language.
If his theory, on the other hand, is to be interpreted merely as a proposal of what kind of physics-compliant phenomena are playing out in the brain, then that fits neatly into computational functionalism. No sensible functionalist claims that we have the complete picture of all the physical, biochemical, neuroelectric mechanisms in brains! We're discovering more every day. But none of it implies that functionalism is wrong.
"Functional computationalists maintain that your brain properly processes it into new information, and it’s this processing from the first information to the second that creates what you thus feel."
Again, no that is not what we're claiming.
"Therefore if there were marks on paper that were correlated well enough with the information that your whacked thumb sends your brain, and it were scanned into a computer that processes it to then print out paper with marks on it that correlate well enough with the information that resulted from your actual brain’s processing, then something in this paper to paper conversion would thus feel what you do when your thumb gets whacked."
I'm not sure I'm following you. First of all, I don't see how adding electromagnetic fields is any less magical. That sure doesn't explain how this quale pops into existence.
Second, as stated, I don't think pain is a thing in the way you envision it. You're not using the functionalist view on what pain is when you're rejecting it.
If your computer contains an accurate model of the whole physical system of my whole body and can compute how that would evolve so as to predict exactly how I would express my pain - I suspect that would take many orders of magnitude more compute than exists in the world combined - and the second paper contains the entire physical state of me at the end of that expression, then yes, there is an experience of pain taking place. However, if two computers do this in parallel, then there are NOT two experiences of pain taking place. I add this because this is crucial to my position, which is easily misunderstood.
If you think that pain is a mental object, then yeah, that sounds totally crazy. But I don't see how fancy cells inside skulls or electromagnetic fields would make it any less crazy.
As far as I can see, you either have to propose violations of ordinary physics, or you must accept functionalism.
I would also be curious on how you respond to this little challenge I put together! https://substack.com/@markslight/note/c-106174443?utm_source=notes-share-action&r=3zjzn6
I have strong Swedish origins on my father’s side; he’s now 83. He’s from what used to be an entirely Swedish farming town called Lindsborg in the heart of Kansas. When he was a kid it was illegal to sell alcohol there and only people of their particular religion were permitted to stay in town overnight. Today it’s instead a quaint Swedish-themed tourist and still farming town on the highway that adds a bit of culture to the land in general. It was my father’s grandfather Sven that actually came from the old country. I think I’ve heard however that he shortened a surely far more elaborate name down to “Borg” so that he might fit in better here. I thank him for that! “Mark Slight” seems pretty English too so I presume it’s a pseudonym that works better in English-speaking chat rooms rather than your actual name. When in Rome…. But then I also have an online friend who goes by “Matti Meikäläinen”, which I’m sure you know should not be his actual name. It’s all fine to me though.
In any case it’s nice to meet you — I enjoy speaking with people about this stuff who are clearly more intelligent than I happen to be. We’ll surely disagree a great deal here, but hopefully in intelligent ways that should be enjoyable for a while, or maybe even much longer. Of course we all like to think that we’d change our minds given strong evidence of error. That does seem quite rare though. Instead this sort of thing seems to be played as a competitive sport to potentially win or lose. It’s just that unlike tennis, unfortunately clear rules or judges don’t exist to demonstrate who’s winning and who’s losing!
I wonder if you’d say that your position is inspired by anyone specifically? Dennett comes to mind, or even his apparent successor, Keith Frankish? I have the sense that much of this goes back to Alan Turing’s Imitation Game. If you can’t tell whether or not the computer that you’re speaking with happens to instead be a human, then that computer must be armed with educated human grade consciousness. And if such consciousness can so easily reside in highly advanced but still standard computers, then there should be no problem porting such consciousness across the internet, or to conscious robots in space that are thus able to do what biological things cannot. Also people today have great sci-fi fun by thinking about how a given person’s consciousness might one day move from biological existence into technological machine existence so that he/she/it needn’t ever die (or perhaps more appropriately, become permanently erased). Furthermore this perspective incites massive fears that the conscious computers that we build will become “super intelligent” and therefore we humans will become irrelevant to these theorized amazingly powerful beings. From what I can tell this is all based upon the simple presumption that things are conscious specifically because humans think that they function like they’re conscious (so Alan, thanks for infecting us with that nonsense!). I think the whole thing will come crashing down once a more scientifically tractable position gains empirical validation.
You’re entirely correct that I consider pain to be something that “ultimately” comes online by some means. This is to say that it has causal emergence, or the very thing that I presume of everything else that exists. And how could I know that there must be such a thing as “pain”? Because it hurts me, or the very idea that’s usefully defined as “pain”. If you don’t believe in pain then I guess you go beyond simple illusionism (people who supposedly reject the existence of spooky conceptions of pain) to eliminativism (people who reject the existence of pain itself). I don’t know what to do with this second group. As I recall Suzi got into this regarding the Churchlands when I first began over there last year, and I also talked with someone fun who opposed me on the matter. I guess it’s something like: our various terms like “pain” are themselves so wrong that they can’t be salvaged. But how might humanly fabricated terms regarding what we feel be impossible to usefully define? So here I just smile at more of the same academic nonsense that continues to fail humanity.
Intrinsic pain awfulness? Yes, I’m quite sure that this makes causal sense. I’m not in the business of telling causality how it ought to work, but I can certainly say that the awfulness of existing in pain should exist before I react to feeling such awfulness. Theoretically evolution took valueless biological robots, and when the right chance came it transformed an epiphenomenal value dynamic into a functional mode of what we now call “consciousness”.
On McFadden’s theory, it’s definitely both computational and functional in any literal sense. If you’re interested then I’ll tell you more about it. I believe that I’ve even devised a pretty reasonable way of empirically demonstrating that his theory should be either true or false (that is, should anyone with the funds want to check it out; not that I think it would be very costly to try). Note that since there is no known physics of consciousness, McFadden’s theory couldn’t possibly violate any established physics. Only after the empirical validation of a theory like his would it be possible for something to violate that particular physics.
So on to your challenge: 1) No I don’t consider grey “less quale” than red. 2) Given that I consider the term “conceivable” in line with “imaginable”, and therefore to include all of the magical nonsense that I can imagine, I wonder if you’d rather replace it with “metaphysically possible”? In that case do I consider your and my reds essentially the same? In one sense yes I do. The validation of McFadden’s theory would say that the experience of red exists under certain specific EMF parameters. But then red also ought to trigger memories and such in one person that are different from another’s, so not the same in that sense. 3) Here the redness would be considered input to the conscious form of function whereas reaction would be output. So they should be two sequential steps in the evolved form of conscious function. 4) When English-only speakers hear Swedish terms they obviously don’t get the meaning that Swedish speakers do. So the exact same sounds should create separate effects in that regard for these different people. 5) Let’s go with “metaphysically possible” here again given my naturalism rather than just “conceivable” — I can conceive of all sorts of bullshit that says nothing about our world. And like before, if red is empirically demonstrated to exist under certain parameters of neurally produced EMF, then it should feel different than grey. So it’s somewhat like the smell of coffee is different than the smell of roses — different perceived chemicals here and different perceived radiation there. 6) Yes the world should be less colorful when there is no color perceived. 7) People who have difficulty distinguishing between certain colors obviously have that specific impairment to deal with. But for other colors the same essential physics should be working for all of us that do see them, and so there is no such impairment.
What you say here gives me some hope that if you were to grasp McFadden’s proposal, then you’d also consider it consistent with functional computationalism. Thus just as he blew me away in 2020, you might also then be blown away. In that case however you’d have to acknowledge that consciousness would indeed be made of something causally specific, and so would be just as tangible as all else that’s known, or the very thing that you’ve been taught to dispute. So maybe that would be a dealbreaker. If it were highly validated empirically that consciousness exists in the form of certain parameters of electromagnetic field, however, I do presume that this would change humanity no less than natural selection, relativity, the atomic model, and so on. I hope this would even incite a great revolution in the pathetically weak field of psychology.