Problems with Searle’s “programs cannot think” argument:

1) Why is intentionality not considered a program in its own right? Is there something about beliefs and desires that cannot be encapsulated in a program?

2) The ethical ramifications of the Chinese Room thought experiment imply that the argument does not hold among macro-systems (e.g., Just War Theory, or the business/economic ramifications of following programmatic rules: buy low, sell high; military orders; etc.).

Could intentionality be a matter of pre-instantiated desires? It is well established in the behaviorist literature that an enormous amount of learning comes from negative and positive reinforcement. Arguably there needs to be some mental consideration of “Good” vs. “Bad,” which does not require any sort of semantic understanding. The formal syntax around “Good” and “Bad” can, and in most animals does, start off very simply: pain, bodily harm == bad; pleasure, satiety, removal of harm, removal of stress == good. The next step beyond this implementation of base desires is belief: what do you believe will satisfy your base desires, and what desires ought you to have to encourage their satisfaction?
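
As a minimal sketch (Python, with illustrative signal names and weights, none of them from Searle’s text) of how such a pre-semantic “good” vs. “bad” evaluator might look:

    # A purely syntactic "good vs. bad" evaluator: a lookup from bodily signals
    # to a valence score, with no semantic understanding involved.
    BASE_VALENCE = {
        "pain": -1.0,
        "bodily_harm": -1.0,
        "pleasure": +1.0,
        "satiety": +1.0,
        "harm_removed": +1.0,
        "stress_removed": +1.0,
    }

    def valence(signals):
        """Sum the valence of whichever signals are currently present."""
        return sum(BASE_VALENCE.get(s, 0.0) for s in signals)

    # valence({"pain", "satiety"}) -> 0.0, a mixed state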

As an addendum to the Chinese Room experiment, imagine that each script could result in two different Chinese phrases to return. The “correct” one results in the man inside the room receiving lunch; the “incorrect” one results in the man inside the room going without a meal. Now, we are using the man as a proxy for intentionality, but the man could very easily be replaced with a robot programmed to self-preserve, i.e., to remain turned on, as a function of electrical power.
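
A hedged sketch of this addendum, assuming the rulebook offers two candidate replies per input and the agent keeps some running estimate of which replies have led to lunch or power; choose_reply and the estimate table are hypothetical names:

    def choose_reply(candidates, reward_estimate):
        """Pick the candidate phrase with the highest estimated chance of lunch/power.
        The phrases are opaque symbols to the agent; unseen phrases default to 0.5."""
        return max(candidates, key=lambda phrase: reward_estimate.get(phrase, 0.5))

    # choose_reply(["谢谢", "再见"], {"谢谢": 0.9, "再见": 0.1}) -> "谢谢"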

To add to this, we can add a memory component (although Searle seems to imply memory is already a feature of the Chinese Room, since the man inside could internalize the structures and send out Chinese symbols). Each memory would consist of previous responses and whether they produced “good” or “bad” results.
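
A hypothetical memory component in the same spirit; nothing in it requires the agent to understand the symbols it stores, and it feeds the estimate table assumed above:

    from collections import defaultdict

    class Memory:
        """A log of previous replies and whether they produced 'good' or 'bad' results."""

        def __init__(self):
            self.outcomes = defaultdict(list)  # reply -> list of 1.0 ("good") / 0.0 ("bad")

        def record(self, reply, was_good):
            self.outcomes[reply].append(1.0 if was_good else 0.0)

        def reward_estimate(self):
            """Collapse the log into the estimate table used by choose_reply above."""
            return {reply: sum(results) / len(results)
                    for reply, results in self.outcomes.items()}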

Every component added thus far seems to be programmatic; in fact, each component can itself be designed as a Chinese room. We already have programs that handle memory in computers. We already have programs that handle self-preservation in computers.
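
To make the “each component is programmatic” claim concrete, the hypothetical pieces sketched above (valence, choose_reply, Memory) can be wired into a single loop; this is an illustration under those assumptions, not a claim about how any actual system is built:

    def chinese_room_step(script, question, memory):
        """Rulebook lookup (pure syntax) plus reward-guided choice among candidates."""
        candidates = script[question]
        return choose_reply(candidates, memory.reward_estimate())

    def feedback(memory, reply, signals):
        """Close the loop: score the bodily signals and record the result."""
        memory.record(reply, was_good=valence(signals) > 0)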

Ultimately this answer to Searle’s Chinese Room problem becomes a systems reply: together, the components represent consciousness, even though the man himself, as a formal syntactic engine, does not.

Searle seems to be focused on the emergent property of consciousness, and he conflates it with the necessity of intentionality. Yet intentionality is clearly capable of being modeled in a Chinese Room, and the Chinese Room is clearly capable of engaging with other Chinese Room structures. All it requires is desire, which can be instantiated by the program, and predictive capabilities, which are programs in themselves and can be treated as analogous to beliefs.

On a related note, the thought experiment as given requires the use of a Chinese phrasebook, not a Chinese dictionary: there is no mechanism allowed for correlating desire with the responses the room produces, so it is impossible by construction to introduce intentionality into the mechanism. A clearer model would offer multiple options per input and associate some form of desire and belief calculation with the choice among them. Searle implicitly claims that intentionality cannot be formal symbol manipulation, but all you have to do is parse the information and match it against desire-fulfillment functions. If we accept that learning comes from the enhancement of base intentional structures through memory, then it is clear that we can parse inputs and calculate the most desirable outcome.
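
One illustrative way to picture this “function matching”: each candidate option is scored by a set of desire-fulfillment functions and the highest-scoring one is returned (the names are mine, not Searle’s):

    def most_desirable(options, desire_functions):
        """options: candidate actions or phrases.
        desire_functions: callables scoring how well an option satisfies a desire."""
        return max(options, key=lambda option: sum(d(option) for d in desire_functions))

    # e.g. desires = [lambda o: 1.0 if o == "ask_for_food" else 0.0]
    #      most_desirable(["ask_for_food", "say_nothing"], desires) -> "ask_for_food"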

The argument then becomes: there are more intentional states than these, and not all intentional states can be reduced to base desires. I will accept that this is true, but the creation of new intentional states only comes from the fulfillment of base desires, which are themselves instantiated through evolutionary processes. It does not seem particularly difficult to suggest that intentional states can be construed as function matching.
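
A hedged sketch of how new intentional states might be derived from the fulfillment of base desires (roughly, secondary reinforcement; the threshold and names are illustrative):

    def derive_desires(fulfillment_rates, threshold=0.8):
        """fulfillment_rates: action -> observed rate of base-desire fulfillment.
        Actions that reliably fulfilled a base desire become desires in their own right."""
        return [
            (lambda option, target=action: 1.0 if option == target else 0.0)
            for action, rate in fulfillment_rates.items()
            if rate >= threshold
        ]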

If we take consciousness to be the sense of experience, then the man in the Chinese room does not have the “experience” of understanding Chinese. There is no phenomenal aspect to the program, unless we take the phenomenal to be a check on the state of systems as they interrelate with one another. To paraphrase someone famous, to know the color red is more than just knowing its formal symbols; it is to experience it. This appears to be Searle’s main point, lost in the definitions of intentionality: there is no mechanism of experience or feeling. No one would say that the simple compulsions represented in the intentionality Chinese Room mean that Searle in the original Chinese Room would know what it means to speak Chinese, nor would they say that the entire system knows what it means to experience something.

This much we know of consciousness: either Searle is right, and there is as yet a missing link in what is required to represent consciousness and to experience a phenomenon, or consciousness is simply a higher-order program that we have not pieced together yet.