
Discussion: Dynamics, information, representation

September 5th, 2011 by Thomas Buehrmann

Reply here to provide your challenge to the speakers of the first day’s general discussion. This should take the form of: i) a tweet-like question (maximum of 144 characters) and ii) a follow up explanation, abstract or set of bullet points with a minimum of 150 words and maximum of 300, in the following format:

Group Name
Short description of challenge in bold
Content of the summary or abstract or bullet points

 


7 Responses to “Discussion: Dynamics, information, representation”

  1. Charles Lenay says:

    Group 5

    How should we describe the constitution of the phenomenological experience of the agent?
    In the first talk, Randall D. Beer proposed two mathematical tools to describe the system and showed that they are equivalent. But this system is just a robot that makes categorizations. Are the two tools equivalent even for the description of the enaction of a phenomenological domain? (The categorization task is done without movement of the agent. Is it enactive?)
    In the second talk, the two levels of description proposed by Inman Harvey are perhaps sufficient to describe the “desire” of the robot, but this functionalist solution does not seem sufficient to understand the phenomenology of the agent.

  2. Matthew Egbert says:

    Group 1

    Autopoiesis defines a way to distinguish between the agent and its environment. How do we carry this important idea over into the dynamical or information theoretic analysis methods?

    Can information theoretic and dynamical analysis help to identify and understand the presence of an autonomous self? How? Are other ideas / mathematical notions necessary?

    In the concrete case of Beer’s model, we say variables x and y are “part of the agent” and w and z are not. This is a critical distinction. How do we make it in a principled manner that gels well with our natural concept of an agent?
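The point can be made concrete with a toy coupled system. The sketch below is purely illustrative (it is not Beer's actual CTRNN model, and all equations and coupling constants are invented): four variables evolve under mutually coupled dynamics, and nothing in the equations themselves marks the x, y / w, z boundary — the agent–environment partition is imposed by the analyst's labels.

```python
import numpy as np

# Toy sketch (not Beer's actual model): four coupled variables where we
# *label* x, y as "agent" and w, z as "environment". The equations are
# symmetric in kind; the partition lives only in the comments.
def step(state, dt=0.01):
    x, y, w, z = state
    # "agent" variables: internal dynamics plus sensory input from w
    dx = -x + np.tanh(y + w)
    dy = -y + np.tanh(x)
    # "environment" variables: own dynamics plus motor influence from y
    dw = -0.5 * w + 0.1 * z + 0.2 * y
    dz = -0.5 * z + 0.1 * w
    return state + dt * np.array([dx, dy, dw, dz])

state = np.array([0.1, 0.0, 0.5, -0.3])
for _ in range(1000):
    state = step(state)
```

One could equally well draw the boundary around x, y, w and call z alone the environment; the question raised above is what principled criterion (autopoiesis, closure, or some information-theoretic measure) privileges one partition over another.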

  3. Mario Villalobos says:

    Group 3

    Is there a preferred language to talk about what is really going on in the system?

    Randall Beer mentioned that both mathematical tools of DST and Information theory are neutral with respect to what is really going on in a specific system. But is one way of looking at the system perhaps more fruitful for understanding embodied cognition? Or should we be pluralistic with regard to descriptions? If there is a difference in appropriateness of application, doesn’t this reveal an ontological commitment?

  4. Kelly Vassie says:

    Group 4

    How can we scale Beer’s work up to model more complex systems, and what is the target – what do these things explain at a higher level?

    What can we explain now with these tools?

    Can the combination of dynamical systems and information-theoretic analysis be scaled up to more complex cognitive systems?

    Information theory, as presented, focuses on pairwise measurements of relations between the variables (units/elements) of the system.

    How would it scale up to systems with more units and more complex relationships?

    With these tools, is it possible to study something more complex than minimal cognitive behaviour?

    What is the explanatory force of these two complementary “lenses” or descriptive frameworks (e.g. mechanical, nomological, statistical)? What sorts of facts do these techniques help to reveal?
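To make the "pairwise measurements" point concrete, here is a minimal sketch of the kind of analysis in question: estimating mutual information between two discretized time series from a joint histogram. The variable names, the histogram estimator, and the choice of 8 bins are illustrative assumptions, not a specific published method; note that such pairwise estimates scale quadratically in the number of variables and miss higher-order dependencies, which is one face of the scaling question above.

```python
import numpy as np

# Pairwise mutual information between two signals, estimated from a
# joint histogram (illustrative sketch; binning choices are arbitrary).
def mutual_information(a, b, bins=8):
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()            # joint distribution p(a, b)
    px = pxy.sum(axis=1, keepdims=True)  # marginal p(a)
    py = pxy.sum(axis=0, keepdims=True)  # marginal p(b)
    nz = pxy > 0                         # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = x + 0.5 * rng.normal(size=5000)  # strongly coupled to x
z = rng.normal(size=5000)            # independent of x
# mutual_information(x, y) should clearly exceed mutual_information(x, z)
```

For a system of n variables there are n(n-1)/2 such pairs, and none of them capture dependencies that only appear between three or more variables at once.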

  5. Manuel Ebert says:

    Group 2

    How to extend the concept of SMCs?

    1: See what other phenomena (= processes) SMCs can be applied to (i.e. which fall into the domain of lawful relations we dub SMCs).
    - Also, what can NOT be explained by SMCs?

    2: Figure out what the necessary and sufficient conditions are for SMCs.
    - What’s the relation to intentions, values, motivation…?
    - On which levels of abstraction do SMCs occur?
    - Can SMCs be hierarchical?

  6. Athena Demertzi says:

    Group 6
    Title: Intentionality, action, and models as tools
    Questions:
    1. Do representations require at least an implicit form of intentionality?
    2. How far does DST help as a tool to understand representation?
    3. What value is the DST to biologists, psychologists, philosophers or …
    4. Are these kinds of minimal systems useful for making progress on radical enactivism?
    5. Can DST tell the full story of cognition?
    6. Empirical, theoretical, and future robotics: their potential for understanding cognition is under discussion.

  7. Summary of challenges:

    Group 1
    Can information theoretic and dynamical analysis help to identify and understand the presence of an autonomous self? How? Are other ideas / mathematical notions necessary?

    Group 2
    How to extend the concept of SMCs? See what other phenomena SMCs can be applied to. What can NOT be explained by SMCs?
    What are the necessary and sufficient conditions for SMCs? What’s the relation to intentions, values, motivation…? …

    Group 3
    Is there a preferred language to talk about what is really going on in the system? If there is a difference in appropriateness of application, doesn’t this reveal an ontological commitment?

    Group 4
    How can we scale Beer’s work up to model more complex systems, and what is the target – what do these things explain at a higher level?

    Group 5
    How should we describe the constitution of the phenomenological experience of the agent? Are IT and DST equivalent even for the description of enaction of a phenomenological domain? Is Harvey’s functionalist solution sufficient to understand the phenomenology of the agent?

    Group 6
    Intentionality, action, and models as tools: Do representations require at least an implicit form of intentionality? How far does DST help as a tool to understand representation? Are these kinds of minimal systems useful for making progress on radical enactivism? Can DST tell the full story of cognition?
