
Thread: Social Contagion theory

  1. #1
    Council Member Nat Wilcox's Avatar
    Join Date
    Jul 2007
    Location
    Houston, Texas
    Posts
    106

    Default I don't know social contagion theory,

    but there is (what seems to be) a related collection of theories of imitative behavior, "herding" and so forth in decision and game theory, plus an experimental literature that looks at it.

    You can think of imitation as something a reasonable but weakly informed person might do, even when her own weak information, considered alone, recommends non-imitation. Suppose person 3 has weak information favoring action Y over action X. But person 3 also observes that persons 1 and 2 have chosen to take action X rather than Y. Suppose also that person 3 believes that her own information is no better than that of persons 1 and 2, and also believes that her own values and/or goals are not very different from those of persons 1 and 2. Then it may be reasonable for her to completely ignore her own information and imitate persons 1 and 2.

    Now, let person 4 show up, also in the same situation as person 3 was. He sees three people taking action X over Y. And so he too ignores his own information. And so forth. Decisions "herd together" and it becomes reasonable for all subsequent weakly informed decision makers to ignore their own information. Fittingly, the theorists call this an "information cascade."
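
    For readers who want to see the counting logic spelled out, here is a minimal Python sketch (my own toy construction with illustrative numbers, not a model taken from any particular paper): each person gets a private binary signal of precision q, observes all earlier choices, and picks the action favored by the balance of evidence, breaking ties with her own signal.

        import random

        def cascade_run(n_people=10, q=0.7, seed=1):
            """One run of a toy sequential-choice model with a binary true state."""
            rng = random.Random(seed)
            truth = rng.choice(["X", "Y"])
            choices = []
            for _ in range(n_people):
                # Private signal: matches the truth with probability q > 1/2.
                signal = truth if rng.random() < q else ("Y" if truth == "X" else "X")
                # Balance of evidence: read each earlier choice as one inferred signal
                # and add your own.  With the "follow your own signal on a tie" rule,
                # this simple count picks the same action as a full Bayesian
                # calculation in this symmetric setup.
                score = choices.count("X") - choices.count("Y")
                score += 1 if signal == "X" else -1
                if score > 0:
                    choices.append("X")
                elif score < 0:
                    choices.append("Y")
                else:
                    choices.append(signal)  # tie: go with your own information
            return truth, choices

        print(cascade_run())  # once two early choices line up, everyone after imitates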

    Interestingly, "reverse cascades" can form when everyone receives only noisy and weak information...these are cascades where everyone takes the "wrong" decision..."wrong" in the sense that, if there was some alternative way to pool everyone's (individually ignored) information publicly, it would be clear that everyone had been fooled into herding on the wrong decision.

    As I mentioned, there is some experimental literature on information cascades if anyone is interested in them. I suspect that, for the purposes of this group, information cascades and other kinds of "reasonable herding" are most useful as an idea to keep in mind--simply, that there may be "good reasons" for any actor in a social network to imitate or herd, so that it may be a difficult thing to fight or alter.

    It also suggests that one way to break cascades and herds is to supply alternatives to inferring information from the observation of decisions...alternatives that allow for all individuals' weak bits of information to be combined publicly. The whole problem with cascades is that once enough decisions line up so that people ignore their own information in making their own decision, future decisions become uninformative.
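
    A quick way to see both the reverse cascade and the value of pooling is a small Monte Carlo comparison (again a toy construction of mine, with purely illustrative parameters): how often does the herd end up on the wrong action, versus how often would a simple majority vote over everyone's private signals get it wrong?

        import random

        def compare(n_people=40, q=0.65, n_runs=20000, seed=0):
            """Toy comparison: the herd's final choice vs. a public pool of private signals."""
            rng = random.Random(seed)
            herd_wrong = pool_wrong = 0
            for _ in range(n_runs):
                signals, choices = [], []
                for _ in range(n_people):
                    s = 1 if rng.random() < q else -1      # +1 means "signal favors the truth"
                    signals.append(s)
                    score = sum(choices) + s               # earlier choices plus own signal
                    choices.append(s if score == 0 else (1 if score > 0 else -1))
                herd_wrong += choices[-1] != 1             # the herd locked onto the wrong action
                pool_wrong += sum(signals) <= 0            # pooled majority vote (ties count as wrong)
            return herd_wrong / n_runs, pool_wrong / n_runs

        herd, pool = compare()
        print(f"herd wrong: {herd:.1%}   pooled signals wrong: {pool:.1%}")

    With these illustrative numbers the herd lands on the wrong action several times more often than the pooled signals would, which is the practical case for finding ways to combine people's private information publicly.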
    Last edited by Nat Wilcox; 07-05-2007 at 09:20 PM. Reason: little errors...whoops

  2. #2
    Council Member Dominique R. Poirier's Avatar
    Join Date
    Jun 2007
    Posts
    137

    Default

    Nat,
    All you say seems to be inspired by observations about the stock market. Isn’t that so?

  3. #3
    Council Member Nat Wilcox's Avatar
    Join Date
    Jul 2007
    Location
    Houston, Texas
    Posts
    106

    Default Dominique,

    Yes and no. The first theoretical paper I know of did indeed have some finance researchers amongst the coauthors; it's definitely right to say that the statistical logic of cascades partly emerged from thinking about fads, fashions, bubbles and crashes in asset markets. The funny thing, though--as with many such things--is that the formal logic of a cascade doesn't precisely apply to an asset market (or at least, not to all of it), so that there is a bit of a disconnect between the kinds of phenomena that inspired the cascade idea and its formal logic.

    The reason for this is that the "discreteness" of the observed decision (the "choose X or Y" scenario I described) in a true cascade is crucial to its logic. In a situation where people observe and choose bids and asks, which are roughly continuous, person 3 can indeed reveal new information by the bid or ask she submits, although she would of course sensibly pay attention to, and be influenced by, the previously observed offers of persons 1 and 2. But this is a crucial difference, because person 3 (and then person 4, and so on) will continue to reveal new and useful information by their actions. What is uniquely interesting about true information cascade situations is that there comes a point in them where everyone ceases to act at all on their own information.

    Inference from others' decisions is thus a situation where the quantal (or discrete) versus the continuous turns out to be a crucial distinction (theoretically speaking, anyway).
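
    One rough way to see the distinction (a toy sketch of mine, not the asset-market model itself): if each person had to post a roughly continuous report, say her posterior probability that X is the better action, later observers could recover her private signal from it and the public belief would keep absorbing new information; with only a binary X-or-Y choice, the action stops depending on the private signal once the public evidence outweighs it.

        import random

        def bayes_update(prior_x, signal, q):
            """P(state is X) after one binary private signal of precision q."""
            like = q if signal == "X" else 1 - q
            return like * prior_x / (like * prior_x + (1 - like) * (1 - prior_x))

        rng, q = random.Random(2), 0.7
        public = 0.5                                      # shared prior; the true state here is X
        for person in range(1, 7):
            signal = "X" if rng.random() < q else "Y"     # private signal, correct w.p. q
            belief = bayes_update(public, signal, q)      # continuous report: post the posterior
            print(f"person {person}: signal={signal}, posts belief {belief:.3f}")
            # Because the posted number is (roughly) continuous, observers can invert it
            # and recover the signal exactly, so the public belief keeps moving.  With a
            # binary choice instead, the action would stop tracking the private signal as
            # soon as the public belief outweighed one signal -- and then freeze: the cascade.
            public = belief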

    The Wikipedia entry linked below, as you will see, is a bit internally confused (it first points out how crucial the discreteness of the decision is to the logic, but then goes on to assert some connection to asset markets, where decisions are not discrete). But it has a lot of good links to large bibliographies. You can see in these bibliographies that there have been many applications to technology adoption, voting, etc.

    http://en.wikipedia.org/wiki/Informational_cascade

  4. #4
    Council Member Dominique R. Poirier's Avatar
    Join Date
    Jun 2007
    Posts
    137

    Default Nat,

    There is something that disturbs me in all this, and I will explain why.

    The fact that some number X of persons chose the wrong way because they tend to rely first on what they see, rather than on what they have been told, cannot be summed up in a unifying branch of science or theory we would name “information cascade.”

    In other words, I perceive it as a deductive fallacy.

    Game theory doesn’t apply to your problem, since there is no game in that case. If the first person takes the wrong way, it doesn’t change the value of the winning prize, according to the rules you set up. Persisting, with a bit of humor, in using game-theory terminology, one might say that this case is a “multi-player unlimited-sum non-game.”

    Another element missing from the problem, if it is to be relevant to game theory, is the notion that a “win” implies a “loss” for others. The actors in that scenario may win or lose individually, and it doesn’t entail further consequences for the others. In this sense, if we want to find solutions through hard sciences such as logic or mathematics, we might find them in the theory of gambling or in statistical logic. For when person 3 chooses to rely on what he witnessed instead of what he learned or was told before, he is simply gambling!

    Let me justify my point.

    Consider for a while that person 3 is a “player” I’ll name P3, and that the previous actions of persons 1 and 2 may be treated as “events” I’ll name E1, E2, and so on up to En if we eventually consider persons 4, 12, 97, or more.
    Equally, I temporarily consider that the cues person 3 has been provided with constitute an event too, since notions of space, time, and history apply in the same way (he was told or learned this information somewhere, at some time). So I’ll name this last event E0. The sole difference between E0 and the events E1, E2, and En is a formal one: the former is a told event of uncertain value, while the latter are seen events of equally uncertain value.

    In all cases, E1 and E2 may be considered randomly generated events, since there is no clear-cut indication that these previous events constitute reliable sources of information in themselves.
    If P3 chooses to rely on E1 and E2, then he relies on assumptions, since E1 and E2 constitute information discrepant with E0. In other words, he is gambling; and his choice to take the left (wrong) way because, at least, he witnessed E1 demonstrates that he acted in compliance with the bandwagon effect, or with the human tendency toward mimicry. If mimicry or the bandwagon effect apply to our problem, then we must turn to other fields, called crowd behavior, mass psychology, or even consumer behavior, all of which provide satisfactory answers to our question. And since there is some probability that P3 may choose to rely on E0 and take the right way, statistical logic applies too, as it applies to consumer behavior.

    Since E1 and E2 are sources of information whose value is similar to that of E0, P3 is confronted with the same situation a consumer is in, within the frame of another discipline called marketing and communication.
    From the standpoint of this other discipline, which I do not call a science, the greater the number of occurrences of a given message A (A meaning En, since it may encompass E1, E2, and more), the greater the odds that P3 (or Pn, if the case arises) will choose A instead of B (B meaning E0). This is no mere theory, since applied marketing and communication prove it on a daily basis.
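
    If one wanted to write that repetition claim down explicitly, a toy version might look like the following (entirely my own illustrative construction, not a standard marketing formula): each exposure to a message adds a fixed weight, and the odds of choosing A grow with the weight difference.

        import math

        def p_choose_a(exposures_a, exposures_b, w=1.0):
            """Toy logistic choice rule: more exposures to A raise the odds of choosing A."""
            return 1.0 / (1.0 + math.exp(-w * (exposures_a - exposures_b)))

        for n in range(1, 5):
            print(f"{n} exposure(s) to A vs 1 to B -> P(choose A) = {p_choose_a(n, 1):.2f}")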

    I see another way to tackle this problem, which consists in turning the situation upside down so as to make it relevant to the realm of game theory. In that case, E1, E2, and En would be summed up as a single message sent by a notional player we would name X; and, equally, E0 would be a message sent by a second player we would name Y. The stake would be the number of persons successfully convinced (and so “won over”) by each of these two players.

    Well, I could go on with other examples, possibly. But all we would learn from them is that there is no way to find solutions through a new discipline we would name “information cascade,” since such terminology would claim precedence over other, already well-known fields that provide satisfying and verifiable explanations.

    Unless I missed something at some point, that’s the way I see it, Nat.

    Regards,
    Last edited by Dominique R. Poirier; 07-06-2007 at 10:11 AM.

  5. #5
    Council Member Dominique R. Poirier's Avatar
    Join Date
    Jun 2007
    Posts
    137

    Default

    Nat,
    I forgot to say one thing. In the discipline of communication, well-identified behavioral patterns tell us that P3 is more likely to choose the left way (the wrong one) because:

    - he will be more receptive to the most recent messages (so E1 and E2);
    - the message carried by E1 is repeated (E1 + E2), and so it has the advantage of repetition over the message E0;
    - and the bandwagon effect adds to the former two.

  6. #6
    Council Member Nat Wilcox's Avatar
    Join Date
    Jul 2007
    Location
    Houston, Texas
    Posts
    106

    Default Dominique,

    A couple of observations.

    In the formal theory of games, there is no inherent restriction of the payoff functions (or goals, or values) of the players to be in strict conflict (e.g. if I win you lose). What constitutes a game from the perspective of the general theory is much less restrictive than that. There can be identity of interests; there can be a mixture of conflict and identity; and there need be no relationship at all of interests or values per se. But the formal theory of information cascades does not stand or fall on what class of theory we say it belongs to. Call it an inference situation if you like.

    It's not right for me to waste space here laying out the theory in all its formal detail...it looks like showing off, I think, and other people have done so in publicly available places. Here, for instance, is a very accessible description:

    http://faculty.wm.edu/lrande/links/cascade_handbook.pdf

    Now, to the central matter. Does the theory of an information cascade depend on assumptions about the behavior of the individual actors in its sequence? Of course it does. Does it depend on the assumptions that the individual actors make about one another? Yes, of course. Broadly speaking, the relevant assumptions are (i) reasonableness (the actors make roughly correct probabilistic inferences); (ii) decisiveness (the actors choose without error on the basis of their inferences); and (iii) common knowledge of (i) and (ii), that is, that the actors believe that (i) and (ii) are true of everyone involved in the sequential interaction (this is very necessary).
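
    One worked instance may make this concrete (illustrative numbers of my choosing, under the symmetric binary-signal version of the setup): suppose each private signal matches the true state with probability q, and person 3 has inferred X-signals from persons 1 and 2 while holding a Y-signal herself.

        # Worked instance of (i)-(iii) with symmetric binary signals of precision q.
        q = 0.7   # illustrative: each private signal matches the truth with probability q

        # Person 3's evidence: inferred signals X, X from persons 1 and 2, plus her own Y.
        p_evidence_if_x = q * q * (1 - q)        # P(X, X, Y signals | true state is X)
        p_evidence_if_y = (1 - q) * (1 - q) * q  # P(X, X, Y signals | true state is Y)
        posterior_x = p_evidence_if_x / (p_evidence_if_x + p_evidence_if_y)

        print(posterior_x)   # works out to q (0.7) > 0.5, so she rationally chooses X
                             # even though her own private signal favored Y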

    Are there other bodies of theory about similar phenomena that deny one or more of these assumptions? Yes, of course. But, denying any of those assumptions is also an assumption. Since these are assumptions about individual cognition and beliefs--not the kind of thing we can inspect by checking tattoos on people's foreheads--in what sense is it helpful to point out that one theory depends on assumptions that are difficult to verify? That is true of any theory that is about cognition and/or beliefs.

    I personally think that it is a pretty sterile exercise to argue a priori about whether people are in general reasonable or believe that each other are so, and hence pretty pointless to try to rule in or out any bodies of theory on that basis. From the practical viewpoint of this community of applied science, different theoretical viewpoints will prove to be the best descriptive ones in different situations--I have no doubt of this. Only fools would not avail themselves of every theoretical framework (that is empirically supported) they can get their hands on. So if (say) Marct has an anthropological perspective on herd behavior that is "nonreasonable," I welcome this and hope everyone here does.

    The other thing here is that, as a matter of strategy in cognitive sciences, one needs reasonable benchmarks in order to understand what is descriptively going on--to classify failures of reason--in order to diagnose difficulties and suggest remedies, or to exercise control in a way that helps matters as opposed to making them worse. If a theory of mimetic behavior is NOT based on reason, then how does it deviate from one that is? If we don't know the answer to that, then it is difficult to say what individuals ought to do that they are not now doing, or indeed what we think a "good" state of affairs would be, socially speaking. And without that, it is hard to think about policy or intervention in a reasonable way.

    What I mean to say is that (to use a word I highly dislike, because it is freighted with all kinds of bad associations) rational models are, even if not descriptive of what actually occurs, necessary benchmarks for understanding nonrational ones and then specifying policy or intervention. In this respect, I think it is very counterproductive to try to rank-order rational and nonrational approaches to human cognition and social behavior. They are both necessary and they work hand-in-hand.

    David Marr, one of my cognitive science heroes, once said "trying to understand perception by only looking at neurons is like trying to understand flight by only looking at feathers. It just can't be done. To understand flight, one has to understand something of aerodynamics. Only then do the shapes of wings and feathers begin to make some sense..." You can think of the theory of aerodynamics as "the rational model of flight." It helps someone who studies real birds in two ways: It helps the observer sympathetically view the design of wings and feathers as solutions to the problem of flight, but at the same time it helps that observer classify failures in those designs. Marr was a specialist on the artificial intelligence of vision. For him, for instance, you need a formal mathematical model of stereopsis in order to understand the information-processing problem (and the solutions our visual systems use) of constructing 3-d representations from the angular disparity of the 2-d images that our two eyes get. Then you can appreciate not only what algorithms might do the job, but you can also compare the ways in which those different algorithms occasionally fail to do the job, to what humans actually report when they look at things.

    Finally, I think that another way of appreciating models based on "reasonableness" (a term I like much better than "rationality") is that it is a formal way of practicing the anthropologists' principle of sympathy or charity, in which the observing social scientist tries to understand how actors' behavior is reasonable or sensible from their own viewpoint (I would be interested in what the anthro folks here think about this...it is an idea I have had for awhile but I do not claim it is original or that it is clearly correct). Is it the only way? Surely not; but it is one way. Can we always achieve a full rational reconstruction? No, surely not; but trying helps us see (what could be) the method in other people's madness. If we are contemplating interventions into individual behavior or social processes, it is helpful to know that as a precaution: We don't want to inadvertently make things worse from actors' own subjective perspectives of what they are doing and why they are doing it. So it is helpful to know what sense and reason there might be in what they are doing. Does that make some sense?

  7. #7
    Council Member Nat Wilcox's Avatar
    Join Date
    Jul 2007
    Location
    Houston, Texas
    Posts
    106

    Default Dominique,

    do remember that those of us who are experimental economists, like myself, create situations for study in the lab that precisely match the specific instantiation of a theory that we wish to study, except of course that the actors are real humans and not the idealized reasonable agents of the theory.

    So for instance, we could create a situation where the "truth" is "nonstationary" in which case reasonable solutions to the problem would involve putting greater weight on more recent observations.

    What is "correct" of course depends on the true underlying stochastic processes that govern the true state and generate private signals. But that is simply to say that one can create various versions of the inference situation specified in the information cascade story, not that the specific story is deductively flawed. It would be deductively flawed if it didn't follow logically from its own assumptions--but it does. Put in different assumptions about the underlying stochastic process governing the truth and/or the generation of private signals, and the same Bayesian reasoning will produce different recommendations about decisions--possibly not resulting in the phenomenon we call an information cascade. But that doesn't "disprove the information cascade story." It merely means that under different assumptions about the underlying processes, cascades shouldn't occur. And that becomes a useful observation for testing the theory in a laboratory (for obvious reasons).

  8. #8
    Council Member Dominique R. Poirier's Avatar
    Join Date
    Jun 2007
    Posts
    137

    Default A matter of trust, if I may say so...

    Nat,
    Once more, regretfully, there is something that troubles me in your way of tackling the matter at hand, which, for the record, has shifted to information cascades.

    Actually, what worries me is that you seem quick to challenge or question many research tools and approaches that countless renowned scientists have thoroughly, and successfully, put to the test.

    I willingly agree that the general and fast-paced evolution of science sometimes obliges us to reconsider certain approaches, or even certain branches of science in their entirety. Some wonder, as a striking example, whether our conception of mathematics fits the study of the origin of the universe; what we usually call the “Big Bang,” in other words. The reputed physicist Stephen Hawking attempted to envisage a reversal of time in order to model his theory of the "Big Crunch."

    So far, so good.

    But, in my own opinion, it would be unwise to deduce from the aforesaid example and numerous others that we should be ready to challenge any of our landmarks each time we attempt to arrive at a solution. Stephen Hawking attempted to challenge only one of those landmarks; no more. He did it because he was probing a hypothesis that might allow us to know whether the universe will expand endlessly or not, and to investigate the problem of the “missing matter.” And he simply failed! Time is not reversible, and purely rational considerations helped him, and us, understand why.

    I hope this example will help you understand why your answer surprises me.
    I can subscribe to your efforts to demonstrate the possible validity or interest of a new approach to a specific problem. They are laudable, even courageous. But your readiness to challenge or question any previous scientific approach that crosses your path cannot but disturb me at some point.

    For the record, the theory of games of strategy may be described as a mathematical theory of decision making by participants in a competitive environment. In a typical problem to which the theory applies, each participant can bring some influence to bear upon the outcome of a certain event; neither any single participant by himself nor chance alone can determine the outcome completely. The theory is then concerned with the problem of choosing an optimal course of action that takes into account the possible actions of the other participants and the chance events.

    In my previous attempt to rationalize or abstract your information cascade problem, I could reduce things to a dilemma in which "P3" (i.e., person 3) could set up a payoff matrix in an attempt to find the right solution. But since the way you introduce things implies that P3 doesn't know that he might be eaten by a bear, there is no significant threat factor in all this to make it a game capable of solving other similar problems.

    Like most branches of mathematics, game theory has its roots in certain problems abstracted from life situations. The situations are those which involve the necessity of making decisions when the outcomes will be affected by two or more decision-makers. Typically the decision-makers’ preferences are not in agreement with each other. In short, game theory deals with decisions in conflict situations.

    A key word in what I have just said is abstracted. It implies that only the essential aspects of a situation are discussed in game theory rather than the entire situation with its peculiarities, ambiguities, and subtleties.
    If, however, the game theoretician is asked “What are the essential aspects of decision in conflict situations?” his only honest answer can be “Those which I have abstracted.”
    To claim more would be similar to maintaining that the essential aspect of all circular objects, for example, is their circularity.

    As an example justifying my concern: you question no less than the whole field of game theory when you begin your answer with:

    “In the formal theory of games, there is no inherent restriction of the payoff functions (or goals, or values) of the players to be in strict conflict (e.g. if I win you lose). What constitutes a game from the perspective of the general theory is much less restrictive than that. There can be identity of interests; there can be a mixture of conflict and identity; and there need be no relationship at all of interests or values per se.”

    In other words, my criticism is that if you do not abstract the payoff functions, and attempt instead to introduce notions such as "identity of interests" or a "mixture of conflict and interest" into a payoff matrix, then I foresee great difficulties in your endeavor to help others with such questions.

    What I am saying holds in game theory on a general basis, but, as I previously explained, game theory will hardly apply to your example of information cascades as long as you do not introduce at least two notional players who send messages.

    Then, leaving the subject of game theory, you say:

    “Are there other bodies of theory about similar phenomena that deny one or more of these assumptions? Yes, of course. But, denying any of those assumptions is also an assumption. Since these are assumptions about individual cognition and beliefs--not the kind of thing we can inspect by checking tattoos on people's foreheads--in what sense is it helpful to point out that one theory depends on assumptions that are difficult to verify? That is true of any theory that is about cognition and/or beliefs.

    I personally think that it is a pretty sterile exercise to argue a priori about whether people are in general reasonable or believe that each other are so, and hence pretty pointless to try to rule in or out any bodies of theory on that basis. From the practical viewpoint of this community of applied science, different theoretical viewpoints will prove to be the best descriptive ones in different situations--I have no doubt of this. Only fools would not avail themselves of every theoretical framework (that is empirically supported) they can get their hands on. So if (say) Marct has an anthropological perspective on herd behavior that is "nonreasonable," I welcome this and hope everyone here does.”


    From these two paragraphs on, you leave the object of our reflection to engage in considerations of epistemology, seemingly in order to make a tabula rasa of any scientific landmarks we might use with regard to your concern: information cascades. Thus your discussion shifts distinctly toward philosophy until the end of your answer.

    In the end you simply besiege rationality.

    “Finally, I think that another way of appreciating models based on "reasonableness" (a term I like much better than "rationality") is that it is a formal way of practicing the anthropologists' principle of sympathy or charity, in which the observing social scientist tries to understand how actors' behavior is reasonable or sensible from their own viewpoint (I would be interested in what the anthro folks here think about this...it is an idea I have had for awhile but I do not claim it is original or that it is clearly correct). Is it the only way? Surely not; but it is one way. Can we always achieve a full rational reconstruction? No, surely not; but trying helps us see (what could be) the method in other people's madness. If we are contemplating interventions into individual behavior or social processes, it is helpful to know that as a precaution: We don't want to inadvertently make things worse from actors' own subjective perspectives of what they are doing and why they are doing it. So it is helpful to know what sense and reason there might be in what they are doing. Does that make some sense?”

    Nat, how in the hell do you expect to scientifically and rationally solve a problem, and thus help others in their endeavors, if instead you drive them toward doubt? It just happens that several scientific and other disciplines provide satisfactory answers to your information cascade problem, as I demonstrated in my previous comment. Thus we obtained the answer we were expecting or, at least, we knew which branches and specialties were likely to solve this kind of problem.

    With regard to your education, work, and professional experience, your role as a scientist is to provide answers, not questions…
