In today's post, we look at two-mode networks as a method of social network analysis, drawing on the Hanneman lectures. Breiger (1974) first highlighted the dual focus of social network analysis: individuals, through their agency, create social structures, and at the same time those structures constrain and shape the behavior of the individuals embedded in them. Social network analysis measures relations at the micro level and uses them to infer the presence of structure at the macro level. For example, the ties of individuals (micro) allow us to infer cliques (macro).
Davis's study showed that there can be different levels of analysis. It records ties between actors and events, which represent affiliations rather than memberships in a clique. By seeing which actors participate in which events, we can infer the meaning of an event from the affiliations of its actors, while also seeing the influence of the event on the choices of the actors.
Further, we can see examples of this macro-micro social structure at different levels. This is referred to as nesting: individuals are part of a social structure, and that structure can itself be part of a larger structure. At each level of nesting, there is tension between structure and agency, i.e., between the macro and the micro.
There are tools to examine this two-mode data, which involve finding both qualitative and quantitative patterns. Take an example where we look at donors' contributions to campaigns supporting or opposing ballot initiatives over a period of time; the data set has two modes: donors and initiatives. Binary data, recording whether or not a contribution was made, could describe what a donor did. Valued data could describe the relations between donors and initiatives on a simple ordinal scale.
A rectangular matrix of actors (rows) by events (columns) could describe this two-mode data.
This could then be converted into two one-mode data sets: an actor-by-actor matrix, where the strength of the tie between two actors is the number of times they contributed to the same side of the same initiatives, and an initiative-by-initiative matrix, where the tie between two initiatives is the number of donors they had in common.
To create actor-by-actor relations, we could use a cross-product method that takes each entry of actor A's row, multiplies it by the corresponding entry of actor B's row, and sums the results. This gives an indication of co-occurrence and works well with binary data, where each product is 1 only when both actors are present at the event.
Instead of the cross-product, we could take the minimum of the two values, which says that the tie is only as strong as the weaker of the two actors' ties to the event.
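Both projection methods can be sketched in a few lines. This is a minimal illustration with a made-up valued matrix (each cell counts a hypothetical donor's contributions to an initiative); the names and numbers are not from any real data set.

```python
# Hypothetical valued two-mode matrix: rows are donors, columns are
# initiatives, and each cell counts that donor's contributions.
A = [
    [2, 1, 0],  # donor "alpha"
    [1, 0, 3],  # donor "beta"
]

def cross_product_tie(a, b):
    """Sum of products across events; with binary data this counts
    the events where both actors are present (co-occurrence)."""
    return sum(x * y for x, y in zip(a, b))

def minimum_tie(a, b):
    """Sum of minimums: each event contributes only the weaker of the
    two actors' ties to that event."""
    return sum(min(x, y) for x, y in zip(a, b))

print(cross_product_tie(A[0], A[1]))  # 2*1 + 1*0 + 0*3 = 2
print(minimum_tie(A[0], A[1]))        # min(2,1) + min(1,0) + min(0,3) = 1
```

Applying either function to every pair of rows yields the actor-by-actor matrix; applying it to pairs of columns yields the initiative-by-initiative matrix.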
Two-mode data are sometimes stored in a second way, called the bipartite matrix. A bipartite matrix is one where the rows of the original matrix are added as additional columns and the columns of the original matrix are added as additional rows, so that actors and events are treated as social objects at a single level of analysis.
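The construction above can be sketched as a block matrix: the original rectangular matrix sits in the actor-rows/event-columns block, its transpose in the event-rows/actor-columns block, and the within-mode blocks are zero. The small matrix below is hypothetical.

```python
# Hypothetical rectangular actor-by-event matrix (3 actors, 2 events).
A = [
    [1, 0],
    [1, 1],
    [0, 1],
]
n, m = len(A), len(A[0])  # number of actors, number of events

# Bipartite matrix: (n + m) x (n + m), actors first, then events.
size = n + m
B = [[0] * size for _ in range(size)]
for i in range(n):
    for j in range(m):
        B[i][n + j] = A[i][j]  # actor-to-event block (upper right)
        B[n + j][i] = A[i][j]  # event-to-actor block (lower left, transpose)

for row in B:
    print(row)
```

The actor-by-actor and event-by-event blocks stay zero because ties run only between modes, which is exactly why actors and events end up at a single level of analysis.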
This is different from a bipartite graph, also called a bigraph, which is a graph whose vertices are decomposed into two disjoint sets such that no two vertices within the same set are adjacent (by adjacent, we mean joined by an edge). In the context of word-similarity extraction, we used terms and their N-gram contexts as the two partitions and used random walks to connect them.
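The defining property — no edge joins two vertices in the same set — can be verified by trying to 2-color the graph. Here is a sketch using a toy term/context edge list of my own invention; it is not the actual extraction data.

```python
from collections import deque

# Hypothetical bipartite edges between terms and their N-gram contexts.
edges = [("cat", "the_<w>_sat"), ("dog", "the_<w>_sat"), ("dog", "a_<w>_barked")]

# Build an undirected adjacency list.
adj = {}
for u, v in edges:
    adj.setdefault(u, []).append(v)
    adj.setdefault(v, []).append(u)

def is_bipartite(adj):
    """BFS 2-coloring: the graph is bipartite iff no edge joins two
    vertices of the same color."""
    color = {}
    for start in adj:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return False  # two adjacent vertices share a set
    return True

print(is_bipartite(adj))  # True: terms and contexts form the two disjoint sets
```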
I will cover random walks in more detail.