Fall 2007 Talk Series on

Networks and Complex Systems

Every Monday 6-7p, Wells Library 001 ~ Optional Dinner at Lennie's Afterwards

Description
This talk series is open to all Indiana University faculty and students interested in network analysis, modeling, visualization, and complex systems research.

A major intent is to cross-fertilize between research done in the social and behavioral sciences, research in the natural sciences such as biology and physics, and research on Internet technologies.

Links to people, projects, groups, students, courses and news related to complex systems and networks research at Indiana University are also available via the CSN web site.

Organizer
Katy Börner <katy@indiana.edu> Associate Professor of Information Science, SLIS, IUB.

Time & Place
Every Monday 6:00-7:00pm in the Wells Library (formerly Main Library) at Indiana University, Bloomington, Room 001, right after the Cognitive Science Colloquium Series. An optional dinner follows, 7-9p, at Lennie's.

Credit
Students interested in attending the talks for credit need to register for L600 (1 credit) with Katy Börner. The proposal form is here. Grading will be based on attendance of 8 talks (sign-up sheets will be provided) and a 4-5 page write-up, submitted at the end of the semester, that synthesizes the major points made by a subset of the speakers.

Previous Talks
Fall 2004 | Spring 2005 | Fall 2005 | Spring 2006 | Fall 2006 | Spring 2007

An evolving list of recommended readings is available. See also the Wikipedia entries on graph theory, small-world networks, power laws, complex networks, and self-organizing systems.

Related series
Cambridge Colloquium on Complexity and Social Networks organized by David Lazer at Harvard.
The Age of Networks speaker series organized by Noshir Contractor, UIUC & NCSA.

09/03 Faculty at Indiana University, Bloomington (IUB)

Overview of Network and Complex Systems Courses at IUB in Fall 2007

Other Related Courses that might NOT be taught in Fall 2007:


09/04 John M. Beggs, Biocomplexity talk on “A Phase Transition in Cortical Slice Networks”, 4-5 pm, Swain West 238

09/10 Steven Myers, Informatics, Indiana University

Wireless Router Insecurity: The Next Crimeware Epidemic

Abstract: The widespread adoption of home routers by the general public has added a new target for malware and crimeware authors. A router's ability to manipulate essentially all network traffic coming into and out of a home means that malware installed on these devices can launch powerful man-in-the-middle (MITM) attacks, a form of attack that has previously been largely ignored. Making matters worse, many homes have deployed wireless routers, which are insecure if an attacker in geographic proximity can connect to them over the wireless channel. Some have downplayed this risk by suggesting that attackers will be unwilling to spend the time and resources, or risk the exposure, necessary to attack a large number of routers in this fashion. In this talk, we will consider the ability of malware to propagate from wireless router to wireless router over the wireless channel, infecting large urban areas where such routers are deployed relatively densely. We develop an SIR epidemiological model and use it to simulate the spread of malware over major metropolitan centers in the US. Using hobbyist-collected wardriving data from Wigle.net and our model, we show that infecting tens of thousands of routers in a short period of time is quite feasible, and we consider simple prescriptive suggestions to minimize the likelihood that such attacks are ever performed.

Next, we present a simple yet worrisome attack that can easily and silently be performed from infected routers, which we call 'trawler phishing'. The attack generalizes a well-understood failure of many web sites to properly implement SSL. It allows attackers to harvest credentials from victims over a period of time, without the need for spamming techniques or mimicked but illegitimate web sites as in traditional phishing attacks, thereby bypassing the most effective phishing-prevention technologies. Further, it allows attackers to easily build data portfolios on many victims, making the collected data substantially more valuable. We consider prescriptive suggestions and countermeasures for this attack.

The work on epidemiological modeling is joint work with Hao Hu, Vittoria Colizza, and Alex Vespignani. The work on trawler phishing is joint work with Sid Stamm.
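For readers who want to experiment, here is a minimal sketch of the kind of SIR spread over a wireless-proximity graph that the abstract describes. The graph, infection rate, and recovery rate below are illustrative assumptions, not the parameters or wardriving data used in the talk:

```python
import random
import networkx as nx

random.seed(7)

# Hypothetical proximity graph: routers are linked if within wireless range.
# A random geometric graph stands in for real wardriving data.
G = nx.random_geometric_graph(n=2000, radius=0.035, seed=42)

BETA = 0.3    # assumed per-neighbor infection probability per step
GAMMA = 0.05  # assumed per-step recovery (patching) probability

state = {v: "S" for v in G}           # S = susceptible, I = infected, R = recovered
for seed_node in random.sample(list(G), 5):
    state[seed_node] = "I"            # a few initially compromised routers

for step in range(200):
    infected = [v for v in G if state[v] == "I"]
    for v in infected:
        for u in G.neighbors(v):      # malware spreads only over the wireless channel
            if state[u] == "S" and random.random() < BETA:
                state[u] = "I"
        if random.random() < GAMMA:   # router patched/secured; no reinfection
            state[v] = "R"
    counts = {s: sum(1 for v in G if state[v] == s) for s in "SIR"}
    if counts["I"] == 0:
        break

print(step, counts)
```

On a densely connected proximity graph the infected count grows epidemically before recovery catches up, which is the qualitative behavior the talk quantifies with real city data.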

09/17 Beth Plale, Computer Science, IUB

Metadata, Provenance, and Search in e-Science

Abstract: Computational science investigations carried out through cyberinfrastructure frameworks are capable of generating quantities of data far larger and more tightly related than previous hand-driven techniques. For this data to be useful in other applications within the domain science or across multiple science domains, or simply to remain useful and accessible through time, it must be described by metadata. Both syntactic (lower-level) metadata and semantic (higher-level) metadata are important for reconstruction. A data product's provenance, or derivation history, is key to ascertaining such attributes as the product's quality. We argue that the best time and place to gather metadata and provenance is closest to the source of a dataset's generation, because that is where the most knowledge resides. In this talk we discuss metadata, provenance, and search in cyberinfrastructure-driven computational science. In our experience, most communication about data products in computational science uses XML. We discuss a solution to metadata storage in which a metadata catalog, standing separate from the storage system on which the products reside, provides rich, domain-friendly communication with other components of the cyberinfrastructure. We examine provenance collection for workflow systems and data streaming, and tie both to missing data in data streams through Kalman filters and to data quality through a data quality model. Finally, we discuss current efforts to integrate cyberinfrastructure-driven computational science and digital repositories through provenance.
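As an illustration of the kind of XML metadata-plus-provenance record the abstract alludes to, here is a minimal sketch; the element names and values are hypothetical, not the schema used in the speaker's systems:

```python
import xml.etree.ElementTree as ET

# Hypothetical metadata record for a derived data product; element names
# are illustrative, not the actual schema discussed in the talk.
product = ET.Element("dataProduct", id="forecast-2007-09-17T12Z")
ET.SubElement(product, "format").text = "netCDF"          # syntactic metadata
ET.SubElement(product, "theme").text = "mesoscale storm"  # semantic metadata

prov = ET.SubElement(product, "provenance")               # derivation history
step = ET.SubElement(prov, "step", service="WRF-forecast")
ET.SubElement(step, "input").text = "radar-stream-042"
ET.SubElement(step, "startedAt").text = "2007-09-17T12:04:31Z"

print(ET.tostring(product, encoding="unicode"))
```

The point of such a record is that a catalog can answer both syntactic queries (format) and semantic ones (theme), and the provenance block records how the product was derived.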

09/24 Daniel A. Reed, Chancellor's Eminent Professor, Renaissance Computing Institute, Senior Advisor for Strategy and Innovation, University of North Carolina at Chapel Hill

DIFFERENT ROOM - RTV 245

Inventing the Future

Abstract: Ten years – a geological epoch on the computing time scale. Looking back, a decade brought the web and consumer email, digital cameras and music, broadband networking, multifunction cell phones, WiFi, HDTV, telematics, multiplayer games, electronic commerce and computational science. It also brought spam, phishing, identity theft, software insecurity, outsourcing and globalization, information warfare and blurred work-life boundaries. What will a decade of technology advances bring in communications and collaboration, sensors and knowledge management, modeling and discovery, electronic commerce and digital entertainment, critical infrastructure management and security?
Prognostication is always fraught with challenges, especially when predicting the effects of exponential change. Aggressively inventing the future, based on perceived needs and opportunities, is far more valuable. As Daniel Burnham famously remarked, “Make no little plans; they have no magic to stir men's blood.” In this presentation, we present some visions of a technology-enriched future, driven by emerging technologies and by national and international policies and competitive strategies. We also discuss their implications for university futures in a rapidly changing world.

10/01 John M. Beggs, Physics, IUB

A maximum entropy model applied to spatial and temporal correlations from cortical networks in vitro

Abstract: Multi-neuron firing patterns are often observed, yet are predicted to be rare by models that assume independent firing. To explain these correlated network states, two groups recently applied a second-order maximum entropy model that used only observed firing rates and pairwise interactions as parameters (Schneidman et al., 2006; Shlens et al., 2006). Interestingly, with these minimal assumptions they predicted 90-99% of network correlations. If generally applicable, this approach could vastly simplify analyses of complex networks. However, this initial work was done largely on retinal tissue, and its applicability to cortical circuits is unknown. This work also did not address the temporal evolution of correlated states. To investigate these issues, we applied the model to multielectrode data containing spontaneous spikes or local field potentials from cortical slices and cultures. The model worked slightly less well in cortex than in retina, accounting for 88% ± 7% (mean ± s.d.) of network correlations. In addition, in 8/13 preparations the observed sequences of correlated states were significantly longer than predicted by concatenating states from the model. This suggested that temporal dependencies are a common feature of cortical network activity, and should be considered in future models. We found a significant relationship between strong pairwise temporal correlations and observed sequence length, suggesting that pairwise temporal correlations may allow the model to be extended into the temporal domain. We conclude that while a second-order maximum entropy model successfully predicts correlated states in cortical networks, it should be extended to account for temporal correlations observed between states.
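For intuition, here is a toy version of a second-order (pairwise) maximum entropy model for a handful of binary neurons, small enough that all 2^N firing patterns can be enumerated. The bias and coupling values below are randomly chosen for illustration; a real analysis fits them so the model reproduces the measured firing rates and pairwise correlations:

```python
import itertools
import numpy as np

# Toy pairwise maximum entropy model:
#   P(x) ~ exp( sum_i h_i x_i + sum_{i<j} J_ij x_i x_j )
# Real analyses fit h and J to measured rates and correlations;
# the values here are made up for illustration.
N = 5
rng = np.random.default_rng(0)
h = rng.normal(-1.0, 0.5, size=N)          # biases, setting firing rates
J = np.triu(rng.normal(0.0, 0.3, size=(N, N)), k=1)  # one coupling per pair

states = np.array(list(itertools.product([0, 1], repeat=N)))  # all 2^N patterns
energy = states @ h + np.einsum("si,ij,sj->s", states, J, states)
p = np.exp(energy)
p /= p.sum()                               # normalize (partition function)

# Model predictions that would be compared against recorded data:
rates = p @ states                                    # <x_i>
corrs = np.einsum("s,si,sj->ij", p, states, states)   # <x_i x_j>
print(rates, corrs[0, 1])
```

The striking empirical finding reported in the abstract is that this minimal model, with only first- and second-order terms, already accounts for most of the observed network correlations.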

10/08 Benjamin Martin, Joseph Cottam, and Chris Mueller, CS, IUB

Visual Similarity Matrices

Abstract: Matrix representations of graphs provide a useful alternative and supplement to other graph drawing methods. In matrix representations, matrix orderings take the place of graph layouts and have a critical impact on the usefulness of the resulting matrix. We consider some factors that may make one algorithm better than another, and examine some ordering algorithms in light of these factors. We also consider the problem of interpreting, from a qualitative perspective, matrix-based representations produced by some of these algorithms. Finally, we present some ongoing research regarding breadth-first-search-ordered matrices of small-world graphs, based on quantitative analysis of the resulting ordered matrices.
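A small sketch of the breadth-first-search ordering idea: relabel the vertices of a small-world graph in BFS discovery order and compare the resulting adjacency matrix with the original ordering. The graph and plotting choices are illustrative assumptions, not the algorithms evaluated in the talk:

```python
import matplotlib.pyplot as plt
import networkx as nx

# Illustrative small-world graph; the talk's graphs and orderings differ.
G = nx.watts_strogatz_graph(n=100, k=6, p=0.05, seed=1)

# Breadth-first-search ordering: vertices in BFS discovery order from node 0.
bfs_order = [0] + [v for _, v in nx.bfs_edges(G, source=0)]
A_orig = nx.to_numpy_array(G)
A_bfs = nx.to_numpy_array(G, nodelist=bfs_order)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))
ax1.spy(A_orig); ax1.set_title("original ordering")
ax2.spy(A_bfs);  ax2.set_title("BFS ordering")
plt.show()
```

A good ordering pulls related vertices next to each other, so structure such as the small-world ring plus shortcuts becomes visible as bands and off-diagonal blocks in the matrix.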

10/15 Faculty and Students, Indiana University, Bloomington

Annual Open House, 4-6p

Abstract: Open your laptops and demo your software. Bring posters to introduce your research questions and results. So far, the following posters and demos are planned:

10/22 Marco Janssen, School of Human Evolution and Social Change and School of Computing and Informatics, Arizona State University

Changing the rules of the game: experiments with humans and virtual agents

Abstract: Many resource problems can be classified as commons dilemmas: a tension between the interest of the individual and the interest of the group as a whole. During the last decades substantial progress has been made in understanding how people can avoid the tragedy of the commons. However, we lack a good understanding of how people change institutional arrangements over time in an effective way in an environment with dynamic resources. I will discuss the initial results of a project on the innovation of institutional arrangements in common pool resource management, in which we combine laboratory and field experiments with agent-based modeling. In laboratory experiments, groups share resources in a dynamic, spatially explicit virtual environment, while the pencil-and-paper field experiments in Colombia and Thailand include various types of resources (fishery, forestry, and irrigation). Using the individual-level data derived from the experiments, we develop and test agent-based models to derive a better understanding of the experimental data. We also use the agent-based models to explore the evolution of institutional rules in various contexts that we could not (yet) experiment with. Going back and forth between experiments with humans and virtual agents is a fruitful way to develop empirically based agent-based models. I will discuss methodological challenges experienced in this project as well as initial results of the various models.
A movie of the experiment can be found at http://csid.asu.edu/movies/csan.mov.
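A stripped-down sketch of the kind of agent-based commons model described above: agents harvest from a regenerating shared resource, and a crude harvest cap stands in for an evolved institutional rule. All parameters are illustrative assumptions, not values from the project:

```python
import random

# Minimal commons dilemma: a shared resource regrows logistically while
# agents harvest from it. A crude harvest cap stands in for the evolved
# institutional arrangements studied in the talk; all numbers are made up.
def run(n_agents=10, capacity=100.0, growth=0.25, cap=None, steps=50, seed=1):
    rng = random.Random(seed)
    resource, harvested = capacity, 0.0
    for _ in range(steps):
        for _ in range(n_agents):
            want = rng.uniform(0, 3)   # self-interested demand
            take = min(want if cap is None else min(want, cap), resource)
            resource -= take
            harvested += take
        resource += growth * resource * (1 - resource / capacity)  # regrowth
    return resource, harvested

for rule, cap in [("no rule", None), ("harvest cap 0.5", 0.5)]:
    stock, total = run(cap=cap)
    print(f"{rule:16s} final stock={stock:6.1f}  total harvest={total:7.1f}")
```

Without the rule the resource collapses; with the cap it is sustained. The research question in the talk is how groups discover and change such rules themselves, which is much harder to model than imposing one from outside.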

10/29 Joe Futrelle, National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign

The Way Things Go: Provenance, Semantic Networks, and Systems-Scale Science

Abstract: Like many other complex human endeavors, scientific work is a decentralized, heterogeneous activity spanning organizational, disciplinary, and technical boundaries. As science begins to address large-scale systems, the growing complexity of scientific work processes requires new infrastructure for understanding and managing the production of knowledge from distributed observation, simulation, analysis, and discourse activities. Cyberenvironments extend existing science application capabilities with the ability to record, analyze, and interpret provenance documentation describing the causal relationships between processes and artifacts in scientific work. Using provenance-enabled collaboration and analysis tools, scientists can efficiently assess, validate, reproduce, and refine experiments and results. Provenance documentation enriches the scientific research record, enabling significant results to be preserved along with much of the associated information necessary to correctly interpret them.

NCSA's suite of prototype Cyberenvironment tools is based on the idea of semantic content networks and built around the World Wide Web Consortium's Resource Description Framework (RDF). RDF provides an application- and domain-neutral way to represent metadata, and can thus be used to link domain-specific information with generic vocabularies for describing artifacts and work processes. NCSA's work in the Grid Provenance Challenge, for instance, has demonstrated the applicability of RDF to representing scientific workflow executions, enabling data products to be linked via RDF to descriptions of the complex processes that produced them. The emerging Open Provenance Model attempts to further abstract the notion of causal relationships in scientific and other work processes, allowing provenance-enabled tools to link independently observed processes together to form descriptions of larger-scale processes. The scientific research record can then be understood as a semantic network of causality and linked with other relevant networks, such as social networks, to provide a comprehensive model of scientific work that can be applied in new communities to build powerful science Cyberenvironments that maximize the impact of collaborative, systems-scale scientific work.
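To make the RDF idea concrete, here is a minimal sketch of provenance expressed as RDF triples using the rdflib library; the namespace and predicate names are hypothetical placeholders, not NCSA's actual vocabularies or the Open Provenance Model terms:

```python
from rdflib import Graph, Literal, Namespace, RDF

# Hypothetical vocabulary; real systems would use shared provenance terms.
EX = Namespace("http://example.org/prov#")

g = Graph()
g.bind("ex", EX)

# A process (a workflow step) and the artifact it produced, linked causally.
g.add((EX.run42, RDF.type, EX.ProcessExecution))
g.add((EX.run42, EX.usedInput, EX.rawData7))
g.add((EX.dataset9, RDF.type, EX.Artifact))
g.add((EX.dataset9, EX.wasGeneratedBy, EX.run42))
g.add((EX.dataset9, EX.description, Literal("derived climatology grid")))

print(g.serialize(format="turtle"))
```

Because every statement is a triple in a shared graph, independently recorded processes can be joined on common nodes, which is exactly the "link processes together to form larger-scale descriptions" step the abstract describes.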

11/05 Tom Evans, Department of Geography & Associate Director of the Center for the Study of Institutions, Population and Environmental Change (CIPEC), IUB

Land Use Decision-Making and Landscape Outcomes

Abstract: Historical trajectories of land cover change in developed countries have provided the basis for a theory of forest transition. Briefly, Forest Transition Theory (FTT) suggests that nations experience dramatic deforestation during a frontier period of heavy resource use, and that this deforestation phase is eventually followed by a period of reforestation after some period of economic development. A considerable amount of research has focused on the drivers of deforestation, but we have a less complete understanding of the diverse factors contributing to reforestation and of the prospects for a transition from deforestation to reforestation in different economies. These forest cover trajectories are the result of interactions between social and ecological processes operating at multiple spatial and temporal scales, and numerous methodological approaches have been used to examine the complexity in these coupled social-ecological systems. This presentation summarizes findings to date from research examining the role of land-use decision-making in land cover change in the midwestern United States, Brazil, and Laos. Results are presented from the integration of agent-based models of land cover change with empirical data drawn from social surveys and remotely sensed data (aerial photography and satellite imagery). Findings from spatially explicit experimental work are also discussed that address the role of landowner heterogeneity and how the management activities of diverse local-level actors produce complex macro-scale outcomes.
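A toy sketch of how heterogeneous landowner decisions on a grid can produce a macro-scale forest-transition trajectory (deforestation followed by reforestation). The decision rule and all numbers are illustrative assumptions, not the models or data from this research:

```python
import random

# Toy spatial land-use model: parcels are forest (1) or cleared (0); each
# landowner clears when returns from clearing beat an individual threshold.
# Returns decline as the economy develops, a crude forest-transition
# mechanism. All numbers are illustrative assumptions.
random.seed(3)
SIZE, STEPS = 30, 80
forest = [[1] * SIZE for _ in range(SIZE)]
threshold = [[random.uniform(0.2, 0.8) for _ in range(SIZE)]  # heterogeneity
             for _ in range(SIZE)]

for t in range(STEPS):
    returns = max(0.1, 0.9 - 0.01 * t)          # clearing pays less over time
    for r in range(SIZE):
        for c in range(SIZE):
            if forest[r][c] and returns > threshold[r][c]:
                forest[r][c] = 0                # clear the parcel
            elif not forest[r][c] and returns < threshold[r][c] - 0.2:
                forest[r][c] = 1                # abandon / reforest
    if t % 20 == 0:
        cover = sum(map(sum, forest)) / SIZE**2
        print(f"t={t:3d}  forest cover={cover:.2f}")
```

The aggregate cover curve falls and then recovers even though each agent follows only a local rule, illustrating how heterogeneous micro-decisions can generate the macro-scale trajectories the talk analyzes with real survey and remote-sensing data.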

11/12 Larry Yaeger, Informatics, IUB (work done with Olaf Sporns and Virgil Griffith)

Evolution selects for complexity, but only when complexity is of evolutionary value

Abstract: There has been a long-standing debate as to whether there exists any kind of "arrow of complexity" due to the action of natural selection. Some early scientists and philosophers reasoned that there must be, based on the paleontological record. Some object to a potential anthropocentric chauvinism in the interpretation of complexity in these records. Others, notably McShea and Gould, suggest a distinction between "driven" and "passive" selection for complexity, where the former corresponds to an active, non-random process biased towards increasing complexity, while the latter corresponds to a random process of diffusion away from a lower bound of complexity. Attempts to distinguish between driven and passive selection in the fossil record have met with mixed results, providing evidence for both conclusions. I will describe an experiment using a computer model of an evolving ecology in which agent behaviors are driven by artificial neural networks, the architecture of which is the primary subject of either natural selection or a random diffusion process. An information-theoretic measure of the complexity of the resulting neural dynamics allows an investigation of the distinction between driven and passive selection. The results of this study suggest a simple explanation for the variability in biological studies of evolutionary trajectories: under certain conditions evolution does indeed select for complexity, in a driven fashion, faster than one would see with a purely random, diffusive process. But this is only the case when additional complexity confers an evolutionary advantage on the affected agents. Under other conditions, natural selection for "good enough" solutions can reduce complexity growth relative to that observed in a passive, randomly diffusing system, even when all other contributing factors are held constant. Thus evolutionary complexity is neither entirely driven nor entirely passive, in McShea's sense, but is unavoidably a blend of the two forces, depending in a reasonably intuitive fashion on the evolutionary value of incremental gains in complexity.
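As a pointer to the flavor of measure involved, here is a sketch of one simple information-theoretic quantity used in this literature, integration (the summed entropies of single units minus the joint entropy of the system), computed on synthetic binary "neural" data. The talk's actual complexity measure may differ:

```python
import numpy as np

# Integration: sum of single-unit entropies minus joint entropy, in bits.
# Positive values mean the units are statistically interdependent.
# The binary "neural" data below are synthetic, not from the talk's model.
def entropy(columns):
    """Entropy (bits) of the joint distribution over the given columns."""
    patterns, counts = np.unique(columns, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
drive = rng.integers(0, 2, size=(5000, 1))     # shared input couples the units
noise = rng.random((5000, 4)) < 0.25
X = (drive ^ noise).astype(int)                # 4 correlated binary units

integration = sum(entropy(X[:, [i]]) for i in range(X.shape[1])) - entropy(X)
print(f"integration = {integration:.3f} bits")
```

Measures in this family let one compare, on equal footing, the neural dynamics produced by driven selection and by a passive diffusion process.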

11/19 Sasha Barab, Learning Sciences, Instructional Systems Technology & Cognitive Science and Adam Ingram-Goble, Doctoral Student in Learning Sciences, School of Education, IUB

Conceptual Play Spaces: Designing Games for Learning

Abstract: In this presentation, we will discuss our framework for designing play spaces to support the learning of academic content. While commercial games do not focus on academic content, it is quite possible to design a game that does. Conceptual play is a state of engagement that involves (a) projection into the role of a character who, (b) engaged in a partly fantastical problem context, (c) must apply conceptual understandings to make sense of and, ultimately, transform the context. Reflecting on our four years of design experience centered on the development of conceptual play spaces, we provide designers and educators with anchor points and examples for thinking through what it would mean to design a game that supports learning. This discussion will be situated in the context of our Quest Atlantis project.

Quest Atlantis is a standards-based online 3-D learning environment that transports students to virtual places to teach a wide variety of subjects, such as language arts, mathematics, the sciences, geography/social studies, and the arts, while building digital-age competencies and fostering a disposition to improve the world (see www.QuestAtlantis.org). The program was created by Dr. Sasha Barab, the Barbara B. Jacobs Chair in Education & Technology, together with collaborators from various departments at Indiana University, and has been developed with substantial funding from the National Science Foundation, the John D. and Catherine T. MacArthur Foundation, and the National Aeronautics and Space Administration. It is a leading example of a new game-based curriculum. Our goal here is both to communicate the potential value of conceptual play spaces and to provide an illuminative set of cases from which others might draw lessons as they build their own.

11/26 Bennett Bertenthal, Dean of the College of Arts and Sciences, IUB

Grid and Network Services for Storing, Annotating, and Searching Streaming Data

Abstract: The Social Informatics Data Grid (SIDGrid) is a new infrastructure designed to transform how social and behavioral scientists collect and annotate data, collaborate and share data, and analyze and mine large data repositories. An important goal of the project is compatibility with existing databases and tools that support the sharing, storage, and retrieval of archival data sets. It is built on web and grid services to enable transparent access to data and analysis resources from anywhere, and to leverage new and emerging web-based technologies created by a large and growing community of developers around the world. At the heart of the SIDGrid design is a rich data model that captures notions of time, data streams, and semi-structured data attached to those streams, enabling powerful manipulations of multimodal data spread across data resources. Through query and analysis services deployed against the data warehoused in the SIDGrid, users can perform new classes of experiments. Shared data resources available from anywhere over the Web introduce new capabilities to the process of collecting and analyzing data, collaborative annotation among them, without relinquishing control over sensitive data, thanks to an embedded security model. This project is still in the development phase, and feedback from user communities is essential for determining which functions are most important and should be developed next.
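A minimal sketch of the kind of data model the abstract describes: time-coded streams with semi-structured annotations attached to intervals, plus a simple overlap query. The class and field names are hypothetical, not the actual SIDGrid schema:

```python
from dataclasses import dataclass, field

# Hypothetical stand-in for a multimodal, time-coded data model: a stream
# plus semi-structured annotations attached to time intervals. The actual
# SIDGrid schema is richer; names here are illustrative only.
@dataclass
class Annotation:
    start: float            # seconds from stream start
    end: float
    attrs: dict             # semi-structured payload, e.g. {"gesture": "nod"}

@dataclass
class Stream:
    name: str               # e.g. "video-1" or "eye-tracker"
    annotations: list = field(default_factory=list)

    def annotate(self, start, end, **attrs):
        self.annotations.append(Annotation(start, end, attrs))

    def between(self, t0, t1):
        """Query: all annotations overlapping the interval [t0, t1]."""
        return [a for a in self.annotations if a.start < t1 and a.end > t0]

video = Stream("video-1")
video.annotate(12.0, 14.5, speaker="A", gesture="nod")
video.annotate(13.8, 16.0, speaker="B", utterance="mm-hm")
print(video.between(13.0, 14.0))
```

Time-anchored, semi-structured annotations like these are what make it possible to align and jointly query video, audio, and sensor streams from a single session.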

12/03 Ray Burke and Alex Leykin, Kelley School of Business, IUB

Automated Customer Tracking and Behavior Recognition

Abstract: The retail context has an impact on consumer behavior that goes beyond product assortment, pricing, and promotion issues. It affects the time consumers spend in the store, how they navigate through the aisles, and how they allocate their attention and money across departments and categories. Unfortunately, conventional research techniques provide limited insight into the dynamics of shopper behavior.
The presentation will discuss new computational approaches for determining the location, path, and behavior of customers in retail stores using video images collected from ceiling-mounted surveillance cameras. The tracking process involves several stages of analysis: (1) segmenting the moving foreground regions from the relatively static background image using a statistical model based on the codebook approach, (2) estimating the positions of shoppers in the camera view using a vertical projection histogram, (3) converting these camera coordinates into the x/y locations of shoppers on a store floor plan using a model of the camera's viewpoint, (4) identifying and tracking individual shoppers across frames using a Bayesian particle-filter model, and (5) identifying groups of shoppers by clustering motion trajectories. The authors will discuss data and measurement issues associated with collecting accurate tracking information, classifying customers into shopper groups, analyzing patterns of shopper behavior, and differentiating between sales associates and consumers. Validation results and example applications will be presented.
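To make stage (4) concrete, here is a bare-bones Bayesian particle filter tracking one shopper's floor-plan position from noisy detections. The motion model, observation model, and all constants are simplified assumptions, not the authors' implementation:

```python
import numpy as np

# Bare-bones particle filter for one target's (x, y) floor position.
# Motion/observation models and all constants are simplified assumptions.
rng = np.random.default_rng(0)
N = 500
particles = rng.uniform(0, 10, size=(N, 2))   # floor plan assumed 10m x 10m
weights = np.full(N, 1.0 / N)

OBS_STD, MOVE_STD = 0.5, 0.2                  # meters

def step(particles, weights, z):
    """One predict-update-resample cycle given a noisy detection z."""
    # Predict: random-walk motion model.
    particles = particles + rng.normal(0, MOVE_STD, particles.shape)
    # Update: weight by Gaussian likelihood of the detection.
    d2 = np.sum((particles - z) ** 2, axis=1)
    weights = weights * np.exp(-d2 / (2 * OBS_STD**2))
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights**2) < N / 2:
        idx = rng.choice(N, size=N, p=weights)
        particles, weights = particles[idx], np.full(N, 1.0 / N)
    return particles, weights

for z in [(2.0, 2.1), (2.3, 2.4), (2.7, 2.6)]:   # noisy per-frame detections
    particles, weights = step(particles, weights, np.array(z))
    print("estimate:", np.average(particles, axis=0, weights=weights))
```

The posterior cloud of particles concentrates around the true path over successive frames, which is what lets the full system maintain identities across occlusions and noisy detections.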

12/10 Weixia (Bonnie) Huang and the NWB Team at IUB

Network Workbench Workshop

Abstract: This two-hour workshop will present and demonstrate the Network Workbench (NWB) Tool, the Community Wiki, and the Cyberinfrastructure Shell developed in the NSF-funded Network Workbench project.

The workshop will present the overall structure and implementation, as well as a demo for potential developers and users.
We would like to acknowledge the NWB team members who made major contributions to the NWB tool and/or Community Wiki: Santo Fortunato, Katy Börner, Alex Vespignani, Soma Sanyal, Ramya Sabbineni, Vivek S. Thakre, Elisha Hardy, and Shashikant Penumarthy.