Growing scale-free networks with tunable clustering
This is yet another network-generation paper. As you may recall, the Barabasi-Albert model yields a scale-free degree distribution (i.e., a power law: a few vertices have a huge number of edges, many have a moderate number, and bazillions have just a few), while the Watts-Strogatz model gives high clustering coefficients (friends of my friends are also often friends), but neither gives both.
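To make the "friends of my friends" idea concrete, here is a minimal sketch of the local clustering coefficient: for a vertex v, the fraction of pairs of v's neighbors that are themselves connected. The function name and adjacency-set representation are my own choices, not from the paper.

```python
def local_clustering(adj, v):
    """Fraction of pairs of v's neighbors that are themselves connected.
    adj maps each vertex to a set of its neighbors."""
    nbrs = list(adj[v])
    k = len(nbrs)
    if k < 2:
        return 0.0  # fewer than two neighbors: no pairs to close
    # count edges among v's neighbors (each pair checked once)
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2.0 * links / (k * (k - 1))

# A triangle (0-1-2) plus a pendant vertex 3: of vertex 0's three
# neighbor pairs, only (1, 2) is connected, so C(0) = 1/3.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
```

Averaging this quantity over all vertices gives the graph's clustering coefficient, the number Watts-Strogatz keeps high and Barabasi-Albert does not.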
Here, Holme and Kim start with the Barabasi-Albert model and add a new triad-formation step. This makes sense: if you want the final graph to have more triangles, then ensure that more triangles are added during graph generation! The exciting thing is that you get not only the triangles (and therefore a tunable clustering coefficient) but also the power-law degree distribution.
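The growth process above can be sketched in a few lines: each new vertex attaches its first edge by preferential attachment, and each subsequent edge either closes a triangle with a neighbor of the last-attached vertex (with probability p, the triad-formation step) or falls back to preferential attachment. This is my own stdlib-only sketch of the idea, not the authors' code; the function name and parameters are assumptions.

```python
import random

def holme_kim(n, m, p, seed=None):
    """Sketch of Holme-Kim growth: BA preferential attachment plus a
    triad-formation step taken with probability p. Returns an
    adjacency dict mapping each vertex to a set of neighbors."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(m)}   # m initial isolated vertices
    repeated = list(range(m))            # vertex repeated once per edge endpoint,
                                         # so rng.choice gives degree-biased sampling
    for new in range(m, n):
        adj[new] = set()
        # first edge: plain preferential attachment
        target = rng.choice(repeated)
        adj[new].add(target); adj[target].add(new)
        repeated += [new, target]
        last = target
        for _ in range(m - 1):
            # neighbors of the last-attached vertex not already linked to `new`
            candidates = [v for v in adj[last] if v != new and v not in adj[new]]
            if rng.random() < p and candidates:
                # triad formation: close a triangle through `last`
                target = rng.choice(candidates)
            else:
                # fall back to preferential attachment
                target = rng.choice(repeated)
                while target == new or target in adj[new]:
                    target = rng.choice(repeated)
            adj[new].add(target); adj[target].add(new)
            repeated += [new, target]
            last = target
    return adj
```

Setting p = 0 recovers plain Barabasi-Albert; raising p adds triangles (and clustering) without disturbing the preferential-attachment mechanism that produces the power-law tail.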
The analysis in this paper is relatively lightweight, but I actually enjoyed that (I'm a computer scientist (heh, heh), not a statistical physicist). It takes a nice idea, elaborates it, shows that it works, and wraps it up. Nice.
Copyright -- Gary Warren King, 2004 - 2006