Update from Freenet
Posted to the Freenet announce group. One of my favorite projects is moving along again. --Ari
-----------------------------
People could be forgiven for thinking that the project had stagnated, given the lack of activity on these mailing lists, so I wanted to provide an update, because that impression could hardly be further from the truth.
Oskar Sandberg, Matthew, and I have been developing some ideas for 0.7 that represent an even more fundamental architectural shift than anything proposed to date. They should address one of Freenet's most fundamental shortcomings when it is used in a hostile environment, and I believe they represent a significant new innovation in the P2P space.
As most people will be aware, Oskar was one of the core Freenet developers in the first few years of the project. He is now working on a PhD in Mathematics. Over the past few months he and I have been collaborating on gaining a much deeper mathematical understanding of how Freenet does what it does. While this work is far from complete, it has given us some extremely useful insights and much more confidence in determining what aspects of Freenet's design work well, which don't, and why.
To understand the new idea, I should start with some theoretical background. Consider a simple "graph". A graph in the mathematical sense consists of a set of nodes, some of which are connected to each other. At this stage nodes don't have a position in space; all we know or care about is which nodes are connected to which. We can assume that connections are bi-directional.
The "diameter" of a graph is the minimum number of nodes you must go through to get from any one particular node to any other particular node in the graph. Note that it may not be easy to find this path, but the important thing is that it exists.
There is a mathematical result which tells us what kind of graphs have a small diameter. Basically, imagine we have three nodes, where A is connected to B and A is also connected to C. The result says that if knowing that both B and C are connected to A increases the probability that B is connected to C, then the graph will have a small diameter.
So, if we have a graph that has this property then we know that we *can* get from any one node to another in a small number of steps, but we don't necessarily know *how*.
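In graph-theory terms, that "increased probability" is called clustering. A toy way to measure it (again just an illustrative sketch, not Freenet code) is to look at every pair of nodes B and C that share a neighbour A, and ask how often B and C are also directly connected:

    from itertools import combinations

    def clustering(graph):
        """Fraction of B,C pairs sharing a neighbour A that are themselves connected."""
        closed = total = 0
        for a in graph:
            for b, c in combinations(graph[a], 2):
                total += 1
                if c in graph[b]:
                    closed += 1
        return closed / total if total else 0.0

    # On the little 'ring' example from the previous sketch this gives ~0.33.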
Now imagine that each node in the graph has a position in space; the space can be 1-dimensional, 2-dimensional, or 20-dimensional, it doesn't matter too much. Imagine that we want to get from one particular node in this graph to another particular node. A simple approach is, from our starting node, to go to whichever of the nodes we are connected to is closest to the node we want to reach, and repeat. This approach will work quickly in a graph that is a "small world". In essence, a small-world graph is one where nodes which are close together are more likely to be connected than nodes which are far apart.
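Here is what that greedy step looks like as a sketch, assuming for illustration that positions are points on a one-dimensional circle (the choice of space is arbitrary here, just something to measure distance in):

    # Toy greedy router: always hop to the neighbour whose position is
    # closest to the destination. Illustrative only, not Freenet's router.
    def ring_dist(a, b):
        """Distance between two positions in [0, 1) treated as points on a circle."""
        d = abs(a - b)
        return min(d, 1.0 - d)

    def greedy_route(graph, positions, start, target):
        """Return the path taken, or None if no neighbour gets us any closer."""
        path = [start]
        current = start
        while current != target:
            best = min(graph[current],
                       key=lambda n: ring_dist(positions[n], positions[target]))
            if ring_dist(positions[best], positions[target]) >= \
               ring_dist(positions[current], positions[target]):
                return None  # stuck in a local minimum
            path.append(best)
            current = best
        return path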
In the ideal case, the probability that two nodes are connected is proportional to 1/(d^n) where d is the distance between them, and n is the number of dimensions in the space in which our nodes reside. This mathematical result is due to Kleinberg.
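As a sketch of what that condition means in practice (reusing ring_dist from the sketch above, and with made-up parameters), here is a toy network where each node gets a couple of local links plus a long-range link chosen with probability proportional to 1/d, i.e. d^-n with n = 1:

    import random

    def kleinberg_ring(num_nodes, local_links=2, long_links=1):
        """Toy Kleinberg-style graph: nodes on a circle, local links plus
        long-range links sampled with probability proportional to 1/distance."""
        positions = {i: i / num_nodes for i in range(num_nodes)}
        graph = {i: set() for i in range(num_nodes)}
        for i in range(num_nodes):
            # Local links to the nearest neighbours on the circle.
            for step in range(1, local_links + 1):
                j = (i + step) % num_nodes
                graph[i].add(j)
                graph[j].add(i)
            # Long-range links, weighted by 1/d.
            candidates = [j for j in range(num_nodes) if j != i and j not in graph[i]]
            weights = [1.0 / ring_dist(positions[i], positions[j]) for j in candidates]
            for j in random.choices(candidates, weights=weights, k=long_links):
                graph[i].add(j)
                graph[j].add(i)
        return graph, positions

    # Greedy routing on such a graph finds short paths quickly, e.g.:
    # graph, pos = kleinberg_ring(1000)
    # print(len(greedy_route(graph, pos, 0, 500)))  # usually far fewer than the
    #                                               # ~250 hops local links alone would need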
A small-world graph therefore not only has a small diameter, it also gives us an efficient way to actually find those short paths.
Anyway, back to the story. One of Freenet's weaknesses in terms of its usefulness in a hostile environment is that while its goal is to make it very difficult to determine who is publishing and reading what, it doesn't make it all that difficult to determine who is running a Freenet node. This could be problematic in a situation where the act of running a Freenet node is itself sufficient to incur the wrath of one's oppressors. Those oppressors could simply harvest Freenet node addresses one by one; it wouldn't be easy, but it wouldn't be impossible either.
The only real way to address this is to restrict your Freenet node to talking only to people whom you are confident are not working on behalf of your oppressor. In effect this is the same idea employed by "darknets", but the problem with darknets is that they don't scale. We are talking about a darknet that could potentially scale to millions of users.
Initially it was felt that NGR would allow us to do this, but when, based on Oskar's and my research, we realised how fundamental the network topology is to the small-world principle, we concluded that we wouldn't be able to maintain a small-world link structure if we weren't able to automatically create links to new nodes.
But then we remembered that human relationships already tend to form small-world networks; this is the origin of the whole small-world idea, and it was a key realisation. If we have a network created by linking people who know each other, we know that the network will have a small diameter, simply because that is how human relationships work (i.e. if I know John, and I know Fred, then there is a greater probability that John knows Fred than of two randomly selected people knowing each other).
But for a low-diameter network to be useful, we need to turn it into a small-world network, and this means each node in the network needs a "position" in space such that Kleinberg's condition holds. Our first thought was that NGR might be able to do this. Matthew has been running some experiments to test this theory, and the results so far have been promising but not yet conclusive.
Meanwhile Oskar and I have been testing some alternative algorithms for achieving this should NGR fail to do what we need it to do. We have also had some success in this regard, but again, nothing conclusive yet.
Assuming we can find a good algorithm for assigning positions or "identities" to nodes, we will be in a pretty good situation: we will be able to efficiently route requests for data in our globally scalable darknet.
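To make the position-assignment problem concrete, here is one toy approach, a sketch of the general idea only (it is not necessarily NGR or any of the specific algorithms we have been testing): keep the friendship links fixed, and let pairs of nodes repeatedly consider trading positions, accepting a swap only when it reduces their combined distance to their own neighbours, so that over time the assigned positions come to reflect the real link structure.

    # Toy position-swapping sketch (illustrative only): given a fixed
    # friendship graph, nodes repeatedly consider trading circle positions,
    # keeping a swap only if it reduces their combined distance to neighbours.
    import random

    def ring_dist(a, b):
        d = abs(a - b)
        return min(d, 1.0 - d)

    def neighbour_cost(graph, positions, node):
        """Total distance from a node's position to its neighbours' positions."""
        return sum(ring_dist(positions[node], positions[n]) for n in graph[node])

    def swap_positions(graph, positions, rounds=10000):
        nodes = list(graph)
        for _ in range(rounds):
            a, b = random.sample(nodes, 2)
            before = neighbour_cost(graph, positions, a) + neighbour_cost(graph, positions, b)
            positions[a], positions[b] = positions[b], positions[a]
            after = neighbour_cost(graph, positions, a) + neighbour_cost(graph, positions, b)
            if after > before:
                # The swap made things worse; undo it.
                positions[a], positions[b] = positions[b], positions[a]
        return positions

The appealing property of something along these lines is that it never needs to create new links; it only shuffles positions over the links that already exist, which is exactly the constraint a friend-to-friend darknet imposes.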
Anyway, that is where things stand right now. As you can see it is still a work in progress, but I hope people agree that this is an exciting and promising new direction for Freenet, one that remains true to Freenet's ultimate goals. If successful, Freenet will be a globally scalable darknet, resistant to a whole class of "harvesting" attacks.
Lastly, however, I need to point out that our current funding situation is not healthy, as anyone can see by looking at the website. As a result, if anyone out there wants to support this effort, and many already have, then please visit the website and make a donation so that Matthew doesn't starve while he works on this stuff.
All the best,
Ian.
--
Founder, The Freenet Project http://freenetproject.org/
CEO, Cematics Ltd http://cematics.com/
Personal Blog http://locut.us/~ian/blog/
-----------------------------