The aim of this project is to research ways of dynamically "crawling" the Internet, collecting enough information to confidently determine whether a particular web page or blog is spam. The proposed approach is to continually maintain a 'backbone' subset of the set of all web pages […]
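The idea of crawling outward while separating pages into a trusted backbone and a spam set can be sketched as follows. This is a minimal illustration, not the project's actual method: the in-memory page table, the toy spam lexicon, and the word-frequency scorer are all assumptions standing in for real HTTP fetching and a real classifier.

```python
from collections import deque

# Hypothetical in-memory "web": URL -> (page text, outgoing links).
# A real crawler would fetch pages over HTTP instead.
PAGES = {
    "a.example": ("buy cheap pills now cheap cheap", ["b.example"]),
    "b.example": ("an essay on web crawling research", ["c.example"]),
    "c.example": ("more cheap pills spam spam spam", []),
}

SPAM_TERMS = {"cheap", "pills", "spam"}  # assumed toy lexicon

def spam_score(text: str) -> float:
    """Fraction of words that are spam terms -- a naive stand-in
    for whatever classifier the project would actually employ."""
    words = text.split()
    return sum(w in SPAM_TERMS for w in words) / max(len(words), 1)

def crawl(seed: str, threshold: float = 0.3):
    """Breadth-first crawl from a seed URL, keeping low-scoring
    pages as the maintained 'backbone' set and flagging the rest."""
    backbone, spam = set(), set()
    queue, seen = deque([seed]), {seed}
    while queue:
        url = queue.popleft()
        text, links = PAGES.get(url, ("", []))
        (spam if spam_score(text) >= threshold else backbone).add(url)
        for link in links:
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return backbone, spam
```

Running `crawl("a.example")` visits all three pages and separates them into a one-page backbone and two spam pages; in the real system, the backbone would be re-crawled continually so the trusted set stays current.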