dc.description.abstract | The popularity of online social networks has given rise to new privacy threats. These threats often arise after users willingly, but unwittingly, reveal their information to a wider audience than they intended. Moreover, the widely adopted “friends-based” privacy control has proven ill-equipped to prevent dynamic information disclosure, such as in user text posts. It fails to capture the dynamic nature of this data, reducing the problem to manual privacy management, a time-consuming, tiresome, and error-prone task. This dissertation identifies an important problem with posting on social networks and proposes a unique two-phase approach to it. First, we suggest adding a layer of security to social networking sites. This layer includes a natural language processing framework that automatically checks text to be posted by the user, detects dangerous information disclosure, and warns the user. A set of detection rules was developed for this purpose and tested against over 16,000 Facebook posts to confirm detection quality. The results showed that our approach achieves an 85% detection rate, outperforming existing approaches. Second, we propose using trust between friends as currency for access to dangerous posts. The unique feature of our approach is that the trust value is related to the absence of interaction on the given topic. To this end, we defined trust metrics that can be used to determine which friends are trustworthy with respect to a given topic. In addition, we built a tool that calculates the metrics automatically and then generates a list of trusted friends. Our experiments show that our approach performs reasonably well at predicting friends’ interactions with given posts. Finally, we analyzed a small set of user interaction records on Facebook to show that friends’ interaction can be triggered by certain topics. 
| en_US |