Show simple item record

dc.contributor.author  Pattanayak, Sayantica
dc.description.abstract  Emerging neural-network-based machine learning techniques such as deep learning and its variants have shown tremendous potential in many application domains. However, neural network models raise serious privacy concerns due to the risk of leaking highly privacy-sensitive data. In this dissertation, we propose various techniques to hide sensitive information and evaluate the performance and efficacy of the proposed models. In our first work, we propose a model that can both encrypt and decrypt a ciphertext. The model is based on symmetric-key encryption and a back-propagation neural network; it takes decimal values, converts them to ciphertext, and then converts the ciphertext back to decimal values. In our second work, we propose a remote password authentication scheme using a neural network. In this model, we show how a user can communicate securely with more than one server. A user registers with a trusted authority and receives a user ID and a password, which the user then employs to log in to one or multiple servers; the servers can validate the legitimacy of the user. Our experiments use different classifiers to evaluate the accuracy and efficiency of the proposed model. In our third work, we develop a technique to securely send patient information to different organizations, using different fuzzy membership functions to hide sensitive information about patients. In our fourth work, we introduce an approach that substitutes sensitive attributes with non-sensitive attributes. We divide the data set into three subsets: desired, sensitive, and non-sensitive. The output of the denoising autoencoder consists only of the desired and non-sensitive subsets; the sensitive subsets are hidden by the non-sensitive ones. We evaluate the efficacy of our predictive model using three different flavors of autoencoders and measure the F1-score of the model against each of them. Because our predictive model is built around privacy, we also use a Generative Adversarial Network (GAN) to show to what extent the model is secure.  en_US
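The fuzzy-membership masking idea from the third study can be sketched as follows. This is a minimal illustration only: the choice of a triangular membership function, the attribute (age), and the set boundaries are our own assumptions for demonstration, not details taken from the dissertation.

```python
# Hypothetical sketch: masking a sensitive numeric attribute by replacing
# its raw value with fuzzy membership degrees, in the spirit of the
# fuzzy-membership approach described above. Boundaries are assumed.

def triangular(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fuzzify_age(age):
    """Replace a raw age with membership degrees in three fuzzy sets."""
    return {
        "young":  triangular(age, 0, 20, 40),
        "middle": triangular(age, 30, 45, 60),
        "old":    triangular(age, 50, 70, 90),
    }

# The shared record carries membership degrees instead of the exact age.
record = {"patient_id": "P001", "age": 35}
masked = {"patient_id": record["patient_id"], **fuzzify_age(record["age"])}
```

A recipient sees only the degree vector (e.g. partly "young", partly "middle"), so the exact value is not disclosed while coarse information useful for analysis is preserved.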
dc.publisher  North Dakota State University  en_US
dc.rights  NDSU policy 190.6.2  en_US
dc.title  Addressing Challenges in Data Privacy and Security: Various Approaches to Secure Data  en_US
dc.type  Dissertation  en_US
dc.date.accessioned  2022-06-07T14:00:31Z
dc.date.available  2022-06-07T14:00:31Z
dc.date.issued  2021
dc.identifier.uri  https://hdl.handle.net/10365/32687
dc.rights.uri  https://www.ndsu.edu/fileadmin/policy/190.pdf  en_US
ndsu.degree  Doctor of Philosophy (PhD)  en_US
ndsu.college  Engineering  en_US
ndsu.department  Computer Science  en_US
ndsu.program  Computer Science  en_US
ndsu.advisor  Ludwig, Simone

