Academics and Research

I am a second-year master's student in Computer Science at Brown University. I am interested in cryptography and security, machine learning, data science, and everything at their intersection. To understand these topics in depth, I have been taking courses such as Deep Learning (CS1470), Introduction to Cryptography and Computer Security (CS1510), Computer Vision (CS1430), and Data Science (CS1951-A). I am also the Head TA for the first iteration of the course 'Introduction to Computer Security' (CS1880) at Brown University.

Prior to this, I completed my undergraduate thesis under the supervision of Prof. Mahavir Jhawar. For my thesis, I studied and implemented privacy-preserving linear regression in two frameworks, namely, SecureML (2017) in the 2-server setting and BLAZE (2020) in the 3-server setting. For more information, see the publications section below.

In addition to this, I have been working with Prof. Debayan Gupta on privacy-preserving neural networks and privacy-preserving federated learning. I have also been studying attacks and attack mitigation in Bitcoin under Prof. Alptekin Küpçü at Koç University, Turkey.

Select Publications

  • Ramachandran, P., Jhawar, M. (2021). 'Privacy-Preserving Linear Regression: Efficient Multiplication Protocols in 2-party and 3-party Settings'. Capstone Thesis. Ashoka University, Sonipat, India.
Abstract
The aim of this work is to study privacy-preserving machine learning (PPML) for linear regression in the 2-server and 3-server settings. In this work, we have studied and implemented privacy-preserving linear regression in two frameworks, namely, SecureML (Mohassel and Zhang, IEEE-SP, 2017) in the 2-server setting and BLAZE (Patra and Suresh, NDSS, 2020) in the 3-server setting. We have extensively outlined the multiplication protocols in these two frameworks since multiplication is a central aspect of linear regression. As part of this study, we formally treat ring arithmetic over secret-shared decimal numbers in the 2- and 3-server settings by looking at them as secret sharing schemes. We propose an optimisation in the multiplication protocol of BLAZE that decreases the communication complexity of its gradient descent protocol on 60,000 data points from the MNIST dataset (with batch size 128) by 3765 MB in each epoch. Our implementation can be found here.
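
Multiplication over additive secret shares, the central primitive discussed above, is commonly done with Beaver-style multiplication triples, as in SecureML's offline/online design. Below is a minimal plaintext sketch in Python of the 2-server case; the in-code "dealer" that samples the triple and the choice of modulus are illustrative stand-ins for the offline phase, not the frameworks' actual protocols:

```python
import random

P = 2**61 - 1  # illustrative modulus for the arithmetic ring


def share(x):
    """Split x into two additive shares mod P."""
    r = random.randrange(P)
    return (r, (x - r) % P)


def reconstruct(s0, s1):
    return (s0 + s1) % P


def beaver_multiply(x_shares, y_shares):
    """Multiply secret-shared x and y using a Beaver triple (a, b, c = a*b).
    In a real 2-server protocol the triple comes from an offline phase;
    here a trusted dealer stands in for it."""
    a, b = random.randrange(P), random.randrange(P)
    c = (a * b) % P
    a_sh, b_sh, c_sh = share(a), share(b), share(c)

    # Each server locally computes its shares of e = x - a and f = y - b ...
    e_sh = [(x_shares[i] - a_sh[i]) % P for i in range(2)]
    f_sh = [(y_shares[i] - b_sh[i]) % P for i in range(2)]
    # ... then the servers open e and f (which reveal nothing about x or y,
    # since a and b are uniformly random masks).
    e, f = reconstruct(*e_sh), reconstruct(*f_sh)

    # z_i = c_i + e*b_i + f*a_i, with e*f added by exactly one server,
    # so that z = ab + (x-a)b + (y-b)a + (x-a)(y-b) = xy.
    z = [(c_sh[i] + e * b_sh[i] + f * a_sh[i]) % P for i in range(2)]
    z[0] = (z[0] + e * f) % P
    return tuple(z)


x_sh, y_sh = share(7), share(6)
z_sh = beaver_multiply(x_sh, y_sh)
# reconstruct(*z_sh) == 42
```

The one round of opening e and f per multiplication is exactly the communication that the thesis's optimisation targets: reducing how much of this masked traffic the gradient descent loop has to send each epoch.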
Conference Papers
  • [Accepted at PPAI workshop, AAAI 2022] More, Y., Ramachandran, P., Panda, P., Mondal, A., Gupta, D. (2022). 'SCOTCH: An Efficient Secure-Computation Framework for Secure Aggregation and Applications'.
Abstract
Federated learning enables multiple data owners to jointly train a machine learning model without revealing their private datasets. However, a malicious aggregation server might use the model parameters to derive sensitive information about the training dataset used. To address such leakage, differential privacy and cryptographic techniques have been investigated in prior work, but these often result in large communication overheads or impact model performance. To mitigate this centralization of power, we propose SCOTCH, a decentralized m-party secure-computation framework for federated aggregation that deploys MPC primitives, such as secret sharing. Our protocol is simple, efficient, and provides strict privacy guarantees against curious aggregators or colluding data-owners with minimal communication overheads compared to other existing state-of-the-art privacy-preserving federated learning frameworks. We evaluate our framework by performing extensive experiments on multiple datasets with promising results. SCOTCH can train the standard MLP NN with the training dataset split amongst 3 participating users and 3 aggregating servers with 96.57% accuracy on MNIST, and 98.40% accuracy on the Extended MNIST (digits) dataset, while providing various optimizations.
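
The core idea of decentralized aggregation over secret shares can be illustrated with plain additive sharing: each client splits its (here, integer-encoded) model update across the servers, each server only ever sees uniformly random shares, and only the aggregate sum is reconstructed. A toy sketch of that idea, not SCOTCH's actual protocol, with illustrative parameters:

```python
import random

M = 2**32  # illustrative ring modulus for integer-encoded updates


def share(update, n_servers=3):
    """Split one client's update into n_servers additive shares mod M."""
    shares = [random.randrange(M) for _ in range(n_servers - 1)]
    shares.append((update - sum(shares)) % M)
    return shares


def secure_aggregate(client_updates, n_servers=3):
    """Aggregate updates so no single server sees any individual update."""
    per_server = [[] for _ in range(n_servers)]
    for u in client_updates:
        for i, s in enumerate(share(u, n_servers)):
            per_server[i].append(s)
    # Each server sums only the shares it holds; each sum is itself
    # a uniformly random-looking value.
    server_sums = [sum(col) % M for col in per_server]
    # Combining the per-server sums reveals only the aggregate.
    return sum(server_sums) % M


updates = [5, 11, 26]  # toy integer-encoded client updates
# secure_aggregate(updates) == 42
```

In a real deployment the floating-point model updates would first be fixed-point encoded into the ring, and reconstruction would be followed by averaging.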

  • Ramachandran, P., Agarwal, S., Shah, A., Mondal, A., Gupta, D. (2021). 'S++: A Fast and Deployable Secure-Computation Framework for Privacy-Preserving Neural Network Training'. Privacy-Preserving Artificial Intelligence Workshop, Association for the Advancement of Artificial Intelligence 2021.
Abstract
We introduce S++, a simple, robust, and deployable framework for training a neural network (NN) using private data from multiple sources, using secret-shared secure function evaluation. In short, consider a virtual third party to whom every data-holder sends their inputs, and which computes the neural network: in our case, this virtual third party is actually a set of servers which individually learn nothing, even with a malicious (but non-colluding) adversary.
Previous work in this area has been limited to just one specific activation function, ReLU, rendering the approach impractical for many use cases. For the first time, we provide fast and verifiable protocols for all common activation functions and optimize them for running in a secret-shared manner. The ability to quickly, verifiably, and robustly compute exponentiation, softmax, sigmoid, etc. allows us to use previously written NNs without modification, vastly reducing developer effort and code complexity. ReLU has recently been found to converge faster and be more computationally efficient than non-linear functions like sigmoid/tanh. However, we argue that it would be remiss not to extend the mechanism to non-linear functions such as the logistic sigmoid, tanh, and softmax, which are fundamental due to their ability to express outputs as probabilities and their universal approximation property. Recent advances in improving the sigmoid and tanh functions, and their use cases in RNNs and LSTMs, also make them more relevant than ever.
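
Evaluating functions like sigmoid or softmax over secret shares requires decimals to be encoded as ring elements first; frameworks in this line of work use a fixed-point encoding with truncation after each multiply. A plaintext sketch of that encoding (the parameters are illustrative, not those of S++, and real protocols perform the truncation on shares with a dedicated sub-protocol):

```python
FRAC_BITS = 13          # fractional precision, illustrative
RING = 2**32            # ring size, illustrative
SCALE = 1 << FRAC_BITS


def encode(x):
    """Map a decimal onto the ring; negatives wrap two's-complement style."""
    return int(round(x * SCALE)) % RING


def decode(v):
    """Inverse map: ring elements in the upper half represent negatives."""
    if v >= RING // 2:
        v -= RING
    return v / SCALE


def fixed_mul(a, b):
    """The product of two encoded values carries 2*FRAC_BITS fractional
    bits, so shift back down by FRAC_BITS to stay in the encoding."""
    sa = a - RING if a >= RING // 2 else a   # signed representative
    sb = b - RING if b >= RING // 2 else b
    return ((sa * sb) >> FRAC_BITS) % RING


# decode(fixed_mul(encode(1.5), encode(-2.0))) == -3.0
```

Once arithmetic works over this encoding, smooth activations can be computed from shares of the encoded pre-activations, which is what makes drop-in support for sigmoid, tanh, and softmax possible.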

  • Agrawal, N.*, Prashanthi, R.*, Biçer, O., & Küpçü, A. (2020). 'BlockSim-Net: A Network Based Blockchain Simulator'. BAŞARIM 2020: National High Performance Computing Conference. Ankara Yıldırım Beyazıt University, Ankara, Turkey, 8-9 October 2020.
Abstract
Since its proposal by Eyal and Sirer (CACM '13), the selfish mining attack on proof-of-work blockchains has been studied extensively, in terms of both improving its impact and defending against it. Before any defense is deployed in a real-world blockchain system, it needs to be tested for security and dependability. However, real blockchain systems are too complex to run tests on or to benchmark developed protocols against. Some simulation environments have been proposed recently, such as BlockSim (Maher et al., '20). However, BlockSim simulates an entire network on a single CPU. It is therefore insufficient to capture the essence of a real blockchain network: it is not distributed, and complications that occur in reality, such as propagation delays, cannot be simulated realistically enough. In this work, we propose BlockSim-Net, a simple, efficient, high-performance, network-based blockchain simulator that better reflects reality.
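
The attack such simulators are built to benchmark has a small enough state machine to sketch as a single-process Monte Carlo estimate. The toy model below (nothing like BlockSim-Net's distributed design, and with no propagation delays) follows the Eyal-Sirer description: alpha is the attacker's fraction of hash power, gamma the fraction of honest miners that build on the attacker's block during a tie:

```python
import random


def selfish_mining_share(alpha, gamma=0.5, rounds=500_000, seed=7):
    """Estimate the selfish miner's share of main-chain blocks under a
    simplified Eyal-Sirer selfish-mining state machine."""
    rng = random.Random(seed)
    lead = 0          # private-chain lead over the public chain
    tie = False       # two competing branches of equal length
    s = h = 0         # selfish / honest blocks on the eventual main chain
    for _ in range(rounds):
        if rng.random() < alpha:          # selfish miner mines the next block
            if tie:
                s += 2                    # publishes and wins the fork
                tie = False
            else:
                lead += 1                 # keeps the block private
        else:                             # honest network mines the next block
            if tie:
                if rng.random() < gamma:  # honest block extends selfish branch
                    s += 1; h += 1
                else:                     # honest branch wins the fork
                    h += 2
                tie = False
            elif lead == 0:
                h += 1                    # plain honest block, attacker adopts
            elif lead == 1:
                lead = 0
                tie = True                # attacker publishes -> race begins
            elif lead == 2:
                s += 2                    # publish all, orphan the honest block
                lead = 0
            else:
                s += 1                    # publish one block, stay ahead
                lead -= 1
    return s / (s + h)
```

With alpha = 0.4 the estimated share comes out well above 0.4, reproducing the headline result that withholding blocks beats honest mining for large miners; at small alpha the share drops below alpha, so the attack does not pay.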

(* indicates equal contribution)

Teaching and Leadership