Social media, web search logs, and online purchases are only some of the sources used by private and public organizations to collect information about individuals.
While the aggregated data are valuable for research and commercial use, they can pose a direct threat to users' privacy.
The field of privacy-preserving data sharing has emerged to create tools that enable institutions and companies to share their clients' information while protecting their privacy.
Differential privacy (DP) has been recognized as the de facto privacy framework for interacting with private information.
In this thesis, we continue the work on releasing histograms with formal privacy guarantees.
In particular, we investigate the effect of sorting as a technique for improving the accuracy of the final approximation.
We identify the settings in which sorting helps, and those in which it should be avoided.
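As an illustrative sketch (not the specific mechanism developed in this thesis), the standard way to release a histogram under DP is the Laplace mechanism: add independent Laplace noise to each bin, then apply any post-processing, such as sorting or clipping, which cannot weaken the privacy guarantee. The function name and parameters below are our own choices for illustration.

```python
import numpy as np

def dp_histogram(counts, epsilon, seed=None):
    """Release a histogram under epsilon-DP via the Laplace mechanism.

    Assumes each individual contributes to exactly one bin, so the
    L1 sensitivity of the histogram is 1 and the noise scale is
    1 / epsilon.
    """
    rng = np.random.default_rng(seed)
    noisy = np.asarray(counts, dtype=float) + rng.laplace(
        scale=1.0 / epsilon, size=len(counts)
    )
    # Clipping to non-negative values is post-processing, so it does
    # not consume any additional privacy budget.
    return np.clip(noisy, 0.0, None)

counts = [120, 45, 30, 5]           # hypothetical true bin counts
release = dp_histogram(counts, epsilon=1.0, seed=0)
```

Sorting the noisy counts before further use is likewise post-processing; the question studied here is when that step actually improves the accuracy of the released approximation.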
DP aims to protect users' records from every possible type of inference attack.
As a result, DP is a very pessimistic privacy framework that treats every piece of information as sensitive.
In this thesis, we relax this requirement.
We propose one-sided privacy (OSP), a novel privacy framework that can handle data classified as either sensitive or non-sensitive.
Our empirical results show that OSP can support new types of applications and offers meaningful utility in cases where DP is known to perform poorly.