We study the obstacles to constructing metastable de Sitter space in string theory. We explain that it is very difficult to find stationary points for which both the string coupling is small and the compactification radii are large, even allowing for arbitrarily large fluxes, and that a set of small perturbations of any would-be metastable de Sitter state will, classically, evolve to uncontrollable singularities.

We study the Transplanckian Censorship Conjecture (TCC) and show a conflict between the TCC and conventional conjectures about the string landscape.

We calculate, as a function of the primordial black hole mass and initial abundance, the combination of dark matter particle masses and number of effective dark degrees of freedom leading to the right abundance of dark matter today, whether or not evaporation stops around the Planck scale.
Sensor network research is still in its infancy. Few real systems are deployed, and little experimental data from sensor networks is available to test proposed protocol designs. Due to this lack of experimental data and of sophisticated models derived from such data, most data-processing algorithms in the sensor network literature are evaluated on data generated from simple parametric models.
We identify a few widely studied classes of problems that are potentially sensitive to the data input: statistical estimation of field data, data compression, and field estimation. We use them as examples to investigate how algorithm performance depends on the data.
For each class of problem, given a selected problem and algorithm instance, we systematically study how algorithm performance varies across a range of data inputs. We also demonstrate that different data inputs can change algorithm performance dramatically; the performance ranking of two algorithms may even reverse depending on the data input.
Finally, we propose a synthetic data generation framework and recommend evaluating algorithms across a wide range of data inputs.
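As a rough illustration of the evaluation methodology described above (not the paper's actual framework; the field model, smoothing parameter, and estimator are assumptions made for this sketch), one can generate synthetic fields whose spatial correlation is controlled by a single knob and measure how a simple estimator's error varies with that knob:

```python
import random

def make_field(n, smooth):
    """Hypothetical synthetic 1-D field: white noise smoothed by a
    moving average. Larger `smooth` means stronger spatial correlation."""
    noise = [random.gauss(0.0, 1.0) for _ in range(n + smooth)]
    return [sum(noise[i:i + smooth]) / smooth for i in range(n)]

def mean_estimate_error(field, k, trials=200):
    """Average error of estimating the field mean from k randomly
    placed sensor readings (a stand-in for 'statistics estimation')."""
    true_mean = sum(field) / len(field)
    err = 0.0
    for _ in range(trials):
        sample = random.sample(field, k)
        err += abs(sum(sample) / k - true_mean)
    return err / trials

random.seed(0)
for smooth in (1, 8, 32):           # sweep the data-input parameter
    field = make_field(1000, smooth)
    print(smooth, round(mean_estimate_error(field, 20), 3))
```

Sweeping the correlation parameter and re-running each candidate algorithm on every generated field is the kind of broad evaluation across data inputs the abstract recommends.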
In many principal-agent relations, objective measures of the agents' performance are not available. In those cases, the principals have to rely on subjective performance measures for designing incentive schemes. Incentive schemes based on subjective performance measures open up the possibility of influence activities by the agents. This paper extends Lazear and Rosen's (1981) model of rank-order tournaments by considering further competition between the agents in a bribery game after production but before selection of the winner. The paper studies how the bribery game affects the principal's design of the rank-order tournament and how the anticipation of the bribery game affects the agents' effort choices.
Alzheimer’s Disease (AD) is a devastating, fatal neurodegenerative disease and a leading cause of dementia worldwide. Despite the great medical need, none of the current FDA-approved AD drugs are disease-modifying. Based on a wealth of cell biological and pathological data, amyloid precursor protein (APP) overexpression is believed to contribute to the degeneration of basal forebrain cholinergic neurons (BFCNs) in initiating AD pathogenesis. Therefore, we proposed that reducing APP expression in BFCNs would be a promising AD therapeutic strategy. Antisense oligonucleotides (ASOs) enable selective and potent degradation of target mRNA and are emerging as a powerful new tool for treating Central Nervous System (CNS) disorders. However, a targeting domain is needed to selectively deliver APP ASOs to BFCNs and prevent CNS-wide APP knockdown. To achieve this goal, I developed antibody-RNA conjugates (ARCs) that combine the superb specificity of anti-TrkA antibodies with the potency of APP ASOs to target APP knockdown to BFCNs. To generate ARCs, I explored traditional lysine-based chemical conjugation and site-specific microbial transglutaminase (MTG) biochemical conjugation approaches, each of which required different antibody routes and conjugation chemistries. Lysine-based conjugation relied on random chemical conjugation through accessible lysines on the antibody surface. Although this approach was effective, it resulted in conjugate heterogeneity and a distribution of Drug:Antibody Ratios (DAR). To produce homogeneous DAR-2 ARCs, I explored site-specific MTG conjugation that directed conjugation to an engineered tag at the antibody C-terminus. Together, my thesis outlines a framework for the development of ARCs that may be applicable to other neurological disorders.
The emergence of sensor networks as one of the dominant technology trends of the coming decades [1] has posed numerous unique challenges to researchers. These networks are likely to be composed of hundreds, and potentially thousands, of tiny sensor nodes functioning autonomously, in many cases without access to renewable energy resources. Cost constraints and the need for ubiquitous, invisible deployments will result in small, resource-constrained sensor nodes.
While the set of challenges in sensor networks is diverse, we focus on fundamental networking challenges in this paper. The key networking challenges in sensor networks that we discuss are: (a) supporting multi-hop communication while limiting radio operation to conserve power, (b) data management, including frameworks that support attribute-based data naming, routing, and in-network aggregation, (c) geographic routing challenges in networks where nodes know their locations, and (d) monitoring and maintenance of such dynamic, resource-limited systems. For each of these research areas, we provide an overview of proposed solutions and discuss one or a few representative solutions in detail. Finally, we illustrate how these networking components can be integrated into a complex data storage solution for sensor networks.
Distributed embedded sensor networks are now being successfully deployed in environmental monitoring of natural phenomena as well as for applications in commerce and physical security. Distributed architectures have been developed for cooperative detection, scalable data transport, and other capabilities and services. However, the complexity of environmental phenomena has introduced a new set of challenges related to sensing uncertainty, associated with the unpredictable presence of obstacles to sensing that appear in the environment. These obstacles may dramatically reduce the effectiveness of distributed monitoring. Thus, a new attribute of distributed embedded computing, self-awareness, must be developed and provided to distributed sensor systems. Self-awareness must give a deployed system the ability to autonomously detect and reduce its own sensing uncertainty. The physical constraints encountered by sensing require physical reconfiguration for the detection and reduction of sensing uncertainty. Networked Infomechanical Systems (NIMS), consisting of distributed, embedded computing systems, provide autonomous physical reconfiguration through controlled mobility. The requirements that lead to NIMS, the implementation of NIMS technology, and its first applications are discussed here.
Monitoring of environmental phenomena with embedded networked sensing confronts the challenge of unpredictable variability in the spatial distribution of phenomena, coupled with demands for a high spatial sampling rate in three dimensions. For example, low-distortion mapping of critical solar radiation properties in forest environments may require two-dimensional spatial sampling rates of greater than 10 samples/m² over transects exceeding 1000 m². Clearly, adequate sampling coverage of such a transect requires an impractically large number of sensing nodes. A new approach, Networked Infomechanical Systems (NIMS), has been introduced to combine autonomously articulated and static sensor nodes, enabling sufficient spatiotemporal sampling density over large transects to meet a general set of environmental mapping demands.
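To see why static coverage is impractical at the sampling rates quoted above, the back-of-the-envelope count of static nodes follows directly from the abstract's figures:

```python
sampling_rate = 10      # samples per m^2 (from the abstract)
transect_area = 1000    # m^2, lower bound on transect size
static_nodes_needed = sampling_rate * transect_area
print(static_nodes_needed)  # 10000 nodes for a purely static deployment
```

At ten thousand or more nodes per transect, a purely static deployment is clearly infeasible, which motivates the mobile, articulated nodes of NIMS.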
This paper describes our work on a critical part of NIMS, the Task Allocation module. We present our methodology and two basic greedy Task Allocation policies: one based on task arrival time (Time policy) and one based on the distance from the robot to the task (Distance policy). We present results from NIMS deployed in a forest reserve and from a lab testbed. The results show that both policies are adequate for the task of spatiotemporal sampling and that they complement each other. Finally, we suggest future directions of research that would both help us better quantify the performance of our system and enable more complex policies combining time, distance, information gain, and other factors.
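The two greedy policies described above can be sketched as follows. This is a minimal illustration under assumed data structures, not the NIMS implementation: the task fields, the one-dimensional transect position, and the tie-breaking behavior are all assumptions of this sketch.

```python
def time_policy(tasks):
    """Greedy by arrival: serve the task that arrived earliest (FIFO)."""
    return min(tasks, key=lambda t: t["arrival"])

def distance_policy(tasks, robot_pos):
    """Greedy by distance: serve the task closest to the robot's
    current position along the transect."""
    return min(tasks, key=lambda t: abs(t["pos"] - robot_pos))

# Hypothetical pending tasks on a transect (positions in meters).
tasks = [
    {"id": "a", "arrival": 0.0, "pos": 40.0},
    {"id": "b", "arrival": 1.5, "pos": 5.0},
]
print(time_policy(tasks)["id"])            # earliest arrival wins
print(distance_policy(tasks, 3.0)["id"])   # nearest task wins
```

The example makes the complementarity visible: the Time policy favors fairness in waiting time, while the Distance policy minimizes travel, and the two can select different tasks from the same queue.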