In conventional pipelined circuits there is only one data wave active in any pipeline stage at any time; therefore, the clock speed of the circuit is limited by the maximum stage delay in the circuit. In wave pipelining, the clock speed depends mostly on the difference between the longest and shortest path delays. Some circuit designs contain redundant elements to make the circuit less sensitive to noise, to provide higher signal driving capability, or for other purposes. Also, some circuit designs include logic to detect the early completion of a computation, or to guarantee that the worst physical path delay does not equate to the worst computational delay. Prior tools for wave-pipelined circuits do not account for such design features. This research develops a computer-aided design tool to determine the maximum clock speed for wave-pipelined circuits with redundant logic, or whose internal timing otherwise depends on the input signal values. Moreover, alternative design techniques are proposed to improve the performance of wave-pipelined circuits.
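The timing contrast described above can be sketched numerically. The function below is an illustrative simplification (the name and margin values are hypothetical, and a real analysis must also handle clock skew, rise/fall times, and the data-dependent delays this research targets): conventional pipelining is bounded by the slowest path, while wave pipelining is bounded by the spread between the slowest and fastest paths.

```python
def min_clock_period(path_delays, setup=0.1, hold=0.1):
    """Illustrative timing bounds (delays and margins in nanoseconds).

    Conventional pipelining: the clock period must cover the slowest
    combinational path plus the register setup time.
    Wave pipelining: several data waves coexist in a stage, so the clock
    period is bounded by the *spread* between the longest and shortest
    path delays plus timing margins, not by the longest delay itself.
    """
    conventional = max(path_delays) + setup
    wave = (max(path_delays) - min(path_delays)) + setup + hold
    return conventional, wave

# example: a stage whose paths vary between 3.0 ns and 4.0 ns
conv, wave = min_clock_period([3.0, 3.5, 4.0])
print(conv, wave)  # the wave-pipelined bound is much tighter
```

This also makes the thesis motivation concrete: narrowing the delay spread (e.g. by balancing paths) speeds up a wave-pipelined circuit even when the longest path stays the same.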
Topic Signatures based on Log Likelihood Ratio (LLR) values have been a staple of Automatic Text Summarization since they were proposed over a decade ago. In my thesis I propose an alternate method for counting information units and calculating the foreground and background probabilities for LLR calculations, based on an information unit's participation in dependency relations generated from a sentence rather than on the sentence itself. I develop a generic text summarization system based on the Text Analysis Conference shared task guidelines and data in order to compare the proposed method of counting with the standard approach in the context of an applied task. Each counting method and unit-of-information definition is run as an experiment on the TAC 2010 and TAC 2011 topic-based document collections and evaluated against human model summaries using the ROUGE statistical measure of n-gram overlap. Although the results of the experiments are inconclusive, the topic signatures generated by the two approaches differ in the information units they contain. I conclude that an alternate evaluation framework, and a semi-abstractive approach leveraging the dependency relations themselves for summary generation, are possible areas for future work and research.
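For readers unfamiliar with topic signatures, the standard Dunning log-likelihood ratio underlying them can be sketched as follows (a generic illustration, not the thesis's dependency-based counting; function names are hypothetical): a unit is scored by how much better separate foreground/background binomial models explain its counts than a single pooled model.

```python
import math

def _log_l(k, n, p):
    # binomial log-likelihood, treating 0 * log(0) as 0
    ll = 0.0
    if k > 0:
        ll += k * math.log(p)
    if n - k > 0:
        ll += (n - k) * math.log(1.0 - p)
    return ll

def llr(k_fg, n_fg, k_bg, n_bg):
    """Dunning log-likelihood ratio for one information unit.

    k_fg, n_fg: unit count and total count in the foreground (topic) set
    k_bg, n_bg: unit count and total count in the background corpus
    A large value marks the unit as part of the topic signature.
    """
    p_fg = k_fg / n_fg
    p_bg = k_bg / n_bg
    p_all = (k_fg + k_bg) / (n_fg + n_bg)
    return 2.0 * (_log_l(k_fg, n_fg, p_fg) + _log_l(k_bg, n_bg, p_bg)
                  - _log_l(k_fg, n_fg, p_all) - _log_l(k_bg, n_bg, p_all))

# a unit 100x more frequent in the foreground scores high;
# one with equal foreground/background rates scores ~0
print(llr(10, 1000, 10, 100000))
print(llr(10, 1000, 1000, 100000))
```

The thesis's proposed change affects only where `k_fg` and `k_bg` come from: counting a unit's participation in dependency relations instead of sentence-level occurrences.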
The issue of rapidly reconfiguring the reference trajectory under unanticipated actuator failures in order to regain lost performance or aircraft handling qualities is explored. Failures detected in real time are compensated for by ensuring the input to the system reflects current system conditions. This thesis will also show that only the general structure of the failed system component is needed to achieve successful failure detection and reference trajectory reconfiguration. This approach allows the nominal control structure to remain unchanged in the presence of changing flight and system conditions. Acceptable system performance is recovered by detecting the actuator failures in real time. The benefit of this approach is that the modification does not alter the control gains of the closed-loop system, which eliminates the apprehensions associated with most adaptive control techniques. The technique will be implemented on a linear longitudinal model of an F-16-like aircraft, and the efficacy of the basic approach will be shown through computer simulations.
This thesis has two parts. In the first part, we uncover and study a new superconvergence phenomenon for a wide family of discontinuous Galerkin and hybridized methods for one-dimensional convection-diffusion problems. We show that, if the numerical traces are consistent and conservative, then they superconverge at the nodes of the triangulation. In particular, if polynomials of degree at most p are used for approximation, then the convergence is of order 2p + 1 for the local discontinuous Galerkin methods and the hybridized Raviart-Thomas method. This implies that by using a simple and local post-processing, we can obtain approximations that converge uniformly with order 2p + 1 in the whole computational domain. Our numerical results are in agreement with our theoretical findings. In the second part, we introduce and analyze discontinuous Galerkin methods for Timoshenko beams. We prove that a wide family of these methods is free from the so-called shear locking, that is, the performance of the method is not affected by the thickness of the beam. We prove optimal error estimates in the L2-norm and in the energy seminorm. As we did for convection-diffusion problems, we uncover and study new superconvergence properties for these methods. We show that the numerical traces superconverge with order 2p + 1 if all the unknowns are approximated with piecewise polynomials of degree at most p. We exploit this phenomenon to define computationally inexpensive post-processing techniques which produce approximate solutions converging with order 2p + 1 uniformly throughout the computational domain. Our numerical experiments verify our theoretical findings.
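The superconvergence and post-processing claims can be stated compactly (a sketch of the rates asserted above, where $u$ is the exact solution, $\widehat{u}_h$ the numerical trace, $u_h^\ast$ the post-processed approximation, $h$ the mesh size, and $p$ the polynomial degree):

```latex
\bigl|(u - \widehat{u}_h)(x_i)\bigr| \le C\, h^{2p+1}
\quad \text{at each mesh node } x_i,
\qquad
\|u - u_h^\ast\|_{L^2(\Omega)} \le C\, h^{2p+1}.
```

The first estimate is the nodal superconvergence of the traces; the second is the uniform rate recovered from it by the local post-processing.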
Introduction: Sodium hypochlorite (NaOCl) is the irrigant most often used in endodontics due to its antimicrobial activity and tissue dissolution ability. Irrigation is critical to endodontic success, and several new methods have been developed [Passive Ultrasonic Irrigation (PUI); EndoActivator (EA)] to improve irrigation efficacy. Using a novel spectrophotometric method, this study evaluated NaOCl irrigant extrusion during endodontic treatment. Methods: 114 single-rooted extracted teeth were decoronated to leave 15 mm of root length for each tooth. Cleaning and shaping of teeth was completed using hand and rotary instrumentation (ProTaper [Dentsply]) to apical file size #40/0.04 taper. Root surfaces were sealed (except at the apex), and each root was placed into a microcentrifuge tube with the coronal aspect sealed (Imprint [3M] and Revolution [Kerr]) to prevent inadvertent NaOCl leakage anywhere but from the canal. 54 straight roots (0° up to 20° curvature; n=18/group) and 60 curved roots (>20° curvature; n=20/group) were included. Teeth were irrigated with 5.25% NaOCl by one of three methods: conventional syringe irrigation, passive ultrasonic irrigation (PUI), or EndoActivator (EA) irrigation. Extrusion of NaOCl was evaluated using a pH-sensitive dye and a spectrophotometer (Synergy HT [BioTek]). Standard curves were prepared with known amounts of irrigant to quantify the amounts in unknown samples. Results: In the straight root sample, irrigant extrusion was minimal with all methods, with most teeth showing no sodium hypochlorite extrusion (conventional 61.1%, PUI 72%, EA 78%) or minimal extrusion of <1 μl (conventional 16.7%, PUI 16.6%, EA 11%). Minor irrigant extrusion (1-3 μl) occurred in 11% of teeth in all three irrigation methods of the straight root sample. One tooth in the PUI group extruded 41 μl of irrigant due to an apical anomaly and was replaced to maintain n=18/group.
The curved root sample showed minimal irrigant extrusion with all methods, with most teeth showing no sodium hypochlorite extrusion (conventional 65%, PUI 75%, EA 75%) or minimal extrusion of <1 μl (conventional 25%, PUI 15%, EA 5%). Minor irrigant extrusion (1-3 μl) occurred in 10% of teeth in all groups of the curved root sample. Two teeth in the EA group extruded 3-10 μl of sodium hypochlorite. Conclusion: Using the PUI or EA tip to within 1 mm of the working length appears to be fairly safe, yet apical anatomy can vary, allowing extrusion of irrigant. The spectrophotometric method used here is very sensitive: it can detect <1 μl of sodium hypochlorite extrusion while quantifying the amount of irrigant extruded. Use of sodium hypochlorite has been cautioned in cases of an open apex, perforation, and vertical root fractures, but apical anatomy should also be evaluated.
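The standard-curve quantification step works by fitting a calibration line to known irrigant amounts and then inverting it for unknown samples. A minimal sketch with hypothetical absorbance readings (the actual assay values are not given in the abstract):

```python
def fit_line(xs, ys):
    # ordinary least-squares fit of y = slope * x + intercept
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# hypothetical standard curve: absorbance of the pH-sensitive dye vs.
# known microliters of extruded NaOCl (values are illustrative only)
volumes_ul = [0.0, 1.0, 2.0, 4.0]
absorbance = [0.02, 0.12, 0.22, 0.42]
slope, intercept = fit_line(volumes_ul, absorbance)

# invert the curve to quantify an unknown sample's extruded volume
unknown_volume = (0.17 - intercept) / slope
print(round(unknown_volume, 3))  # -> 1.5 (microliters)
```

This inversion of a fitted calibration line is what lets the assay report quantities below 1 μl rather than a simple extruded/not-extruded reading.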
The aim of this research is to understand the complexity of community food insecurity in a rural region of the United States. In this country, food insecurity is more a matter of improper distribution than of scarcity. Quantifying why and where food distribution is not equitable or adequate depends on the context of place. Multiple indicators of food security specified in the literature were analyzed to create a food insecurity index. The index was compared to regional insight through the use of stakeholder surveys. We further analyzed the food insecurity index for spatial autocorrelation using Geographical Information Systems (GIS) technology, looking for spatial patterns of potential clustering or dispersal of the phenomenon within the region. Balancing top-down and bottom-up approaches, this project uses a mixed methods approach, testing the multidimensional indicators of food insecurity to uncover potential barriers affecting the most vulnerable areas within the region.
The novel instrumentation of nanomanipulation coupled to nanospray mass spectrometry and its applications are presented. The nanomanipulator has a resolution of 10 nm step sizes, allowing the specific fine movement used to probe and characterize objects of interest. Nanospray mass spectrometry needs only a minimum sample volume of 300 nl and a minimum sample size of 300 attograms to analyze an analyte, making it the ideal instrument to couple to nanomanipulation. The nanomanipulator is mounted on an inverted microscope and consists of 4 nano-positioners; these nano-positioners hold end-effectors and other tools used for manipulation. This original coupling has been used to enhance the current abilities of cellular probing and trace fiber analysis. Experiments have been performed to demonstrate the functionality of this instrument and its capabilities. Histidine and caffeine have been sampled directly from single fibers and analyzed. Lipid bodies from cotton seeds have been sampled indirectly and analyzed. The few applications demonstrated are only the beginning of nanomanipulation coupled to nanospray mass spectrometry, and the possible applications are numerous, especially with the ability to design and fabricate new end-effectors with unique abilities. Future study will further the applications in direct cellular probing, including toxicology studies and organelle analysis of single cells. Further studies will be directed toward forensic applications of this instrument, including gunshot residue sampled from fibers.
This dissertation examines how US immigrant family detention policy emerged from reinvigorated border security priorities, immigration policing practices, and international migration flows. Based on a qualitative mixed methods approach, the research traces how discourses of threat, vulnerability, and safety produce detainable child and parent subjects that displace “the family” as a legal entity. I show that immigration law relies on specific kinds of geographical knowledge, producing what I call the ‘geopolitics of vulnerability.’ More broadly, I analyze how current immigration enforcement practices work at local, national, and international scales, so that detention deters future migration as much as it penalizes existing undocumented migrants. Tracing how legal categorization, isolation, criminalization, and forced mobility discipline detained families, I show how detention bears down on migrant networks, defying individualized and national scalings of immigration law. Family detention, like the broader detention system, is authorized through overlapping forms of administrative discretion, and I analyze how the “plenary doctrine of immigration” resonates with ICE’s discretionary authority. Finally, I trace how immigrant rights advocates mobilize conceptions of “home-like” and “prison-like” facilities, and how ICE reimagined its “residential” facilities in response. Empirically and theoretically, my project contributes the first academic study of US family detention to research on kinship, citizenship, security, geopolitics, and immigration enforcement. Keywords: Immigration, Detention, Feminism, Geopolitics, Borders.
This dissertation investigates the effect of obesity on labor market outcomes. Obese people may be discriminated against by consumers or employers due to a distaste for obese people. Employers also may not want to hire obese people because of the expected health costs if the employers provide health insurance to their employees. Because of this distaste, or because of these added costs, being obese may result in poor labor market outcomes in terms of wages and/or the likelihood of being employed, as well as sorting of obese people into jobs where slimness is not rewarded. This study used the National Longitudinal Survey of Youth 1979 (NLSY79). The NLSY79 provides panel information for a nationally representative sample of 12,686 young men and women who were 14 to 22 years old when first surveyed in 1979. The sample was followed for 14 years. Labor market outcomes were measured by (1) the probability of employment, and (2) the probability of holding an occupation where slimness is potentially rewarded in hourly wages. Weight was measured by Body Mass Index (BMI). All results were assessed separately by gender as a function of BMI splines and other controls. The endogeneity of BMI was controlled in a two-stage instrumental variable estimation model with over-identifying exogenous individual and state-level instruments, controlling for individual fixed effects. The Heckman selection model was used to control for selection into the labor force, with the state-level identifying instruments of the nonemployment rate, the number of business establishments, and the number of Social Security Program beneficiaries. Results show that gaining weight adversely affects labor market outcomes for women, but the effect is mixed for men overall. The size and direction of the effects vary by gender, age group, and type of occupation.
Findings from this investigation could improve our understanding of the economic cost of obesity to an individual, beyond its adverse effects on health. The spillover effects of obesity increase its total cost to both individuals and society as a whole.
Classification algorithms represent a rich set of tools that train a classification model from a given training set to classify previously unseen test instances. Although existing methods have studied classification algorithm performance with respect to feature selection, noise conditions, and sample distributions, existing studies have not addressed an important issue: classification algorithm performance under feature deletion and addition. In this thesis, we carry out a sensitivity study of classification algorithms using feature deletion and addition. Three types of classifiers, (1) weak classifiers, (2) generic and strong classifiers, and (3) ensemble classifiers, are validated on three types of data: (1) high-dimensional feature data, (2) gene expression data, and (3) biomedical document data. In the experiments, we continuously add redundant features to the training and test sets in order to observe classification algorithm performance, and also continuously remove features to assess the performance of the underlying classifiers. Our studies draw a number of important findings, which will help the data mining and machine learning communities understand the genuine performance of common classification algorithms on real-world data.
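The add-redundant-features protocol can be illustrated with a toy experiment (a sketch under stated assumptions: a simple nearest-centroid classifier and synthetic Gaussian data stand in for the thesis's real classifiers and datasets). As pure-noise features are appended to the training and test sets, accuracy degrades:

```python
import random

random.seed(0)

def make_data(n, noise_dims):
    # two classes separated along one informative feature;
    # every additional dimension is pure Gaussian noise
    data = []
    for _ in range(n):
        label = random.choice([0, 1])
        features = [random.gauss(2.0 * label, 0.5)]
        features += [random.gauss(0.0, 1.0) for _ in range(noise_dims)]
        data.append((features, label))
    return data

def centroid_accuracy(train, test):
    # nearest-centroid classifier: predict the class whose mean is closest
    dims = len(train[0][0])
    centroids = {}
    for c in (0, 1):
        pts = [x for x, y in train if y == c]
        centroids[c] = [sum(p[d] for p in pts) / len(pts) for d in range(dims)]
    correct = 0
    for x, y in test:
        pred = min((0, 1),
                   key=lambda c: sum((x[d] - centroids[c][d]) ** 2
                                     for d in range(dims)))
        correct += (pred == y)
    return correct / len(test)

# sweep the number of redundant (noise) features, as in the thesis protocol
accs = {}
for noise_dims in (0, 10, 500):
    train = make_data(200, noise_dims)
    test = make_data(200, noise_dims)
    accs[noise_dims] = centroid_accuracy(train, test)
    print(noise_dims, round(accs[noise_dims], 3))
```

Running the deletion direction is the mirror image: starting from the full feature set and dropping features one group at a time, then re-measuring accuracy at each step.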