Computer Science Doctoral Work
Permanent URI for this collection: hdl:10365/32551
Browsing Computer Science Doctoral Work by program "Software Engineering"
Now showing 1 - 14 of 14
Item: Addressing Off-Nominal Behaviors in Requirements for Embedded Systems (North Dakota State University, 2015). Aceituna, Daniel.
System requirements are typically specified on the assumption that the system's operating environment will behave in an expected, nominal manner. When gathering requirements, one concern is whether the requirements are too ambiguous to account for every possible, unintended Off-Nominal Behavior (ONB) that the operating environment can create, resulting in an undesired system state. In this dissertation, we present two automated approaches that can expose, within a set of embedded requirements, whether an ONB can result in an undesired system state. Both approaches employ a modeling technique developed as part of this dissertation called the Causal Component Model (CCM). The first approach uses model checking to property-check requirements against temporal logic properties specifically created to oppose ONBs. To facilitate the use of model checking by requirements engineers and by the non-technical stakeholders who are the system domain experts, a framework for the model checker interface was developed using the CCM. The CCM serves as both a cognitively friendly input and output to the model checker. The second approach extends the CCM into a dedicated ONB property checker, which overcomes the limitations of the model checker by not only exposing ONBs but also facilitating their correction. We demonstrate how both approaches can expose and help correct potential off-nominal behavior problems using requirements that represent real-world products. Our case studies show that both approaches can expose a system's susceptibility to ONBs and provide enough information to correct the potential problems those ONBs can cause.
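The CCM and the dissertation's property checker are not reproduced in the abstract, so the following Python sketch only illustrates the underlying idea: exhaustively exercise a requirements-level state model under both nominal and off-nominal events, and check a property written to oppose a specific ONB. The controller, its events, the transition table, the "no change" default, and the property are all hypothetical.

```python
from collections import deque

# Hypothetical transition table for a small embedded controller, derived from
# a toy requirement set. Keys: (state, event) -> next state. The event prefixed
# "onb_" is an off-nominal environment behavior the requirements never mention;
# as a stand-in for unspecified behavior, we assume unlisted (state, event)
# pairs leave the state unchanged.
TRANSITIONS = {
    ("idle", "start_pressed"): "heating",
    ("heating", "temp_reached"): "holding",
    ("holding", "stop_pressed"): "idle",
}
EVENTS = {"start_pressed", "temp_reached", "stop_pressed", "onb_sensor_dropout"}

def undesired(state, history):
    # Property written to oppose an ONB: the system must not keep heating
    # right after a sensor dropout, since temperature is no longer observable.
    return state == "heating" and bool(history) and history[-1] == "onb_sensor_dropout"

def explore(initial="idle", max_depth=6):
    """Breadth-first exploration of every event sequence up to max_depth,
    returning a counterexample trace if an undesired state is reachable."""
    frontier = deque([(initial, [])])
    while frontier:
        state, history = frontier.popleft()
        if undesired(state, history):
            return history
        if len(history) < max_depth:
            for event in sorted(EVENTS):
                nxt = TRANSITIONS.get((state, event), state)  # default: no change
                frontier.append((nxt, history + [event]))
    return None

if __name__ == "__main__":
    trace = explore()
    print("ONB counterexample:", trace or "none found")
```

Like a model checker, the search reports a concrete trace (here, a start press followed by a sensor dropout) that requirements engineers can inspect and correct.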
Item: Assessment of Engineering Methodologies for Increasing CubeSat Mission Success Rates (North Dakota State University, 2021). Alanazi, Abdulaziz.
In the last twenty years, CubeSat systems have gained popularity in educational institutions and commercial industries. CubeSats have attracted educators and manufacturers because they can be produced quickly and have low cost, small size, and small mass. However, while developers can swiftly design and build their CubeSats with a team of students from different disciplines using COTS parts, this does not guarantee that the CubeSat mission will be successful. Statistics show that mission failure is frequent. For example, out of 270 "university-class" CubeSats launched between 2002 and 2016, 139 failed in their mission [1]. Statistics also show that the average failure rate of CubeSat missions is higher in academic and research institutions than in commercial or government organizations. Reasons for failure include power, mechanical, communications, and system design issues. Some researchers have suggested that the problem lies within the design and development process itself, in that CubeSat developers mainly focus on system- and component-level designs while neglecting requirements elicitation and other key systems engineering activities [2]. To increase the success rate of CubeSat missions, systems engineering steps and processes need to be implemented in the development cycle. Using these processes can also help CubeSat designs and systems become more secure, reusable, and modular. This research identifies multiple independent variables and measures their effectiveness for driving CubeSat systems' mission success. It seeks to increase the CubeSat mission success rate by developing systems engineering methodologies and tools. It also evaluates the benefits of applying systems engineering methodologies and practices that can be applied at different stages of the CubeSat project lifecycle and across different CubeSat missions.

Item: Automated Framework to Improve User's Awareness and to Categorize Friends on Online Social Networks (North Dakota State University, 2015). Barakat, Rahaf.
The popularity of online social networks has brought up new privacy threats. These threats often arise after users willingly, but unwittingly, reveal their information to a wider group of people than they actually intended. Moreover, the widely adopted "friends-based" privacy control has proven ill-equipped to prevent dynamic information disclosure, such as in user text posts. Ironically, it fails to capture the dynamic nature of this data by reducing the problem to manual privacy management, which is a time-consuming, tiresome, and error-prone task. This dissertation identifies an important problem with posting on social networks and proposes a unique two-phase approach to the problem. First, we suggest an additional layer of security be added to social networking sites. This layer includes a natural-language framework that automatically checks texts to be posted by the user, detects dangerous information disclosure, and warns the user. A set of detection rules was developed for this purpose and tested with over 16,000 Facebook posts to confirm the detection quality. The results showed that our approach has an 85% detection rate, which outperforms other existing approaches. Second, we propose utilizing trust between friends as currency to access dangerous posts. The unique feature of our approach is that the trust value is related to the absence of interaction on the given topic. To approach our goal, we defined trust metrics that can be used to determine trustworthy friends with respect to a given topic. In addition, we built a tool that calculates the metrics automatically and then generates a list of trusted friends. Our experiments show that our approach has reasonably acceptable performance in predicting friends' interactions for given posts. Finally, we performed data analysis on a small set of user interaction records on Facebook to show that friends' interaction can be triggered by certain topics.
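The abstract does not give the trust metrics' formulas; the sketch below encodes one plausible reading of "trust related to the absence of interaction on the given topic": a friend's trust score for a topic falls as that topic's share of the friend's past interactions rises. The friends, topics, interaction log, and threshold are all invented.

```python
from collections import Counter

# Hypothetical interaction log: one record per like/comment a friend left,
# tagged with the topic of the post it was left on.
INTERACTIONS = [
    ("alice", "health"), ("alice", "health"), ("alice", "travel"),
    ("bob", "travel"), ("bob", "travel"), ("bob", "travel"),
    ("carol", "health"),
]
FRIENDS = {"alice", "bob", "carol", "dave"}  # dave has never interacted

def topic_trust(friend, topic, interactions=INTERACTIONS):
    """Trust in [0, 1]: higher when the friend shows *no* activity on the
    topic, following the abstract's absence-of-interaction idea."""
    counts = Counter(interactions)
    total = sum(c for (f, _), c in counts.items() if f == friend)
    if total == 0:
        return 1.0  # no history at all: treated here as maximally trusted
    return 1.0 - counts[(friend, topic)] / total

def trusted_friends(topic, threshold=0.75):
    """Friends trusted enough to be shown a sensitive post on `topic`."""
    return sorted(f for f in FRIENDS if topic_trust(f, topic) >= threshold)

if __name__ == "__main__":
    print(trusted_friends("health"))  # -> ['bob', 'dave']
```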
Item: Developing and Validating Active Learning Engagement Strategies to Improve Students' Understanding of Programming and Software Engineering Concepts (North Dakota State University, 2020). Brown, Tamaike Mariane.
The introductory computer programming course is one of the fundamental courses in computer science. Students enrolled in computer science courses at the college or university level have been reported to lack motivation and engagement when learning introductory programming (CS1). Traditional classrooms with lecture-based delivery of content do not meet the needs of students being exposed to programming courses for the first time. Students enrolled in first-year programming courses are better served by a platform that can provide them with a self-paced learning environment, quicker feedback, easier access to information, and different levels of learning content and assessment that can keep them motivated and engaged. Introductory programming courses (hereafter referred to as CS1 and CS2 courses) also include students from non-STEM majors who struggle to learn basic programming concepts. Studies report that CS1 courses nationally have high dropout rates, ranging between 30% and 40% on average. Reasons cited by researchers for the high dropout rate include lack of resource support, engagement, motivation, practice, feedback, and confidence. Although interest in addressing these issues in computing is expanding, the dropout rate for CS1/CS2 courses remains high. The software engineering industry often believes that the academic community is missing the mark in the education of computer science students. Employers recognize that students entering the workforce directly from university training often do not have the complete set of software development skills they will need to be productive, especially in large software development companies.

Item: Development and Validation of Feedback-Based Testing Tutor Tool to Support Software Testing Pedagogy (North Dakota State University, 2020). Cordova, Lucas Pascual.
Current testing education tools provide coverage deficiency feedback that either mimics industry code coverage tools or enumerates the instructor tests that were absent from the student's test suite. While useful, these types of feedback mechanisms are akin to revealing the solution and can inadvertently lead a student down a trial-and-error path rather than toward a systematic approach. In addition to an inferior learning experience, a student may become dependent on the presence of this feedback in the future. Considering these drawbacks, there exists an opportunity to develop and investigate alternative feedback mechanisms that promote positive reinforcement of testing concepts. We believe that an inquiry-based learning approach is a better alternative (to simply providing the answers), in which students construct and reconstruct their knowledge through discovery and guided learning techniques. To facilitate this, we present Testing Tutor, a web-based assignment submission platform that supports different levels of testing pedagogy via a customizable feedback engine. This dissertation is based on the experiences of using Testing Tutor at different levels of the curriculum. The results indicate that the groups using conceptual feedback produced higher-quality test suites (achieving higher average code coverage, fewer redundant tests, and higher rates of improvement) than the groups that received traditional code coverage feedback. Furthermore, students produced higher-quality test suites when the conceptual feedback was tailored to the task level for lower-division student groups and to the self-regulating level for upper-division student groups. We plan to perform additional studies with the following objectives: 1) improve the feedback mechanisms; 2) understand the effectiveness of Testing Tutor's feedback mechanisms at different levels of the curriculum; and 3) understand how Testing Tutor can be used as a tool for instructors to gauge learning and determine whether intervention is necessary to improve students' learning.
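Testing Tutor's actual feedback engine is not described at code level in the abstract; as a minimal illustration of conceptual feedback, the sketch below maps coverage-analysis findings to concept-level hints instead of enumerating the instructor tests a student is missing. The finding categories and hint wording are hypothetical.

```python
# Hypothetical mapping from coverage-analysis findings to concept hints.
CONCEPT_HINTS = {
    "uncovered_branch": ("Revisit branch coverage: have you exercised both "
                         "the true and false outcome of every decision?"),
    "missing_boundary": ("Consider boundary-value analysis: test values at, "
                         "just below, and just above each limit."),
    "redundant_test": ("Two of your tests exercise the same path; decide "
                       "what distinct behavior each test should pin down."),
}

def conceptual_feedback(findings):
    """Translate raw findings into hints that teach the underlying testing
    concept rather than revealing the missing tests themselves."""
    return [CONCEPT_HINTS[f] for f in findings if f in CONCEPT_HINTS]

if __name__ == "__main__":
    for hint in conceptual_feedback(["uncovered_branch", "missing_boundary"]):
        print("-", hint)
```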
Item: Domain Ontology Based Detection Approach to Identify Effect Types of Security Requirements upon Functional Requirements (North Dakota State University, 2015). Al-Ahmad, Bilal Ibrahim.
Requirements engineering is a subfield of software engineering that is concerned with analyzing software requirements specifications. An important process in requirements engineering is tracing requirements to investigate relationships between requirements and other software artifacts (i.e., source code, test cases, etc.). Requirements traceability is mostly manual because of the difficulty of automating the process. A specific mode of tracing is inter-requirements traceability, which focuses on tracing requirements to other requirements. Investigating inter-requirements traceability is very important because it has significant influence on many software engineering activities such as requirements implementation, consistency checking, and requirements change impact management. Several studies have used different approaches to identify three types of relationships: cooperative, conflicting, and irrelevant. However, current solutions have several shortcomings: (1) they are applicable only to fuzzy requirements, user requirements, and technical requirements; (2) they ignore the syntactic and semantic aspects of software requirements; and (3) they give little attention to the influence of security requirements on functional requirements. Furthermore, several traceability tools lack predefined rules for identifying relationships.
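As a toy illustration of rule-based detection of cooperative, conflicting, and irrelevant relationships between a security requirement and a functional requirement, consider the sketch below; the dissertation's domain ontology and rules are far richer, and the keyword sets and conflict table here are invented.

```python
# Hand-invented stand-ins for what a domain ontology would supply.
SECURITY_ACTIONS = {"encrypt", "authenticate", "lock", "restrict"}
CONFLICT_PAIRS = {("encrypt", "respond"), ("lock", "access")}

def classify(sec_req, func_req):
    """Label a (security, functional) requirement pair using simple rules."""
    sec_words = set(sec_req.lower().split())
    func_words = set(func_req.lower().split())
    sec_actions = sec_words & SECURITY_ACTIONS
    # Rule 1: a known antagonistic action pair -> conflicting.
    if any((s, f) in CONFLICT_PAIRS for s in sec_actions for f in func_words):
        return "conflicting"
    # Rule 2: enough shared domain vocabulary -> cooperative.
    if len(sec_words & func_words) >= 2:
        return "cooperative"
    return "irrelevant"

if __name__ == "__main__":
    print(classify("encrypt all patient records before storage",
                   "the system shall respond to record queries in 2 seconds"))
```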
Item: Effective Regression Testing of Web Applications through Reusability of Resources (North Dakota State University, 2018). Eda, Madhusudana Ravi.
Regression testing is one of the most important and costly phases of a software development project. Regression testing is performed to ensure that no new faults are introduced by changes to a software system. Web applications undergo frequent changes, and with such frequent changes, executing the entire regression test suite is not cost-effective. Hence, there is a need for techniques that can reduce the overall cost of regression testing. We propose two regression testing techniques that demonstrate the benefits of reusing existing resources to reduce the costs of regression testing of web applications. Our techniques are based on the PHP Analysis and Regression Testing Engine (PARTE) approach, which identifies code paths that were modified between two versions of an application. We extend PARTE with two components. The Reusable Tests component selects existing tests that can be reused to regression test the modified version of the application and identifies obsolete tests. The Test Repair component repairs such obsolete tests. Our hypothesis is that this combined approach of identifying reusable tests and repairing obsolete tests can reduce the overall effort of regression testing. To test our hypothesis, we conducted experiments on real-world web applications. In our first experiment, we learned that a significant number of input values from the original version of an application can be reused to test the modified version. Based on this learning, we conducted our second experiment to evaluate whether a regression test selection technique can benefit from the reusability of input values. Results from the second experiment demonstrated that reusing input values minimized the cost of verifying input values in selected tests and identified obsolete tests. Findings from these two experiments encouraged us to conduct an experiment to evaluate whether the PARTE approach can be further extended to repair obsolete tests. Results from our third experiment showed that some obsolete tests can be automatically repaired. Thus, these novel approaches demonstrate the benefits of reusing existing resources and show how further studies can evaluate approaches that combine one or more regression testing techniques to further reduce the costs of regression testing.
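PARTE's analysis is not reproduced here; assuming its output (the code path each test exercises, plus the sets of modified and removed functions between two versions), a minimal triage into reusable, obsolete, and unaffected tests might look like this sketch. All test names, paths, and change sets are hypothetical.

```python
# Hypothetical mapping from each test to the functions its path traverses.
TEST_PATHS = {
    "test_login_ok":     ["validate", "hash", "session_start"],
    "test_login_bad_pw": ["validate", "hash", "reject"],
    "test_profile_edit": ["load_profile", "save_profile"],
}
MODIFIED = {"hash"}          # functions changed between the two versions
REMOVED = {"save_profile"}   # functions deleted in the new version

def triage(test_paths, modified, removed):
    """Split tests into reusable (must re-run), obsolete (need repair),
    and unaffected (safe to skip this cycle)."""
    reusable, obsolete, unaffected = [], [], []
    for test, path in test_paths.items():
        if any(fn in removed for fn in path):
            obsolete.append(test)    # exercises code that no longer exists
        elif any(fn in modified for fn in path):
            reusable.append(test)
        else:
            unaffected.append(test)
    return reusable, obsolete, unaffected

if __name__ == "__main__":
    print(triage(TEST_PATHS, MODIFIED, REMOVED))
```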
Item: Fuzzy Reasoning Based Evolutionary Algorithms Applied to Data Mining (North Dakota State University, 2015). Chen, Min.
Data mining and information retrieval are difficult tasks for various reasons. First, as the volume of data increases tremendously, most of the data are complex, large, imprecise, uncertain, or incomplete. Furthermore, information retrieval may be imprecise or subjective. Therefore, users require comprehensible and understandable results during the process of data mining or knowledge discovery. Fuzzy logic has become an active research area because of its capability to handle perceptual uncertainties, such as ambiguity or vagueness, and its excellent ability to describe nonlinear systems. This dissertation focuses on two main paradigms. The first paradigm applies fuzzy inductive learning to classification problems. A fuzzy classifier based on discrete particle swarm optimization and a fuzzy decision tree classifier are implemented in this paradigm. The fuzzy classifier based on discrete particle swarm optimization includes a discrete particle swarm optimization classifier and a fuzzy discrete particle swarm optimization classifier. The discrete particle swarm optimization classifier is devised for and applied to discrete data, whereas the fuzzy discrete particle swarm optimization classifier is an improved version that can handle both discrete and continuous data to manage uncertainty and imprecision. A fuzzy decision tree classifier with a feature selection method is also proposed, based on the ideas of mutual information and genetic algorithms. The second paradigm is fuzzy cluster analysis. The purpose is to provide efficient approaches to identifying similar or dissimilar descriptions of data instances. The shapes of the clusters are either hyper-spherical or hyper-planar. A fuzzy c-means clustering approach based on particle swarm optimization, whose clustering prototype is hyper-spherical, is proposed to automatically determine the optimal number of clusters. In addition, the fuzzy c-regression model, which has hyper-planar clusters, has received much attention in the recent literature on nonlinear system identification and has been successfully employed in various areas. Thus, a fuzzy c-regression model clustering algorithm is applied to color image segmentation.
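As a concrete reference point for the clustering paradigm, here is a plain fuzzy c-means implementation; the dissertation's particle-swarm layer for choosing the number of clusters, and the c-regression variant, are omitted for brevity.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, tol=1e-5, seed=0):
    """Standard fuzzy c-means. X: (n, d) data; c: clusters; m: fuzzifier > 1.
    Returns cluster centers and the (c, n) membership matrix."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, X.shape[0]))
    U /= U.sum(axis=0)                       # memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        dist = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2)
        dist = np.fmax(dist, 1e-12)          # guard against division by zero
        inv = dist ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=0)        # u_ik = 1 / sum_j (d_ik/d_jk)^(2/(m-1))
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
    centers, U = fuzzy_c_means(X, c=2)
    print(np.round(centers, 2))              # roughly (0, 0) and (3, 3)
```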
Item: A New Coupling Metric: Combining Structural and Semantic Relationships (North Dakota State University, 2014). Alenezi, Mamdouh Khalaf.
Maintaining object-oriented software is problematic and expensive. Earlier research has revealed that complex relationships among object-oriented software entities are a key reason that maintenance is costly. Therefore, measuring the strength of these relationships has become a requirement for developing proficient software maintenance techniques. Coupling, a measure of the interdependence among software entities, is an important property for which many software metrics have been defined. It is widely agreed that the level of coupling in a software product has consequences for its maintenance. In order to understand which aspects of coupling affect quality or other external attributes of software, this dissertation introduces a new coupling metric for object-oriented software that combines structural and semantic relationships among methods and classes. The dissertation studies the use of the new coupling metric in change impact analysis and in predicting fault-prone and maintainable classes. Three empirical studies were performed to evaluate the new coupling metric, and they established three results. First, the new coupling metric can be effectively used to identify other classes that might potentially be affected by a change to a given class. Second, a significant correlation between the new coupling metric and faults was found. Finally, the new metric shows good promise in predicting maintainable classes. We expect that this new software metric will contribute to improving the design of incremental software change and thus lead to increased software quality and reduced software maintenance costs.
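The abstract does not state the metric's formula. One plausible shape for a metric combining the two relationship kinds is a weighted sum of normalized structural coupling (method-call counts) and semantic similarity (cosine similarity of the two classes' identifier/comment vocabularies), sketched below; the weight, the normalization, and the example classes are assumptions.

```python
import math
from collections import Counter

def cosine(words_a, words_b):
    """Cosine similarity of two bags of words (e.g., identifiers/comments)."""
    ca, cb = Counter(words_a), Counter(words_b)
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def combined_coupling(calls, max_calls, terms_a, terms_b, alpha=0.5):
    """alpha weights structural coupling (call count normalized by the
    largest call count in the system) against semantic similarity."""
    structural = calls / max_calls if max_calls else 0.0
    return alpha * structural + (1 - alpha) * cosine(terms_a, terms_b)

if __name__ == "__main__":
    order_terms = ["order", "item", "total", "invoice", "customer"]
    invoice_terms = ["invoice", "total", "tax", "customer", "print"]
    # 4 calls between the two classes; 10 is the system-wide maximum.
    print(combined_coupling(4, 10, order_terms, invoice_terms))  # 0.5
```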
Item: A Structural Metric Model to Predict the Complexity of Web Interfaces (North Dakota State University, 2017). Attaallah, Abdulaziz Ahmad.
The complexity of web pages has been widely investigated, and many experimental studies have used various metrics to measure certain aspects of users, tasks, or GUIs. In this research, we focus on the visual structure of web pages and how different users perceive their complexity. Several important measures and design elements have rarely been addressed together in studying the complex nature of the visual structure. Therefore, we propose a metric model to clarify this issue, conducting several experiments on groups of participants and using several websites from different genres. The goal is to form a metric model that can help developers measure more precisely the complexity of web interfaces under development. From the first experiment, we drew guidelines for the major entities in the metric model, focusing on the two most important aspects of web interfaces: structural factors and structural elements. Four main factors and three main elements proved most representative of the concept of complexity: the four factors are size, density, grouping, and alignment, and the three elements are text, graphics, and links. Based on them, we developed a structural metric model that relates these factors and elements, and we compared the metric model's results to web interface users' ratings using statistical analysis to predict the overall complexity of web interfaces. The results of that study are very promising, showing that our metric model is capable of predicting the complex nature of web interfaces with high confidence.

Item: Towards Change Propagating Test Models In Autonomic and Adaptive Systems (North Dakota State University, 2012). Akour, Mohammed Abd Alwahab.
The major motivation for self-adaptive computing systems is the self-adjustment of the software according to a changing environment. Adaptive computing systems can add, remove, and replace their own components in response to changes in the system itself and in the operating environment of a software system. Although these systems may provide a certain degree of confidence against new environments, their structural and behavioral changes should be validated after adaptation occurs at runtime. Testing dynamically adaptive systems is extremely challenging because both the structure and behavior of the system may change during its execution. After self-adaptation occurs in autonomic software, new components may be integrated into the software system. When new components are incorporated, testing them becomes a vital phase for ensuring that they will interact and behave as expected. When self-adaptation removes existing components, a predefined test set may no longer be applicable due to changes in the program structure. Investigating techniques for dynamically updating regression tests after adaptation is therefore necessary to ensure that such approaches can be applied in practice. We propose a model-driven approach, based on change propagation, for synchronizing a runtime test model for a software system with the model of its component structure after dynamic adaptation. A workflow and meta-model supporting the approach, referred to as Test Information Propagation (TIP), are provided. To demonstrate TIP, a prototype was developed that simulates a reductive and an additive change to an autonomic, service-oriented healthcare application. To demonstrate that the TIP approach generalizes to the domain of up-to-date runtime testing for self-adaptive software systems, it was also applied to the self-adaptive JPacman 3.0 system. To measure the accuracy of the TIP engine, we compared its output with the work of a developer who manually identified the changes that should be performed to update the test model after self-adaptation occurs. The experiments show that TIP is highly accurate for reductive change propagation across self-adaptive systems, and promising results were achieved in simulating additive changes as well.

Item: Understanding Contextual Factors in Regression Testing Techniques (North Dakota State University, 2016). Anderson, Jeffrey Ryan.
The software regression testing techniques of test case reduction, selection, and prioritization are widely used and well researched in software development. They allow more efficient utilization of scarce testing resources in large projects, thereby increasing project quality at reduced cost. Many data sources and techniques have been researched, leaving software practitioners with no good way of choosing which data source or technique will be most appropriate for their project. This dissertation addresses this limitation. First, we introduce a conceptual framework for examining this area of research. Then, we perform a literature review to understand the current state of the art. Next, we perform a family of empirical studies to further investigate the thesis. Finally, we provide guidance to practitioners and researchers. In our first empirical study, we showed that advanced data mining techniques applied to an industrial product can improve the effectiveness of regression testing techniques. In our next study, we expanded on that research by learning a classification model. This research showed that attributes such as complexity and historical failures were the most effective metrics, due to a high occurrence of random test failures in the product studied. Finally, we applied the learning from the initial research and the systematic literature survey to develop novel regression testing techniques based on the attributes of an industrial product and showed these new techniques to be effective. These novel approaches included predicting performance faults from test data and customizing regression testing techniques based on usage telemetry. Further, we provide guidance to practitioners and researchers based on the findings from our empirical studies and the literature survey. This guidance will help practitioners and researchers more effectively employ and study regression testing techniques.

Item: Using Information Retrieval to Improve Integration Testing (North Dakota State University, 2012). Alazzam, Iyad.
Software testing is an important part of the software development process. Integration testing is an important and expensive level of the software testing process. Unfortunately, developers have limited time to perform integration testing and debugging, and integration testing becomes very hard as the combinations grow in size and the chains of calls from one module to another grow in number, length, and complexity. This research provides a new methodology for integration testing that reduces the number of test cases needed to a significant degree while retaining as much of its effectiveness as possible. The proposed approach determines the best order in which to integrate the classes currently available for integration, as well as the external method calls that should be tested, and in what order, for maximum effectiveness. Our approach limits the number of integration test cases, which depends mainly on the dependencies among modules and on the number of integrated classes in the application. The dependencies among modules are determined using an information retrieval technique called Latent Semantic Indexing (LSI). In addition, this research extends mutation testing for use in integration testing as a method to evaluate the effectiveness of the integration testing process. We have developed a set of integration mutation operators to support the development of integration mutation testing. We conducted experiments on ten Java applications; to evaluate the proposed methodology, we created mutants using the new mutation operators that exercise integration testing. Our experiments show that the test cases killed more than 60% of the created mutants.
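As a small illustration of the LSI step (estimating dependency-like similarity between modules from the vocabulary of their identifiers and comments), the sketch below builds a term-by-document matrix over four toy "class documents", truncates its SVD to two latent concepts, and compares the classes in concept space. The class names and documents are invented, and the integration-ordering algorithm and mutation operators are beyond this sketch.

```python
import numpy as np

# Toy "documents": the identifier/comment vocabulary of four classes.
docs = {
    "Cart":    "add item remove item total price",
    "Order":   "item price total checkout payment",
    "Logger":  "write log file level message",
    "Payment": "payment checkout card total",
}
vocab = sorted({w for d in docs.values() for w in d.split()})
A = np.array([[d.split().count(w) for d in docs.values()] for w in vocab],
             dtype=float)                    # term x document count matrix

U, S, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                        # keep two latent concepts
D = (np.diag(S[:k]) @ Vt[:k]).T              # each row: a document in concept space

def sim(i, j):
    a, b = D[i], D[j]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

names = list(docs)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        print(f"{names[i]}-{names[j]}: {sim(i, j):+.2f}")
```

Pairs that share vocabulary (Cart-Order, Order-Payment) score near +1 while Logger stays near 0, which is the kind of signal the approach uses to estimate dependencies among modules.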
Item: Using Learning Styles to Improve Software Requirements Quality: An Empirical Investigation (North Dakota State University, 2017). Goswami, Anurag.
The success of a software organization depends upon its ability to deliver a quality software product within time and budget constraints. To ensure the delivery of quality software, software inspections have proven to be an effective method that helps developers detect and remove problems from artifacts during the early stages of the software lifecycle. In spite of the reported benefits of inspection, the effectiveness of the inspection process is highly dependent on the varying abilities of individual inspectors. Software engineering research aimed at understanding the factors (e.g., education level, experience) that can positively impact individual and team inspection effectiveness has met with limited success. This dissertation leverages psychology research on Learning Styles (LS), a measure of an individual's preference for perceiving and processing information, to help understand and improve individual and team inspection performance. To gain quantitative and qualitative insights into the LSs of software inspectors, this dissertation reports results from a series of empirical studies in university and industry settings that evaluate the impact of LSs on individual and team inspection performance. This dissertation aims to help software managers create effective and efficient inspection teams based on the LSs and reading patterns of individual inspectors, thereby improving software quality.