Industrial & Manufacturing Engineering Doctoral Work

Permanent URI for this collection: hdl:10365/32562


Recent Submissions

  • Item
    Implementing Industry 4.0: a study of socio-technical readiness among manufacturers in Minnesota and North Dakota
    (North Dakota State University, 2024) Roth, Katherine
    The implementation of Industry 4.0 has become increasingly prevalent in the manufacturing industry since its inception. With the introduction of these newer technologies, changes in personnel and organizational structures occur. The purposeful joint optimization of the social and technical factors of an organization is imperative to the successful adoption of Industry 4.0. Socio-technical systems theory therefore addresses the holistic design of the human, technology, and organization subsystems of the manufacturing process and their interdependencies. This dissertation investigates the progress made toward implementing Industry 4.0 by small, medium, and large manufacturers in Minnesota and North Dakota. The outcomes of two surveys conducted among manufacturers in Minnesota and North Dakota are analyzed, and the results are compared to national and international data. This research identifies potential challenges, as well as advantages, in the current socio-economic landscape that may be either impeding or encouraging the development of a competitive and sustainable manufacturing business. The first survey posed questions based on a socio-technical theory framework, Industry 4.0, and productivity outcomes. It provided insights into how regional manufacturers were using the socio-technical design framework to integrate Industry 4.0 into their organizational design and extract value, such as increased productivity. The implementation of flexible work arrangements in the modern work environment has also increased in recent years, and the joint optimization of social and technical factors within an organization is necessary for the successful adoption of hybrid work environments. The second survey, conducted among a group of small, medium, and large manufacturers in Minnesota and North Dakota, assessed the level of socio-technical readiness among regional manufacturers.
The survey posed questions based on socio-technical design, digital maturity, organizational learning, responsible autonomy, leadership, communication strategies, and reduced work-week schedules. It provided insights into how these critical factors support sustainability initiatives, such as reduced work-week schedules. Based on the survey results, a socio-technical strengths, weaknesses, opportunities, and threats (SWOT) analysis framework was proposed to guide organizations through the Industry 4.0 implementation process, assess opportunities for the reduction of work hours, and facilitate strategic enterprise-wide buy-in from employees and diverse stakeholders.
  • Item
    A Study on Deep Learning for Prognostics and Health Management Applications: An Evolutionary Convolutional Long Short-Term Memory Deep Neural Network Data-Driven Model for Prognostics of Aircraft Gas Turbine
    (North Dakota State University, 2022) Khumprom, Phattara
    The fundamental concept of prognostics and health management (PHM) within the scope of condition-based maintenance (CBM) is to find an approach to evaluate system health and predict its remaining useful life (RUL). Many methods and algorithms have been proposed for PHM modeling, most of which have been proven to perform relatively well. One of the leading approaches in the current data-driven technology era is deep learning, which is based on the concept of multiple hidden layers in a neural network. RUL prediction is an important part of PHM, the science aimed at increasing the reliability of a system and, in turn, reducing maintenance costs and potential failures. The majority of PHM models proposed during the past few years have shown a significant increase in the number of systems that are data-driven. While more complex data-driven models are often associated with higher accuracy, there is a corresponding need to reduce model complexity. One possible approach to reducing model complexity is to apply feature (attribute or variable) selection and dimensionality-reduction methods before the model training process. In this work, the effectiveness of multiple search-based methods that seek the best feature set for model training was investigated, including filter and wrapper feature selection methods (correlation analysis, Relief, forward/backward selection, and others), along with Principal Component Analysis (PCA) as a dimensionality-reduction method. A basic deep learning algorithm, the feedforward artificial neural network (FFNN), was used as a benchmark modeling algorithm. It is believed that all of these approaches can also be applied to the prognostics of an aircraft engine. Aircraft engine data from the NASA Ames prognostics data repository was used to test the effectiveness of the filter and wrapper feature selection methods.
The findings show that applying feature selection methods helps to improve overall model accuracy by 3% to 5% compared to other existing works and significantly reduces model complexity by using only 7 of the 21 input nodes for the deep learning models.
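The dissertation itself contains no code, but the dimensionality-reduction step it describes can be illustrated with a short sketch. The snippet below projects a synthetic 21-channel sensor matrix (the C-MAPSS engine data has 21 sensor channels) onto 7 principal components using plain NumPy; the random data and the component count are stand-ins for illustration, not the study's actual pipeline.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project standardized data onto its top principal components."""
    # Standardize each sensor channel (zero mean, unit variance).
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    # Eigen-decompose the covariance matrix of the standardized data.
    cov = np.cov(Xs, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]           # largest variance first
    components = eigvecs[:, order[:n_components]]
    return Xs @ components

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 21))                  # 500 cycles x 21 sensor channels
Z = pca_reduce(X, n_components=7)               # keep 7 features, as in the study
print(Z.shape)                                  # (500, 7)
```

In the study's setting, `Z` would replace the raw sensor matrix as the input to the FFNN, shrinking the input layer from 21 nodes to 7.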
  • Item
    Advanced Numerical Modeling in Manufacturing Processes
    (North Dakota State University, 2022) Dey, Arup
    In manufacturing applications, large amounts of data can be collected through experimental studies and/or sensors. This collected data is vital to improving process efficiency, scheduling maintenance activities, and predicting target variables. This dissertation explores a wide range of numerical modeling techniques that use data for manufacturing applications. Ignoring uncertainty and the physical principles of a system are shortcomings of existing methods; accordingly, different methods are proposed here to overcome these shortcomings by incorporating uncertainty and physics-based knowledge. In the first part of this dissertation, artificial neural networks (ANNs) are applied to develop a functional relationship between input and target variables and to optimize process parameters. The second part evaluates robust response surface optimization (RRSO) to quantify different sources of uncertainty in numerical analysis. Additionally, a framework based on the Bayesian network (BN) approach is proposed to support decision-making. Due to various uncertainties, estimating an interval or a probability distribution is often more helpful than deterministic point-value estimation. Thus, the Monte Carlo (MC) dropout-based interval prediction technique is explored in the third part of this dissertation. A conservative interval prediction technique for linear and polynomial regression models is also developed using linear optimization. Applications of different data-driven methods in manufacturing are useful for analyzing situations, gaining insights, and making essential decisions. However, predictions made by data-driven methods may be physically inconsistent. Thus, in the fourth part of this dissertation, a physics-informed machine learning (PIML) technique is proposed to combine physics-based knowledge with collected data, improving prediction accuracy and generating physically consistent outcomes.
Each numerical analysis section is presented with case studies that involve conventional or additive manufacturing applications. Based on the various case studies carried out, it can be concluded that advanced numerical modeling methods are essential to incorporate in manufacturing applications to gain advantages in the era of Industry 4.0 and Industry 5.0. Although the case studies for the advanced numerical modeling proposed in this dissertation are presented only in manufacturing-related applications, the methods are not limited to manufacturing and can also be extended to other data-driven engineering and system applications.
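The MC dropout interval prediction idea mentioned above can be illustrated with a rough sketch: keep dropout active at prediction time, run many stochastic forward passes, and read a prediction interval off the empirical spread. The toy network below uses untrained, randomly fixed weights purely for illustration; in practice the network would be trained and the interval calibrated, and none of these sizes or rates come from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy one-hidden-layer regressor with fixed (pretend "trained") weights.
W1 = rng.normal(size=(1, 32)); b1 = np.zeros(32)
W2 = rng.normal(size=(32, 1)); b2 = np.zeros(1)

def predict_with_dropout(x, p_drop=0.2):
    h = np.maximum(x @ W1 + b1, 0.0)             # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop          # Bernoulli dropout mask
    h = h * mask / (1.0 - p_drop)                # inverted-dropout scaling
    return h @ W2 + b2

x = np.array([[0.5]])
samples = np.array([predict_with_dropout(x)[0, 0] for _ in range(200)])
# Empirical 95% prediction interval from the stochastic forward passes.
lower, upper = np.percentile(samples, [2.5, 97.5])
```

Each forward pass drops a different random subset of hidden units, so the 200 predictions scatter, and their percentiles give an interval rather than a single point estimate.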
  • Item
    Modeling and Solving Multi-Product Multi-Layer Location-Routing Problems
    (North Dakota State University, 2011) Hamidi, Mohsen
    Distribution is a very important component of logistics and supply chain management. The Location-Routing Problem (LRP) simultaneously takes location, allocation, and vehicle routing decisions into consideration to design an optimal distribution network. A multi-layer, multi-product LRP is even more complex, as it deals with decisions at multiple layers of a distribution network where multiple products are transported within and between layers. This dissertation focuses on modeling and solving complicated four-layer, multi-product LRPs, which have not been tackled before. The four-layer LRP represents a multi-product distribution network consisting of plants, central depots, regional depots, and customers, and integrates location, allocation, vehicle routing, and transshipment problems. In the modeling phase, the structure, assumptions, and limitations of the distribution network are defined and a mathematical optimization programming model that can be used to obtain optimal solutions is developed. Since the mathematical model can obtain the optimal solution only for small problems, metaheuristic algorithms are developed in the solving phase for large problems. GRASP (Greedy Randomized Adaptive Search Procedure), probabilistic tabu search, local search techniques, the Clarke-Wright savings algorithm, and a node ejection chains algorithm are combined to solve two versions of the four-layer LRP. Results show that the metaheuristic can solve the problem effectively in terms of computational time and solution quality. The presented four-layer LRP, which considers realistic assumptions and limitations such as multiple products, limited plant production capacity, limited depot and vehicle capacity, and limited traveling distances, enables companies to mimic real-world limitations and obtain realistic results.
The main objective of this research is to develop solution algorithms that can solve large-size multi-product multi-layer LRPs and produce high-quality solutions in a reasonable amount of time.
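One ingredient of the metaheuristic named above, the Clarke-Wright savings algorithm, can be sketched for the classic single-depot case; the dissertation applies it inside a much richer four-layer setting, and the tiny instance below is invented. This simplified version tries only one merge orientation (tail of one route to head of another).

```python
import itertools
import math

def clarke_wright(depot, customers, demand, capacity):
    """Greedy route merging by the Clarke-Wright savings criterion."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # Start with one out-and-back route per customer.
    routes = {i: [i] for i in customers}
    # Saving s(i, j) = d(0, i) + d(0, j) - d(i, j), tried largest first.
    savings = sorted(
        ((dist(depot, customers[i]) + dist(depot, customers[j])
          - dist(customers[i], customers[j]), i, j)
         for i, j in itertools.combinations(customers, 2)),
        reverse=True)

    for s, i, j in savings:
        ri, rj = routes.get(i), routes.get(j)
        if ri is None or rj is None or ri is rj:
            continue
        # Simplified: merge only when i ends a route and j starts one,
        # and the combined load fits the vehicle capacity.
        if ri[-1] == i and rj[0] == j and \
           sum(demand[k] for k in ri + rj) <= capacity:
            merged = ri + rj
            for k in merged:
                routes[k] = merged
    unique = []
    for r in routes.values():
        if r not in unique:
            unique.append(r)
    return unique

depot = (0.0, 0.0)
customers = {1: (0.0, 10.0), 2: (1.0, 10.0), 3: (10.0, 0.0)}
demand = {1: 4, 2: 4, 3: 4}
solution = clarke_wright(depot, customers, demand, capacity=8)
print(solution)  # → [[1, 2], [3]]
```

Customers 1 and 2 sit close together far from the depot, so merging them yields a large saving; customer 3 cannot join without exceeding the vehicle capacity of 8.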
  • Item
    Predictive Reliability Analysis and Maintenance Planning of Complex Systems Working under Dynamic Operating Conditions
    (North Dakota State University, 2022) Forouzandeh Shahraki, Ameneh
    Predictive analytics has multiple facets that range from failure predictability and optimal asset management to high-level managerial insights. Predicting the failure time of assets and estimating their reliability through an efficient prognostics and reliability assessment framework allows for appropriate maintenance actions to avoid catastrophic failures and reduce maintenance costs. Most systems used in the manufacturing and service sectors are composed of multiple interdependent components. Moreover, these systems experience dynamic operating conditions during their life. The dynamic operating conditions and the system complexity pose three challenging questions: how to perform prognostics and reliability assessment of a complex multi-component system, how to perform prognostics and reliability assessment of a system functioning under dynamic operating conditions, and how to use condition-based and reliability assessment data to find the optimal maintenance strategy for complex systems. This dissertation investigates five tasks to address these challenges. (1) Capture the stochastic dependency between interdependent components of a system through a continuous-time Markov process with transition rates depending on the state of all components of the system; this technique helps obtain an accurate estimate of system reliability. (2) Propose a framework based on instance-based learning to predict the remaining useful life (RUL) of a complex system; this technique can be used for highly complex systems without requiring prior expertise on the system's behavior. (3) Incorporate time-varying operating conditions in the prognostics framework through a proportional hazards model with external covariates dependent on the operating condition and internal covariates dependent on the degradation state of the system.
(4) Propose a prognostic framework based on deep learning to predict the RUL of a system working in dynamic operating conditions; this framework has two main steps, first identifying the degradation onset point and then developing a Long Short-Term Memory model to predict the RUL. (5) Propose an efficient algorithm for reliability analysis of a phased-mission system, whose behavior changes at different phases during the mission; this technique accounts for imperfect fault coverage of the components to obtain an accurate reliability analysis.
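Task (1)'s central idea, transition rates that depend on the state of all components, can be illustrated with a small Monte Carlo sketch. The two-component load-sharing system below (where one failure doubles the survivor's failure rate), its rates, and the mission time are all invented for illustration; they are not the dissertation's model.

```python
import random

random.seed(42)

def simulate_system(t_mission, base_rate=0.1, load_factor=2.0):
    """Simulate a 2-component parallel system in which the failure of one
    component raises the failure rate of the survivor (load sharing)."""
    t, alive = 0.0, [True, True]
    while t < t_mission and any(alive):
        n_up = sum(alive)
        # Per-component rate depends on the overall system state.
        rate = base_rate * (load_factor if n_up == 1 else 1.0)
        t += random.expovariate(rate * n_up)   # time to next failure event
        if t < t_mission:
            up = [i for i, a in enumerate(alive) if a]
            alive[random.choice(up)] = False
    return any(alive)  # system survives if at least one component is up

n = 20000
reliability = sum(simulate_system(5.0) for _ in range(n)) / n
```

For these rates the mission reliability works out analytically to 2e^{-1} ≈ 0.736, so the simulation estimate should land close to that; with independent components (load factor 1) it would be noticeably higher, which is exactly the dependency effect the dissertation's Markov model captures.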
  • Item
    Some Studies on Reliability Analysis of Complex Cyber-Physical Systems
    (North Dakota State University, 2021) Davila Frias, Alex Vicente
    Cyber-physical systems (CPSs), a term coined in 2006, refers to the integration of computation with physical processes. Modern critical infrastructures in particular are examples of CPSs, such as smart electric power grids, intelligent water distribution networks, and intelligent transportation systems. CPSs provide critical services that have a great impact on a nation's economy, security, and health; therefore, reliability is a primary metric. Nevertheless, studying the reliability of complex CPSs demands understanding the joint dynamics of physical processes, hardware, software, and networks. In the present research, a series of studies is proposed to contribute to the challenging reliability analysis of CPSs by considering the reliability of physical components, hardware/software interactions, and the overall reliability of CPSs modeled as networks. First, emerging technologies, such as flexible electronics combined with data analytics and artificial intelligence, are now part of modern CPSs. In the present work, accelerated degradation testing (ADT) design and data analysis are considered for flexible hybrid electronic (FHE) devices, which can be part of the physical components or sensors of a CPS. Second, an important aspect of CPSs is the interaction between hardware and software. Most existing work assumes independence between hardware and software; in this work, a probabilistic approach is proposed to model such interactions using a Markov model and Monte Carlo simulation. Third, networks have been widely used to model CPS reliability because both have interconnected components. Estimating network reliability using traditional artificial neural networks (ANNs) has emerged as a promising alternative to classical exact NP-hard algorithms; however, modern machine learning techniques have not been fully studied as reliability estimators for networks.
This dissertation proposes the use of advanced deep learning (DL) techniques, such as convolutional neural networks (CNNs) and deep neural networks (DNNs), for the all-terminal network reliability estimation problem. DL techniques provide higher accuracy in reliability prediction as well as the possibility of dispensing with computationally expensive inputs such as the reliability upper bound. In addition, most previous works assume binary states for the components of networks, whereas the present work incorporates a Bayesian method to consider degradation in network reliability estimation and parameter updating.
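For context on the all-terminal reliability problem named above, a crude Monte Carlo baseline (one classical alternative to both exact algorithms and learned estimators) can be sketched as follows. The 4-node ring and the edge reliability of 0.9 are illustrative choices only, not from the dissertation.

```python
import random

random.seed(7)

def all_terminal_reliability(nodes, edges, p_edge, n_samples=20000):
    """Crude Monte Carlo estimate of the probability that the surviving
    edges keep every node connected (all-terminal reliability)."""
    def connected(up_edges):
        # Union-find over the edges that survived this sample.
        parent = {v: v for v in nodes}
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]   # path halving
                v = parent[v]
            return v
        for u, v in up_edges:
            parent[find(u)] = find(v)
        return len({find(v) for v in nodes}) == 1

    hits = 0
    for _ in range(n_samples):
        up = [e for e in edges if random.random() < p_edge]
        hits += connected(up)
    return hits / n_samples

# 4-node ring: exact all-terminal reliability is p^4 + 4*p^3*(1 - p),
# i.e. about 0.9477 at p = 0.9 (the ring survives at most one edge failure).
nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
est = all_terminal_reliability(nodes, edges, p_edge=0.9)
```

Sampling like this scales poorly for highly reliable networks, which is part of the motivation for the faster learned estimators the dissertation studies.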
  • Item
    Study of Organizational Transformation from Socio-Technical Perspective
    (North Dakota State University, 2020) Rahaman, Md Mahabubur
    Organizations are constantly striving for effective and flexible means of managing challenges due to globalization and increasing customer expectations. Many in the business community have attempted to implement the Toyota Production System, or lean, in their organizations to address these challenges, with the intent in many cases of creating a more flexible, effective, and efficient organization that meets the challenges of survival under external and internal pressures. However, the existing body of knowledge on lean is dispersed and diverse with respect to the application and implementation of lean tools and practices, making it difficult for researchers and practitioners to gain a real grasp of the topic. This research not only organizes the existing work on implementing lean but also documents the challenges of implementation. The primary goal of this research is to study organizational change and lean transformation from a socio-technical perspective. In the process of discovery and empirical research, this work first identifies the challenges of organizational lean transformation. Second, it discovers organizational constructs from a socio-technical perspective that have relevance to organizational challenges and lean transformation. Third, it proposes a hypothesized model and creates a measurement model for predicting organizational change and lean transformation. Finally, this research tests a set of hypotheses. An exploratory factor analysis (EFA) and subsequently a confirmatory factor analysis (CFA) were performed to identify the significance of latent organizational factors from a socio-technical perspective, as well as to provide a theoretical model based on model fit indices using path analysis (PA).
This research contributes a meaningful framework for organizational change and lean transformation and develops an instrument for measuring them, for analyzing gaps and identifying challenges in lean implementation from a socio-technical perspective at the organizational level.
  • Item
    Two Applications of Combinatorial Branch-and-Bound in Complex Networks and Transportation
    (North Dakota State University, 2020) Rasti, Saeid
    In this dissertation, we show two significant applications of combinatorial branch-and-bound as an exact solution methodology for combinatorial optimization problems. In the first problem, we propose a set of new group centrality metrics and show their performance in estimating protein importance in protein-protein interaction networks. The centrality metrics introduced here are extensions of well-known nodal metrics (degree, betweenness, and closeness) to a set of nodes that is required to induce a specific pattern. The structures investigated range from the "stricter" induced stars and cliques to a "looser" definition of a representative structure. We derive the computational complexity of each of the newly proposed metrics. Then, we provide mixed-integer programming formulations to solve the problems exactly; due to the computational complexity of the problem and the sheer size of protein-protein interaction networks, using a commercial solver with these formulations is not always a viable option. Hence, we also propose a combinatorial branch-and-bound approach to solve the problems introduced. Finally, we conclude this work with a presentation of the performance of the proposed centrality metrics in identifying essential proteins in Helicobacter pylori. In the second problem, we introduce the asymmetric probabilistic minimum-cost Hamiltonian cycle problem (APMCHCP), where arcs and vertices in the graph may fail. The APMCHCP has applications in many emerging areas, such as post-disaster recovery, electronic circuit design, and security maintenance of wireless sensor networks. For each vertex, we define a chance constraint to guarantee that the probability of arriving at the vertex is greater than or equal to a given threshold. Four mixed-integer programming (MIP) formulations are proposed for modeling the problem, including two direct formulations and two recursive formulations.
A combinatorial branch-and-bound (CBB) algorithm is proposed for solving the APMCHCP, with data preprocessing steps, feasibility rules, and approaches for finding upper and lower bounds. In the numerical experiments, the CBB algorithm is compared with the MIP formulations on a test bed of two popular benchmark instance sets. The results show that the proposed CBB algorithm significantly outperforms the Gurobi solver in terms of both the size of instances solved to optimality and the computing time.
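The per-vertex chance constraint can be made concrete with a small sketch. Under the simplifying assumption that every visited vertex and traversed arc survives independently (an illustration, not necessarily the dissertation's exact probability model), the probability of arriving at the k-th vertex of a cycle is the product of the survival probabilities along the prefix; the numbers below are invented.

```python
def arrival_probabilities(cycle, p_vertex, p_arc):
    """Probability of reaching each successive vertex of a Hamiltonian
    cycle when visited vertices and traversed arcs fail independently."""
    prob, out = 1.0, {}
    for k in range(1, len(cycle)):
        u, v = cycle[k - 1], cycle[k]
        prob *= p_vertex[u] * p_arc[(u, v)]  # must leave u and cross (u, v)
        out[v] = prob
    return out

cycle = [0, 1, 2, 0]                         # start and end at vertex 0
p_vertex = {0: 0.95, 1: 0.95, 2: 0.95}
p_arc = {(0, 1): 0.98, (1, 2): 0.98, (2, 0): 0.98}
probs = arrival_probabilities(cycle, p_vertex, p_arc)
# Chance constraint: every arrival probability must meet the threshold.
feasible = all(p >= 0.80 for p in probs.values())
```

Because the arrival probability shrinks multiplicatively along the tour, vertices placed late in the cycle are the ones whose chance constraints bind first.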
  • Item
    Designing Bio-Ink for Extrusion Based Bio-Printing Process
    (North Dakota State University, 2019) Habib, MD Ahasan
    Tissue regeneration using in-vitro scaffolds has become a vital means of mimicking the in-vivo counterpart, due to the insufficiency of animal models for predicting the applicability of drugs and other physiological behavior. Three-dimensional (3D) bio-printing is an emerging technology for reproducing living tissue through the controlled allocation of biomaterials and cells. Due to their bio-compatibility, natural hydrogels are commonly considered as the scaffold material in the bio-printing process. However, achieving a repeatable scaffold structure with good printability and shape fidelity is a challenge with hydrogel materials due to the weak bonding in their polymer chains. Additionally, there are intrinsic limitations to bio-printing hydrogels: cell proliferation and colonization are limited because cells immobilized within hydrogels do not spread, stretch, and migrate to generate new tissue. The goal of this research is to develop a bio-ink suitable for the extrusion-based bio-printing process to construct 3D scaffolds. In this research, a novel hybrid hydrogel is designed, and systematic quantitative characterizations are conducted to validate its printability, shape fidelity, and cell viability. The measured and quantified outcomes demonstrate the favorable printability and shape fidelity of the proposed material. The research focuses on factors associated with the pre-printing, printing, and post-printing behavior of the bio-ink and its biology. With the proposed hybrid hydrogel, a 2 cm tall acellular 3D scaffold is fabricated with proper shape fidelity. The cell viability of the proposed material is tested with multiple cell lines, i.e., BxPC3, prostate stem cancer cells, HEK 293, and Porc1 cells, and about 90% viability after 15-day incubation is achieved. The designed hybrid hydrogel demonstrates excellent behavior as a bio-ink for the bio-printing process, reproducing scaffolds with proper printability, shape fidelity, and high cell survivability.
Additionally, the characterization techniques outlined here open up a novel avenue for a quantifiable bio-ink assessment framework in lieu of qualitative evaluation.
  • Item
    Optimization of Regional Empty Container Supply Chains to Support Future Investment Decisions for Developing Inland Container Terminals
    (North Dakota State University, 2020) Wadhwa, Satpal Singh
    Containerized grain shipping has been increasingly used as a shipment option by U.S. exporters. Continued evolution of, and investment in, optimized multimodal operations is key to the continued growth of the container transportation alternative. Agriculture is a leading sector in the Midwest economy, and grain production is particularly important to the natural resource-based economy of the upper Midwest. Increasing volumes of grain are being shipped in containers because containers offer opportunities to lower logistics costs and broaden marketing options. Exporters are put at a competitive disadvantage when they are unable to obtain containers at a reasonable cost; consequently, exporters incur large costs to acquire empty containers, which are repositioned empty from ports and intermodal hubs. When import and export customers are located inland, empty repositioning generates excessive unproductive empty miles. To mitigate this shortage of empty containers and avoid excessive empty vehicle miles, this research proposes to strategically establish inland depots in regions with sufficiently high agricultural trade volumes. Mathematical models are formulated to evaluate the proposed system and determine the optimal number and location of inland depots in the region under varying demand conditions. An agent-based model simulates the complex regional empty container supply chain based on rational individual decisions. The model provides insight into the role that establishing new depot facilities has in reducing empty repositioning miles while increasing grain exports in the region. Model parameters are used to simulate the impact of train frequency and velocity, truck and rail drayage, demand changes at elevators, and depot capacity. For the proposed system, stakeholders will be able to quantify the economic impacts of discrete factors such as adjustments of rail and truck rates and the impact of elevator storage capacity.
The initial model is limited to a single state (MN) and export market. It could be enhanced into a flexible logistical scenario assessment tool to support investment decisions for improving the efficiency of multimodal transportation. The model can be applied similarly to other commodities and/or used to analyze the potential for new intermodal points.
  • Item
    Form and Functionality of Additively Manufactured Parts with Internal Structure
    (North Dakota State University, 2019) Ahsan, AMM Nazmul
    Tool-less additive manufacturing (AM), or 3D printing (3DP), processes use incremental consolidation of feedstock materials to construct a part. Layer-by-layer AM processes can achieve spatial material distribution and a desired microstructure pattern with high resolution. These unique characteristics of AM can bring custom-made form and tailored functionality to the same object. However, incorporating form and functionality poses its own challenges in both the design and manufacturing domains. This research focuses on designing manufacturable topology by marrying form and functionality in additively manufactured parts using infill structures. To realize this goal, this thesis presents a systematic design framework that focuses on reducing the gap between the design and manufacture of complex architectures. The objective is to develop a design methodology for lattice infill and thin-shell structures suitable for additive manufacturing processes. In particular, custom algorithmic approaches have been developed to adapt existing porous structural patterns to both the interior and exterior of objects, considering application-specific functionality requirements. The object segmentation and shell perforation methodology proposed in this work ensures the manufacturability of large-scale thin-shell or hollowed objects and incorporates tailored part functionality. Furthermore, a computational design framework developed for tissue scaffold structures incorporates the actual structural heterogeneity of natural bones, obtained from their medical images, to facilitate the tissue regeneration process. Manufacturability is considered in the design process, and performance is measured after fabrication. Thus, the present thesis demonstrates how the form of porous structures can be adapted to meet the functionality requirements of the application as well as fabrication constraints.
Also, this work bridges the design framework (virtual) and the manufacturing platform (realization) through intelligent data management, which facilitates a smooth transition of information between the two ends.
  • Item
    Assessing Reliability of Highly Reliable Products Using Accelerated Degradation Test Design, Modeling, and Bayesian Inference
    (North Dakota State University, 2019) Limon, Shah Mohammad
    Accelerated degradation test methods have proven to be a very effective approach for quickly evaluating the reliability of highly reliable products. However, modeling accelerated degradation test data to estimate reliability at normal operating conditions is still a challenging task, especially in the presence of multiple stress factors. In this study, a nonstationary gamma process is considered to model the degradation behavior, assuming the strict monotonicity and non-negative nature of product deterioration. It is further assumed that both gamma process parameters are stress dependent. A maximum likelihood method is used for model parameter estimation. The case study results indicate that traditional models, which assume only the shape parameter to be stress dependent, significantly underestimate product reliability at normal operating conditions. This study further reveals that the scale parameter at higher stress levels is very close to the traditional constant assumption; at normal operating conditions, however, the scale parameter differs significantly from the traditional constant value. This difference leads to a larger difference in the reliability and lifetime estimates provided by the proposed approach. A Monte Carlo simulation with Bayesian updating is incorporated to update the gamma parameters and reliability estimates when additional degradation data become available. A generalized reliability estimation framework for using ADT data is also presented in this work. Further, an optimal constant-stress accelerated degradation test plan is presented considering the gamma process. The optimization criterion is to minimize the asymptotic variance of the maximum likelihood estimator of the lifetime at operating conditions under a total experimental cost constraint. A heuristic approach, specifically a genetic algorithm, is implemented to solve the model.
Additionally, a sensitivity analysis is performed, which reveals that an increasing budget leads to a longer test duration with a smaller sample size. It also reduces the asymptotic variance of the estimation, which is intuitive, as a larger budget increases the amount of degradation information generated and helps increase estimation accuracy. The overall reliability assessment methodology and the test design are demonstrated using carbon-film resistor degradation data.
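A nonstationary gamma degradation process of the kind described above can be sketched by simulation: the increment over [t, t+dt] is gamma-distributed with a time-dependent shape and a fixed scale, and a unit fails when its cumulative degradation first crosses a threshold. The parameters, threshold, and horizon below are invented for illustration; they are not those of the carbon-film resistor study.

```python
import random

random.seed(3)

def gamma_path(t_max, dt, a, b, scale):
    """Nonstationary gamma degradation: the increment over [t, t+dt] is
    Gamma-distributed with shape a*((t+dt)**b - t**b) and a fixed scale,
    so the path is strictly increasing and non-negative."""
    x, t, path = 0.0, 0.0, [0.0]
    while t < t_max:
        shape = a * ((t + dt) ** b - t ** b)
        x += random.gammavariate(shape, scale)
        t += dt
        path.append(x)
    return path

def first_passage(path, dt, threshold):
    """Time at which cumulative degradation first crosses the threshold."""
    for k, x in enumerate(path):
        if x >= threshold:
            return k * dt
    return None  # did not fail within the horizon

lifetimes = []
for _ in range(2000):
    p = gamma_path(t_max=100.0, dt=1.0, a=0.5, b=1.2, scale=0.1)
    t_fail = first_passage(p, 1.0, threshold=10.0)
    if t_fail is not None:
        lifetimes.append(t_fail)
mean_life = sum(lifetimes) / len(lifetimes)
```

Because the mean degradation here grows as 0.05·t^1.2, the threshold of 10 is reached near t ≈ 83 on average; repeating the simulation at several stress-dependent parameter settings is one way to visualize how the estimated lifetime shifts with stress.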
  • Item
    Integrated Projection and Regression Models for Monitoring Multivariate Autocorrelated Cascade Processes
    (North Dakota State University, 2014) Khan, Anakaorn
    This dissertation presents a comprehensive dual-monitoring methodology for multivariate autocorrelated cascade processes using principal component analysis and regression. Principal Component Analysis is used to alleviate multicollinearity among input process variables and reduce the dimension of the variables, and an integrated principal component selection rule is proposed to reduce the number of input variables. An autoregressive time series model is imposed on the time-correlated output variable, which depends on many multicorrelated process input variables. A generalized least squares principal component regression is used to describe the relationship between product and process variables under the autoregressive regression error model. A combined monitoring scheme is proposed: a residual-based EWMA control chart applied to the product characteristics, and MEWMA control charts applied to the multivariate autocorrelated cascade process characteristics. The dual EWMA and MEWMA control charts have an advantage over the conventional residual-type control chart applied to the residuals of the principal component regression, in that they monitor both product and process characteristics simultaneously. The EWMA control chart is used to increase detection performance, especially in the case of small mean shifts. The MEWMA chart is applied to the set of variables selected from the first principal component, with the aim of increasing sensitivity in detecting process failures. The dual implementation of control charts for product and process characteristics enhances both the detection and prediction performance of the monitoring system for multivariate autocorrelated cascade processes. The proposed methodology is demonstrated through an example of the sugar-beet pulp drying process, and a general guideline for controlling multivariate autocorrelated processes is also developed.
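The residual-based EWMA chart at the heart of the scheme can be sketched in a few lines. The smoothing constant λ = 0.2, limit width L = 3, and simulated residual streams below are illustrative choices, not the dissertation's sugar-beet data; the limits use the asymptotic (steady-state) form.

```python
import random

def ewma_chart(residuals, lam=0.2, L=3.0, sigma=1.0):
    """Flag points where the EWMA statistic z_t = lam*x_t + (1-lam)*z_{t-1}
    leaves the asymptotic limits +/- L*sigma*sqrt(lam/(2-lam))."""
    limit = L * sigma * (lam / (2.0 - lam)) ** 0.5
    z, signals = 0.0, []
    for t, x in enumerate(residuals):
        z = lam * x + (1.0 - lam) * z
        if abs(z) > limit:
            signals.append(t)
    return signals, limit

random.seed(11)
in_control = [random.gauss(0.0, 1.0) for _ in range(50)]
shifted = [random.gauss(1.0, 1.0) for _ in range(100)]  # sustained 1-sigma shift
signals, limit = ewma_chart(in_control + shifted)
detected = any(t >= 50 for t in signals)                # shift gets flagged
```

With λ = 0.2 the limit works out to exactly ±1.0σ here, well inside the ±3σ band of a Shewhart chart, which is why the EWMA picks up a small sustained shift that individual-point rules would usually miss.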
  • Item
    Optimization Models for Scheduling and Rescheduling Elective Surgery Patients Under the Constraint of Downstream Units
    (North Dakota State University, 2013) Erdem, Ergin
    Healthcare is a unique industry in terms of the requirements and services provided to patients. Currently, the healthcare industry faces the challenges of reducing cost while improving the quality and accessibility of service. The operating room is one of the biggest cost and revenue centers in any healthcare facility. In this study, we develop optimization models and corresponding solution strategies for scheduling and rescheduling elective patients for surgical operations in the operating room. In the first stage, the scheduling of elective patients based on the availability of resources is optimized. The resources considered are the operating rooms, surgical teams, and the beds/equipment in the downstream post-anesthesia care units (PACUs). Discrete distributions governing surgical durations for selected surgical specialties are developed to represent the variability in the duration of surgery. Based on these distributions, a stochastic mathematical programming model is developed. As problem size increases, the model can no longer be solved by a leading commercial optimization solver, so a heuristic solution approach based on a genetic algorithm is also developed. The genetic algorithm is found to provide results close to those of the commercial solver in terms of solution quality. For large problem sizes, where the commercial solver is unable to solve the problem due to memory restrictions, the genetic-algorithm-based approach finds a solution within a reasonable amount of computation time. In the second stage, the rescheduling of elective patients due to the sudden arrival of emergency patients is considered. A mathematical programming model is developed for minimizing the costs of expanding current capacity and of the disruption caused by including the emergency patient. 
Also, two different solution approaches are brought forward, one using the commercial solver and the other based on a genetic algorithm. The genetic-algorithm-based approach can always make efficient decisions regarding whether to accept emergency patients and how to minimize the reshuffling of the original elective surgery schedule.
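The genetic-algorithm idea can be illustrated with a deliberately tiny toy version: chromosomes assign surgeries to operating rooms, and fitness is total overtime beyond the regular day. The surgery durations, room count, and GA parameters below are hypothetical, and the sketch ignores the surgical-team and PACU constraints of the actual model.

```python
import random

# Hypothetical data: estimated durations (minutes) of ten elective surgeries
# to be assigned to three operating rooms with a 480-minute regular day.
DURATIONS = [120, 90, 200, 60, 150, 45, 180, 75, 135, 100]
N_ROOMS, DAY_LEN = 3, 480

def overtime(assignment):
    """Total minutes scheduled beyond the regular day, summed over rooms."""
    load = [0] * N_ROOMS
    for surgery, room in enumerate(assignment):
        load[room] += DURATIONS[surgery]
    return sum(max(0, l - DAY_LEN) for l in load)

def genetic_schedule(pop_size=40, generations=200, mut_rate=0.2, seed=0):
    """Toy GA: truncation selection, one-point crossover, and random
    room-reassignment mutation over surgery-to-room chromosomes."""
    rng = random.Random(seed)
    pop = [[rng.randrange(N_ROOMS) for _ in DURATIONS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=overtime)
        survivors = pop[: pop_size // 2]          # keep the fitter half
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(DURATIONS))  # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < mut_rate:             # mutate one gene
                child[rng.randrange(len(child))] = rng.randrange(N_ROOMS)
            children.append(child)
        pop = survivors + children
    return min(pop, key=overtime)

best = genetic_schedule()
```

Because the chromosome is just a room index per surgery, crossover and mutation always produce feasible assignments, which keeps the toy GA simple.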
  • Item
    Stochastic Optimization of Sustainable Industrial Symbiosis Based Hybrid Generation Bioethanol Supply Chains
    (North Dakota State University, 2013) Gonela, Vinay
    Bioethanol is becoming increasingly attractive for reasons of energy security, diversity, and sustainability. As a result, the use of bioethanol for transportation purposes has been encouraged extensively. However, how to design a bioethanol supply chain that is both sustainable and robust remains an open question. Therefore, this research focuses on designing a bioethanol supply chain that is: 1) sustainable with respect to economic, environmental, social, and energy-efficiency aspects; and 2) robust to uncertainties such as bioethanol price, bioethanol demand, and biomass yield. We first propose a decision framework to design an optimal bioenergy-based industrial symbiosis (BBIS) under certain constraints. In a BBIS, traditionally separate plants collocate in order to efficiently utilize resources, reduce wastes, and increase profits for the entire BBIS and for each player in it. The decision framework combines linear programming models and a large-scale mixed integer linear programming model to determine: 1) the best possible combination of plants to form the BBIS; and 2) the optimal multi-product network of various materials in the BBIS, such that the bioethanol production cost is reduced. Secondly, a sustainable hybrid generation bioethanol supply chain (HGBSC), which combines 1st generation and 2nd generation bioethanol production, is designed to improve economic benefits under environmental and social restrictions. In this study, an optimal HGBSC is designed in which a new 2nd generation bioethanol supply chain is integrated with the existing 1st generation supply chain under uncertainties such as bioethanol price, bioethanol demand, and biomass yield. A stochastic mixed integer linear programming (SMILP) model is developed to design the optimal configuration of the HGBSC under different sustainability standards. 
Finally, a sustainable industrial symbiosis based hybrid generation bioethanol supply chain (ISHGBSC) is designed that incorporates various industrial symbiosis (IS) configurations into the HGBSC to improve the economic, environmental, social, and energy-efficiency aspects of sustainability under bioethanol price, bioethanol demand, and biomass yield uncertainties. An SMILP model is proposed to design the optimal ISHGBSC, and the Sample Average Approximation algorithm is used as the solution technique. Case studies from North Dakota are used as applications. The results provide managerial insights about the benefits of BBIS configurations within the HGBSC.
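The Sample Average Approximation idea can be shown with a deliberately tiny capacity-planning example: sample demand scenarios once, then pick the first-stage capacity whose average profit across those scenarios is highest. The margin, capacity cost, and demand distribution below are illustrative assumptions, not figures from the dissertation's case studies.

```python
import random
import statistics

def saa_capacity(n_scenarios=500, seed=7):
    """Sample Average Approximation for one first-stage decision: choose a
    plant capacity maximizing average profit over sampled demand scenarios.
    All numeric parameters here are hypothetical."""
    rng = random.Random(seed)
    margin, cap_cost = 2.0, 0.8   # profit per unit sold, cost per unit capacity
    demands = [max(0.0, rng.gauss(100.0, 25.0)) for _ in range(n_scenarios)]
    best_cap, best_profit = None, float("-inf")
    for cap in range(50, 201, 10):           # candidate capacities
        # Second stage (recourse): sell min(capacity, demand) per scenario.
        avg = statistics.fmean(margin * min(cap, d) - cap_cost * cap
                               for d in demands)
        if avg > best_profit:
            best_cap, best_profit = cap, avg
    return best_cap, best_profit

cap, profit = saa_capacity()
```

Using the same scenario sample for every candidate acts as common random numbers, which sharpens the comparison between capacities; the full SMILP versions replace this enumeration with a solver over many coupled decisions.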
  • Item
    Modeling and Optimization of Biofuel Supply Chain Considering Uncertainties, Hedging Strategies, and Sustainability Concepts
    (North Dakota State University, 2013) Awudu, Iddrisu Kaasowa
    Due to the energy crisis and environmental concerns, alternative energy has attracted much attention in both industry and academia. Biofuel is one type of renewable energy that can reduce reliance on fossil fuels while also reducing environmental impact and providing social benefits. However, delivering a competitive biofuel product requires a robust supply chain. The biofuel supply chain (BSC) consists of raw material sourcing, transporting raw materials to pre-treatment and biorefinery sites, pre-treating the raw material, producing biofuel, and transporting the produced biofuel to the final demand zones. Because uncertainties are involved throughout the supply chain, risks are introduced. We first propose a stochastic production planning model for a biofuel supply chain under demand and price uncertainties. A stochastic linear programming model is proposed, and Benders decomposition (BD) with a Monte Carlo simulation technique is applied to solve it. A case study compares the performance of a deterministic model and the proposed stochastic model. The results indicate that the proposed model obtains a higher expected profit than the deterministic model under different uncertainty settings. Sensitivity analyses are performed to gain management insights. Secondly, a hedging strategy is proposed for a hybrid generation biofuel supply chain (HGBSC). Under this strategy, corn can be purchased either through futures contracts or on the spot market, while the sale of the ethanol end-product is hedged using futures. A two-stage stochastic linear programming method with the hedging strategy is proposed, and a multi-cut Benders decomposition algorithm is used to solve the model. Prices of feedstock and ethanol end-products are modeled as mean-reverting (MR) processes. Profit realizations under hedging and non-hedging are compared, and hedging performs better than non-hedging at the lower end of the profit distribution. 
Further sensitivity analyses are conducted to provide managerial insights. Finally, sustainability concepts, comprising economic, environmental, and social sustainability, are incorporated into the HGBSC. A two-stage stochastic mixed integer linear programming approach is used, and the proposed HGBSC model is solved using Lagrangian Relaxation (LR) and Sample Average Approximation (SAA). A representative case study in North Dakota is used for this study.
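For concreteness, a mean-reverting price process of the kind used for feedstock and ethanol prices can be simulated with a simple Euler-discretized Ornstein-Uhlenbeck recursion. The parameter values below (long-run mean, reversion speed, volatility) are illustrative assumptions, not estimates from the dissertation.

```python
import random

def simulate_mr_prices(p0, long_run_mean=2.5, kappa=0.3, sigma=0.15,
                       n_steps=250, seed=42):
    """Euler-discretized mean-reverting (Ornstein-Uhlenbeck) price path:
    p[t+1] = p[t] + kappa * (long_run_mean - p[t]) + sigma * noise,
    with kappa controlling how fast prices revert to the long-run mean."""
    rng = random.Random(seed)
    path = [p0]
    for _ in range(n_steps):
        p = path[-1]
        path.append(p + kappa * (long_run_mean - p) + sigma * rng.gauss(0.0, 1.0))
    return path

# A path started far above the long-run mean drifts back toward it.
far = simulate_mr_prices(p0=5.0)
```

Mean reversion is what makes hedging analysis interesting here: unlike a random walk, the process pulls prices back toward a long-run level, so futures and spot purchases carry systematically different risk profiles.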
  • Item
    Multi-Objective Optimal Phasor Measurement Units Placement in Power Systems
    (North Dakota State University, 2014) Khiabani, Vahidhossein
    The extensive development of power networks has increased the requirements for robust, reliable, and secure monitoring and control techniques based on the concept of the Wide Area Measurement System (WAMS). Phasor Measurement Units (PMUs) are key elements in WAMS-based operation of power systems. Most existing algorithms consider the optimal PMU placement problem with the main objective of ensuring observability; they account for cost and bus observability while ignoring the reliability of both the WAMS and the PMUs. Given the twin, conflicting objectives of cost and reliability, this dissertation aims to model and solve a multi-objective optimization formulation that maintains full system observability at minimum cost while exceeding a pre-specified level of reliability of observability. No unique solution exists for these conflicting objectives; hence, the model finds the best tradeoffs. Because the reliability-based PMU placement model is NP-hard (non-deterministic polynomial-time hard), the mathematical model can only address small problems. This research accomplishes the following: (a) modeling and solving the multi-objective PMU placement model, together with its observability analysis, for IEEE standard test systems; and (b) developing heuristic algorithms to increase the scalability of the model and solve large problems. In short, early consideration of the reliability of observability in the PMU placement problem provides a balanced approach that increases the overall reliability of the power system and reduces the cost of reliability. The findings help show and understand the effectiveness of the proposed models. However, the increased cost associated with the increased reliability would be negligible when considering the cost of blackouts to commerce, industry, and society as a whole.
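A common heuristic baseline for PMU placement under full observability is a greedy set cover: a PMU at a bus observes that bus and its neighbors, and PMUs are placed wherever they cover the most still-unobserved buses. The sketch below uses the standard IEEE 14-bus line topology but ignores the cost and reliability objectives, so it is only a simplified stand-in for the dissertation's multi-objective formulation.

```python
def greedy_pmu_placement(adjacency):
    """Greedy set-cover heuristic: repeatedly place a PMU at the bus that
    observes the most currently unobserved buses (itself plus neighbors)."""
    n = len(adjacency)
    covers = [{i, *adjacency[i]} for i in range(n)]
    unobserved, placed = set(range(n)), []
    while unobserved:
        bus = max(range(n), key=lambda i: len(covers[i] & unobserved))
        placed.append(bus)
        unobserved -= covers[bus]
    return placed

# Standard IEEE 14-bus line topology, with buses shifted to 0-indexing.
IEEE14 = {
    0: [1, 4], 1: [0, 2, 3, 4], 2: [1, 3], 3: [1, 2, 4, 6, 8],
    4: [0, 1, 3, 5], 5: [4, 10, 11, 12], 6: [3, 7, 8], 7: [6],
    8: [3, 6, 9, 13], 9: [8, 10], 10: [5, 9], 11: [5, 12],
    12: [5, 11, 13], 13: [8, 12],
}
pmus = greedy_pmu_placement([IEEE14[i] for i in range(14)])
```

Greedy cover guarantees observability but not minimality or reliability; the multi-objective formulation instead trades placement cost against the probability that each bus remains observed under PMU failures.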
  • Item
    Optimization of Large-Scale Sustainable Renewable Energy Supply Chains in a Stochastic Environment
    (North Dakota State University, 2014) Osmani, Atif
    Due to the increasing demand for energy and environmental concerns about fossil fuels, it is becoming increasingly important to find alternative renewable energy sources. Biofuels produced from lignocellulosic biomass feedstocks show enormous potential as a renewable resource. Electricity generated from the combustion of biomass is also an important type of bioenergy, and renewable resources like wind show great potential for electricity generation. In order to deliver competitive renewable energy products to the end-market, robust renewable energy supply chains (RESCs) are essential. Research is needed in two distinct types of RESCs: 1) lignocellulosic biomass-to-biofuel (LBSC); and 2) wind energy/biomass-to-electricity (WBBRESC). An LBSC is a complex system subject to multiple uncertainties, including: 1) the purchase price and availability of biomass feedstock; and 2) the sale price and demand of biofuels. To ensure LBSC sustainability, the following decisions need to be optimized: a) allocation of land for biomass cultivation; b) selection of biorefinery sites; c) choice of biomass-to-biofuel conversion technology; and d) production capacity of biorefineries. The major uncertainty in a WBBRESC concerns wind speeds, which impact the power output of wind farms. To ensure WBBRESC sustainability, the following decisions need to be optimized: a) site selection for wind farms, biomass power plants (BMPPs), and grid stations; b) generation capacity of wind farms and BMPPs; and c) transmission capacity of power lines. If the multiple uncertainties in RESCs are not jointly considered in the decision-making process, the resulting non-optimal (or even infeasible) solutions generate lower profits, increased environmental pollution, and reduced social benefits. This research proposes a number of comprehensive mathematical models for the stochastic optimization of RESCs. 
The proposed large-scale stochastic mixed integer linear programming (SMILP) models are solved to optimality using suitable decomposition methods (e.g., Benders decomposition) and appropriate sampling-based algorithms (e.g., Sample Average Approximation). Overall, the research outcomes will help design robust RESCs focused on sustainability in order to optimally utilize renewable resources in the near future. The findings can be used by renewable energy producers to operate sustainably in an efficient and cost-effective manner, boost the regional economy, and protect the environment.
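The effect of wind-speed uncertainty on wind farm output, the major uncertainty noted above, can be illustrated with a standard piecewise turbine power curve evaluated over sampled wind-speed scenarios. The cut-in, rated, and cut-out speeds, rated power, and Weibull parameters below are illustrative assumptions, not data from the dissertation.

```python
import random

def wind_power(speed, cut_in=3.0, rated_speed=12.0, cut_out=25.0,
               rated_kw=2000.0):
    """Piecewise turbine power curve: zero below cut-in and at/above
    cut-out, a cubic ramp between cut-in and rated speed, and constant
    rated power between rated speed and cut-out."""
    if speed < cut_in or speed >= cut_out:
        return 0.0
    if speed >= rated_speed:
        return rated_kw
    frac = (speed ** 3 - cut_in ** 3) / (rated_speed ** 3 - cut_in ** 3)
    return rated_kw * frac

# Expected turbine output under sampled wind-speed scenarios
# (Weibull draws with scale 8 m/s and shape 2, i.e., Rayleigh-like).
rng = random.Random(3)
speeds = [rng.weibullvariate(8.0, 2.0) for _ in range(5000)]
expected_kw = sum(wind_power(s) for s in speeds) / len(speeds)
```

Because the curve is nonlinear, expected output under uncertain wind differs from output at the expected wind speed, which is exactly why the stochastic models sample wind scenarios rather than planning on averages.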
  • Item
    Improvement of Wind Forecasting Accuracy and its Impacts on Bidding Strategy Optimization for Wind Generation Companies
    (North Dakota State University, 2012) Li, Gong
    One major issue with wind generation is its intermittency and uncertainty, due to the highly volatile nature of the wind resource; this affects both the economics and the operation of wind farms and distribution networks. There is thus an urgent need for modeling methods that produce accurate and reliable forecasts of wind power generation. Meanwhile, along with the ongoing deregulation and liberalization of electricity markets, wind energy is expected to be auctioned directly in the wholesale market. This brings wind generation companies another issue of particular importance: how to maximize profits by optimizing bids in the gradually deregulated electricity market based on improved wind forecasts. The main objective of this dissertation research is therefore to investigate and develop reliable modeling methods for tackling these two issues. To reach this objective, three main research tasks are identified and accomplished. Task 1 concerns testing forecasting models for wind speed and power. After a thorough investigation of currently available forecasting methods, several representative models, including autoregressive integrated moving average (ARIMA) and artificial neural networks (ANN), are developed for short-term wind forecasting. The forecasting performances are evaluated and compared in terms of mean absolute error (MAE), root mean square error (RMSE), and mean absolute percentage error (MAPE). The results reveal that no single model outperforms the others universally, indicating the need to generate a single robust and reliable forecast by applying a post-processing method. Accordingly, in Task 2, a reliable and adaptive model for short-term wind power forecasting is developed via adaptive Bayesian model averaging (BMA) algorithms. Experiments are performed for both long-term wind assessment and short-term wind forecasting. 
The results show that the proposed BMA-based model can consistently provide adaptive, reliable, and comparatively accurate forecast results in terms of MAE, RMSE, and MAPE. It also provides a unified approach to tackle the challenging model selection issue in wind forecasting applications. Task 3 concerns developing a modeling method for optimizing the wind power bidding process in the deregulated electricity wholesale market. Optimal bids on wind power must take into account the uncertainty in wind forecasts and wind power generation. This research investigates combining improved wind forecasts with agent-based models to optimize bids and maximize net earnings. The WSCC 9-bus, 3-machine power system network and the IEEE 30-bus, 9-GenCo power system network are adopted, and both single-sided and double-sided auctions are considered. The results demonstrate that improving wind forecasting accuracy helps increase the net earnings of wind generation companies, and that implementing agent learning algorithms further improves the earnings. The results also verify that agent-based simulation is a viable modeling tool for providing realistic insights into the complex interactions among different market participants and various market factors.
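The error metrics and a simplified model-combination step can be sketched in a few lines. The Gaussian-likelihood weighting below is a basic stand-in for the adaptive Bayesian model averaging scheme described in the abstract, and the wind-speed values and the two forecast series are hypothetical.

```python
import math

def mae(actual, pred):
    return sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)

def rmse(actual, pred):
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual))

def mape(actual, pred):
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, pred)) / len(actual)

def bma_weights(actual, forecasts, sigma=1.0):
    """Posterior-style weights from Gaussian likelihoods of past errors
    under a flat prior over models -- a simplified stand-in for adaptive BMA."""
    logliks = [sum(-((a - p) ** 2) / (2 * sigma ** 2)
                   for a, p in zip(actual, pred)) for pred in forecasts]
    m = max(logliks)                         # subtract max for stability
    w = [math.exp(ll - m) for ll in logliks]
    return [x / sum(w) for x in w]

# Hypothetical wind speeds (m/s) and two competing model forecasts.
actual  = [7.2, 6.8, 8.1, 9.0, 7.5]
arima_f = [7.0, 7.1, 7.9, 8.6, 7.8]
ann_f   = [6.0, 7.9, 7.0, 9.9, 6.4]
weights = bma_weights(actual, [arima_f, ann_f])
combined = [weights[0] * a + weights[1] * b for a, b in zip(arima_f, ann_f)]
```

The combined forecast leans toward whichever model has tracked recent observations better, which is the intuition behind using BMA as a post-processing step when no single model wins universally.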
  • Item
    Electrical Performance Analysis of a Novel Embedded Chip Technology
    (North Dakota State University, 2012) Sarwar, Ferdous
    Recently, ultra-thin embedded die technology has gained much attention for its reduced footprint, light weight, conformality, and three-dimensional assembly capabilities. The traditional flexible circuit fabrication process has shown its limitations in meeting the demand for increasing packaging density. Embedded die technology can be successfully utilized to develop flexible printed circuits that satisfy the demand for reliable, high-density packaging. With tremendous application potential in wearable and disposable electronics, the reliability of the flexible embedded die package is of paramount importance. Presented is the author's contribution to a novel fabrication process for flexible packages with ultra-thin (below 50 µm) dice embedded into an organic polymer substrate, along with the results of an investigation of the electrical performance of embedded bare dice bumped using three different techniques. In this research, an embedded flexible microelectronic packaging technology was developed, and the reliability of different packages was evaluated through JEDEC test standards based on their electrical performance. The reliability tests of the developed packages indicated better and more stable performance for stud-bump-bonded packages. This research also covered the thinning and handling of ultra-thin chips, die metallization, stud bump formation, laser ablation of polymers, and the assembly of ultra-thin dice. The stud-bumped flexible packages designed and developed in this research have promising application potential in wearable RFID tags, smart textiles, and three-dimensional stacked packaging, among many other application areas.