Assignment Chef


Assignment catalog

33,401 assignments available

[SOLVED] MUSA 650 Homework 2 Supervised Land Use Classification with Google Earth Engine

MUSA 650 Homework 2: Supervised Land Use Classification with Google Earth Engine

In this assignment, you will use Google Earth Engine via Python to implement multi-class land cover classification. You will hand-label Landsat 8 satellite images, which you will then use to train a random forest model. Along the way, you will consider practical remote sensing issues like cloud cover, class imbalance, and feature selection.

Given that hand-labeling data can be time-consuming, you are encouraged to work in pairs or groups of three to share the workload. You may collaborate on generating the hand-labeled data, provided you submit separate assignment files. If you choose to do this, you should all use the same ROI, of course. You are responsible for figuring out the code independently and may refer to tutorials, code examples, or AI support, but please cite all sources. In particular, we encourage you to consult the official geemap Python package for Google Earth Engine, the online course Spatial Thoughts, and the Google Earth Engine Tutorials book.

Submit a single Jupyter Notebook containing code, narrative text, visualizations, and answers to each question. Please also upload your classification results as a GeoTIFF and your accuracy assessment as a CSV file. Open a pull request from your fork of this repository to the main repository for submission.

1. Setup

For this assignment, you will define a region of interest (ROI) of your choice. We recommend picking an urban area large enough to yield a sufficient sample size but not so large that processing takes excessively long. You'll also use Landsat 8 satellite imagery from USGS for this assignment. Choose images from 2023, filtering for images with minimal cloud cover.
2. Data Collection and Feature Engineering

2.1 Collecting and Labeling Training Data

Using the interactive geemap interface or another approach (e.g., QGIS, ArcGIS, a GeoJSON file), create at least 100 samples (points or polygons) for each of the following four classes: urban, bare, water, and vegetation. (Again, we encourage you to work in pairs or groups of three to generate these hand labels.) Use visual cues and manual inspection to ensure that the samples are accurate. Assign each class a unique label (e.g., 0 for urban, 1 for bare, 2 for water, and 3 for vegetation) and merge the labeled samples into a single dataset. You are free to propose any labels you like, as long as 1) you include at least 4 classes, and 2) you justify why they are appropriate for a remote sensing task (for example, a label for ice cream shops wouldn't make sense, because those can't be detected from aerial imagery).

2.2 Feature Engineering

For possible use in the model, calculate and add the following spectral indices:
● NDVI (Normalized Difference Vegetation Index)
● NDBI (Normalized Difference Built-up Index)
● MNDWI (Modified Normalized Difference Water Index)

Additionally, add elevation and slope data from a DEM. Normalize all image bands to a 0 to 1 scale for consistent model input. For bonus points, consider adding kernel filters (e.g., edge detection, smoothing) to see if they improve model performance.

3. Model Training and Evaluation

3.1 Model Training

Split your data into a training dataset (70%) and a validation dataset (30%). Train and evaluate a random forest model using the training set with all engineered features. After training, analyze variable importance scores to justify each feature's inclusion. Identify which features are most influential in the classification. Report the final features that you keep in your model.
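The spectral indices in section 2.2 are simple band ratios, so they can be sketched with plain numpy before moving to Earth Engine (where you would typically use ee.Image.normalizedDifference). The band arrays and values below are synthetic stand-ins, not assignment data:

```python
import numpy as np

def normalized_difference(a, b):
    """(a - b) / (a + b), guarding against division by zero."""
    return (a - b) / np.maximum(a + b, 1e-9)

def minmax_scale(band):
    """Rescale a single band to the 0-1 range."""
    lo, hi = band.min(), band.max()
    return (band - lo) / max(hi - lo, 1e-9)

# Synthetic 2x2 stand-ins for Landsat 8 bands (green, red, NIR, SWIR1).
green = np.array([[0.1, 0.2], [0.3, 0.4]])
red   = np.array([[0.2, 0.1], [0.2, 0.3]])
nir   = np.array([[0.6, 0.5], [0.4, 0.3]])
swir1 = np.array([[0.3, 0.3], [0.5, 0.2]])

ndvi  = normalized_difference(nir, red)      # vegetation
ndbi  = normalized_difference(swir1, nir)    # built-up
mndwi = normalized_difference(green, swir1)  # open water

# Normalize each derived band to 0-1 for consistent model input.
stacked = np.stack([minmax_scale(b) for b in (ndvi, ndbi, mndwi)])
print(ndvi[0, 0], stacked.shape)  # 0.5 (3, 2, 2)
```

The same pattern extends to any additional layers (elevation, slope, kernel filters) before stacking them as model features.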
3.2 Accuracy Assessment

Use the trained model to classify the Landsat 8 image, creating a land cover classification map with classes for urban, bare, water, and vegetation (or whatever classes you have chosen). Using the validation data, generate a confusion matrix and calculate the overall accuracy, precision, and recall. Which classes were confused most often with each other? Why do you think this was?

Visually compare your land cover data for your ROI with the corresponding land cover data from the European Space Agency. Do your classifications agree? If not, do you notice any patterns in the types of land cover where they differ, or any particular features in the imagery that are hard for your model to recognize (e.g., sand, water, or asphalt)?

Export the classified image as a GeoTIFF and the confusion matrix and accuracy metrics to a CSV file for documentation.

4. Reflection Questions

What limitations did you run into when completing this assignment? What might you do differently if you repeated it, or what might you change if you had more time and/or resources?

What was the impact of feature engineering? Which layers most contributed to the model? Did you expect this? Why or why not?

Did you find it difficult to create the training data by hand? Did you notice any issues with class imbalance? If so, how might you resolve this in the future (hint: consider a different sampling technique)?

Did your model perform better on one class than another? Why? Can you think of a reason that this might be good or bad depending on the context?
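The accuracy assessment in section 3.2 is small enough to sketch directly; the toy labels below are illustrative, not drawn from the assignment data:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows are true classes, columns are predicted classes."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def metrics(cm):
    """Overall accuracy plus per-class precision and recall."""
    tp = np.diag(cm).astype(float)
    precision = tp / np.maximum(cm.sum(axis=0), 1)  # per predicted class
    recall = tp / np.maximum(cm.sum(axis=1), 1)     # per true class
    accuracy = tp.sum() / cm.sum()
    return accuracy, precision, recall

# Toy validation labels: 0=urban, 1=bare, 2=water, 3=vegetation.
y_true = np.array([0, 0, 1, 1, 2, 2, 3, 3])
y_pred = np.array([0, 1, 1, 1, 2, 2, 3, 0])

cm = confusion_matrix(y_true, y_pred, 4)
accuracy, precision, recall = metrics(cm)
print(accuracy)  # 0.75
```

For the CSV deliverable, something like np.savetxt("confusion_matrix.csv", cm, delimiter=",", fmt="%d") would export the matrix.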

$25.00

[SOLVED] MATH1014 Calculus II Problem Set 3 L01 Spring 2025

MATH1014 Calculus II L01 (Spring 2025) Problem Set 3

1. (a) Let m and n be non-negative integers. Evaluate the following integrals, distinguishing all possible cases for m and n.
   (b) Let n be a positive integer and let f: ℝ → ℝ be a function defined by f(x) = a₁ sin x + a₂ sin 2x + ⋯ + aₙ sin nx, where a₁, a₂, …, aₙ are real numbers. Show that we must have

2. Evaluate the following antiderivatives.
   Hint: In (d), first consider ∫ e^(x²) dx.

3. Evaluate the limit
   Hint: Take the natural logarithm.

4. Let a > 0 and let f: [−a, a] → ℝ be an odd continuous function. Show that

5. The following are "proofs" of some obviously false statements. Point out what is wrong in each of these "proofs".
   (a) A "proof" of the statement that "π = 0".
   (b) A "proof" of the statement that "every integral equals zero".
   (c) A "proof" of the statement that "0 = 1".

6. Let f be a function which is continuously differentiable on [0, 1].
   (a) For every a, b ∈ [0, 1], show that
   (b) Let n ≥ k ≥ 1 be integers. Using the result from (a) and the generalized Mean Value Theorem for integrals (Example 5.49 (a)), show that there exists such that
   (c) Now for each n ∈ ℕ, we let Show that Hence, using the result from (b), deduce that

7. Let f: [0, +∞) → ℝ be the function defined by f(x) = xe^x.
   (a) Show that f is strictly increasing.
   (b) Now f is one-to-one according to (a), so we let g be the inverse of f, i.e. g = f⁻¹.
       (i) Write down the domain of g. Show that for every x in the interior of the domain of g.
       (ii) Using the result from (b)(i) or otherwise, evaluate the antiderivative ∫ g(x) dx, expressing your answer in terms of g and other elementary functions only.
       (iii) Hence, or otherwise, evaluate the integral

8.
   (a) Let n be a non-negative integer, and let f: ℝ → ℝ be the polynomial f(x) = (x² − 1)^n.
       (i) Show that (x² − 1)f′(x) − 2nx f(x) = 0 for every x ∈ ℝ.
       (ii) Hence, show that (x² − 1)f^(n+2)(x) + 2x f^(n+1)(x) − n(n + 1)f^(n)(x) = 0 for every x ∈ ℝ.
       Hint: Recall "Leibniz's rule" from chapter 3. Part (a) is almost the same as Example 3.69.
   (b) For each non-negative integer n, let pₙ: ℝ → ℝ be the function
       (i) Using the result from (a)(ii), show that for every non-negative integer n.
       (ii) Hence deduce that if m and n are distinct non-negative integers, then

9. For each non-negative integer n, let
   (a) For each positive integer n, show that Hence show that
   (b) Using the result from (a), find the value of Iₙ in terms of n.
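Problem 4 concerns the classic fact that an odd continuous function integrates to zero over a symmetric interval. The exact statement is omitted from this listing, so the sketch below only assumes the identity is the usual ∫ from −a to a of f(x) dx = 0, and checks it numerically for one odd function:

```python
def integrate(f, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# An odd continuous function on [-2, 2].
f = lambda x: x**3 - 5 * x

val = integrate(f, -2.0, 2.0)
print(val)  # ~0: sample points pair up symmetrically and cancel
```

On a symmetric interval the midpoint nodes come in ± pairs, so for an odd integrand the contributions cancel exactly up to floating-point error.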

$25.00

[SOLVED] DTS311TC FINAL YEAR PROJECT

DTS311TC FINAL YEAR PROJECT

The application of a trust model based on confidence and reputation to NPC trust assessment and decision-making in market simulation games

Proposal Report

In Partial Fulfillment of the Requirements for the Degree of Bachelor of Data Science and Big Data Technology

Abstract

Trust is crucial in multi-agent systems and is a key factor affecting decision-making. In market trading scenarios in particular, agents must assess each other's reliability in order to conduct subsequent transactions. This project aims to develop a framework with dynamic trust calculation at its core and apply it to the trust assessment of NPCs in a market simulation game. A dynamically adjusted trust value calculation method is constructed by combining confidence (based on direct interaction data) and reputation (based on feedback information shared between NPCs). Compared with traditional static models, the dynamic trust mechanism in this study can respond to changes in player behavior in real time, thereby more realistically simulating complex interaction scenarios. The focus of this project is the generation of trust values, the update algorithm, and the optimization of its performance; the specific decision-making behavior of the NPC is a secondary implementation. Experimental verification is used to demonstrate the applicability and performance of the model in a dynamic game environment.

Contents

1 Introduction

1.1 Introduction and Background

Trust computing, as an important research direction in multi-agent systems, provides theoretical support for modeling complex interactive behaviors. Trust plays an important role in the effective interaction of multi-agent systems. However, most existing systems rely on static or rule-based decision models, which cannot adapt to complex real-time player behaviors, resulting in an interaction experience that lacks realism.
In recent years, models capable of dynamically computing trust have shown strong potential for application in multi-agent systems [1]. These models can adapt to different interaction environments. In this project, a dynamic trust calculation model is applied to a market simulation game, enabling NPCs to dynamically assess the player's trustworthiness based on the player's behavioral data and environmental context, thereby providing more effective support for the trading process. This model enables the NPCs in the game to continuously adjust their decisions during interactions, thereby improving the player experience.

1.2 Scope and Objectives

1.2.1 Scope
- Develop a computational trust model for a market-oriented simulation game.
- Design a lightweight dynamic trust value calculation framework.
- Focus the main research on trust value calculation, and implement NPC decision logic as a secondary task.

1.2.2 Objectives
- Design a confidence and reputation calculation algorithm based on fuzzy logic and reputation aggregation.
- Develop a dynamic trust value update mechanism that responds to player behavior.
- Verify the accuracy and applicability of the model in a simulated environment.

2 Literature Review

2.1 Related Work

What is a trust model? The method used to specify, evaluate, and set up trust relationships amongst entities for calculating trust is referred to as a trust model. Trust modeling is the technical approach used to represent trust for the purpose of digital processing [2].

Marsh (1994) was one of the first scholars to formalize trust in computing systems. In his approach, he integrated various aspects of trust from disciplines such as economics, psychology, philosophy, and sociology. Since then, many trust models have been constructed for various computing paradigms such as ubiquitous computing, P2P networks, and multi-agent systems.
[2][3]

2.2 Trust Models in Multi-Agent Systems

Trust computing plays an important role in multi-agent systems, providing core theoretical support for the modeling of complex interactions and intelligent decision-making [4].

The earliest trust computing models were usually static, for example assessing trust values through fixed rules or a single calculation based on past data. Dynamic trust models overcome the shortcomings of static models through real-time updating mechanisms [5]. To cope with the needs of multi-agent interactions in complex environments, multi-dimensional trust assessment models have gradually emerged [6]. For example, in addition to the traditional confidence and reputation dimensions, researchers have also introduced factors such as social relationships and risk assessment. These models improve the accuracy and applicability of trust values by weighting and fusing the evaluation results of multiple dimensions [7].

Interest in trust models is not decreasing: the number of models in the literature continues to grow [8], and they are being applied to increasingly specific scenarios to support project development. This makes research on trust models timely and relevant.

2.3 Trust Model Selection

Many models for calculating trust have been developed. Marsh (1994) first proposed a formal calculation framework for trust, which laid the theoretical foundation for quantifying trust values. He treats trust as a value between −1 and 1, and his calculation takes into account the risk of the interaction and the competence of the interaction partner [9]. However, these concepts are not given any precise grounding, and past experience and reputation values are not considered. In reputation-based models, reputation symbolizes trust, and the level of ability is collected from the social network in which the agent is located.
The main value of this model is to use reputation to symbolize trust, but this assessment is too simple [10]. There are also probabilistic methods for building trust models, which take into account past experience and reputation, but they do not significantly help in understanding the decision-making of the agent [11].

We have chosen a trust model based on reputation and confidence. It combines confidence and reputation for trust modeling for the first time, providing a context-aware trust calculation method, and it supplies a specific algorithmic framework to support dynamic weight adjustment [12]. It is well suited to market simulation games.

2.4 Trust Computing in Games

The application of trust computing models in games is mainly reflected in two aspects: improving the intelligent behavior of NPCs and enhancing the interactive experience of players, especially in real-time interactions in dynamic environments and complex decision-making scenarios. Market simulation games, as a typical open interactive scenario, provide broad scope for applying the trust computing model. The evaluation of player trust in games is also very important: applying trust computing in games helps achieve fairness in online games and reduces the spread of untrustworthy information among players [13].

3 Project Plan

3.1 Proposed Solution / Methodology

3.1.1 Data Preparation

1. Direct Interaction Data (Confidence):
- Get the player's actual performance on a specific issue from the historical case base CB.
- Key information for each interaction includes:
  - Issue assignments O = {x₁ = v₁, x₂ = v₂, …};
  - Execution results O′ = {x₁ = v₁′, x₂ = v₂′, …};
  - Timestamp t.
- Use a utility function uₓ(v) to evaluate the utility of each issue value.

2.
Indirect Interaction Data (Reputation): reputation information collected from the social network of agents.

3.1.2 Confidence Calculation

Confidence measures the reliability of the target agent based on direct interactions. The process includes:

(1) Obtain the distribution of utility changes from historical data. Extract the distribution of utility changes for an issue x from the interaction records, ΔUₓ = Uₓ(v′) − Uₓ(v), where v is the value agreed in the contract and v′ is the actual implementation result.

(2) Estimate the confidence interval. Determine the confidence interval [v⁻, v⁺] for the agent's possible performance on issue x based on historical data.

(3) Fuzzify the assessment. Map the confidence interval to linguistic labels L = {Poor, Average, Good} and assign confidence levels C(x, L) to each label.

(4) Compute the expected value range. Using the confidence levels and the membership function for each label L, calculate the expected value range for issue x.

(5) Calculate the maximum utility loss. Within the expected value range, calculate the maximum utility loss.

(6) Derive the confidence trust value. Based on the maximum utility loss, compute the trust value for issue x based on confidence.

3.1.3 Reputation Calculation

The reputation value measures indirect information collected from other agents and is calculated as follows:

(1) Obtain the reputation value distribution Rep(x, L) of the target agent on issue x from the social network, where L is the fuzzy set label.

(2) Calculate the expected value range. As with confidence, calculate the expected value range for issue x based on the reputation value distribution.

(3) Calculate the maximum utility loss based on the reputation value.

(4) Calculate the reputation trust value of the target agent on issue x based on the maximum utility loss.
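The proposal omits the exact formulas, so the following is only an illustrative sketch of the confidence computation: it takes the pessimistic end of the historical utility-change distribution as the maximum expected utility loss and converts it into a trust value in [0, 1]. The fuzzification into {Poor, Average, Good} labels is skipped, and worst_loss is an assumed normalization constant:

```python
def confidence_trust(utility_deltas, worst_loss=1.0):
    """Illustrative confidence-based trust from historical utility changes.

    utility_deltas: values dU = U(actual) - U(agreed) from past interactions
    with this agent. Returns a value in [0, 1]; 1 means no expected loss.
    """
    lo = min(utility_deltas)           # pessimistic end of the interval
    max_loss = max(0.0, -lo)           # maximum expected utility loss
    return 1.0 - min(max_loss / worst_loss, 1.0)

reliable   = [0.0, -0.05, 0.02, -0.01]   # delivers close to the contract
unreliable = [-0.6, -0.8, -0.3, -0.7]    # consistently under-delivers

print(confidence_trust(reliable), confidence_trust(unreliable))
```

A fuller implementation would bucket the ΔU distribution into the fuzzy labels and weight each label's utility loss by its confidence level, as steps (3)-(5) describe.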
3.1.4 Combining Confidence and Reputation

In practical scenarios, confidence and reputation are often used in combination. The combination process is as follows:

(1) Determine the weight:
- |CB|: the number of interactions in the history.
- θ_min: the confidence threshold, i.e. the minimum number of interactions beyond which the confidence value completely dominates trust.

(2) Calculate the comprehensive expected value range. Combine the confidence level and reputation to calculate the comprehensive expected value range for issue x.

(3) Calculate the overall trust value. Calculate the maximum utility loss based on the overall expected value range; the final overall trust score is then obtained.

3.1.5 Construction of the Trust Model

(1) Confidence as the only source of trust (Trust = Confidence). In this case, only direct interactions are considered a valid source of information for measuring the performance of another agent. The first contract will be full of uncertainty, and this definition of trust only works effectively once there have been enough interactions. The trust value for issue x is then defined accordingly.

(2) Reputation as the only source of trust (Trust = Reputation). When the number of interactions is small, confidence cannot provide sufficient information, and reputation information may be more useful. This is a common situation. The trust value for issue x is then defined accordingly.

(3) Combining confidence and reputation (Trust = Confidence and Reputation). In most cases, it is more reasonable to combine confidence and reputation. The logic is that as interactions between agents increase, NPCs become more and more dependent on their own confidence measurement rather than the reputation information provided by others, because direct interactions are usually more accurate than indirect information.
Finally, the trust value of issue x is defined accordingly. Our definition of trust (especially the last approach) views trust as a dynamic and rational concept.

3.1.6 Decision-Making Framework

The comprehensive trust value T(β, X) is compared with a preset threshold. If T(β, X) is greater than the threshold, the NPC accepts the player's transaction request; if T(β, X) is less than the threshold, the NPC rejects the transaction.

3.2 Experimental Design

Figure 1: Decision-making process flow chart.

3.2.1 Testbed Architecture

We design a test platform to simulate the interaction between NPCs and players and to test the NPC decision-making mechanism based on the trust model (confidence, reputation, and comprehensive trustworthiness), ensuring the rationality and dynamic adaptability of the decisions.

Figure 2: Testbed architecture (1). Figure 3: Testbed architecture (2).

The testbed architecture has four components: the simulation engine, the database, the user interface, and the agent framework. The simulation engine is responsible for starting the game, controlling the simulation environment by adjusting parameters, and managing processes such as player requests, NPC trust calculations, and decision execution. The database stores environment and agent data; the testbed also provides the ability to record other data types in the database, as well as data replay and analysis tools. The user interface provides real-time visualization of NPC-player interactions, trust value changes, and decision results. Figure 4 shows a game monitoring interface.

The agent skeleton is designed to allow the implantation of custom internal trust representations and trust revision algorithms. The Java classes that define the agent skeleton implement all the interfaces necessary for agent-agent interactions (via the simulation engine).
The agent skeleton also handles coordination tasks with the simulation engine, such as opinion formation and evaluation calculations. In the future, agent skeletons could be developed in other programming languages to give agent designers more flexibility.

Figure 4: Game monitoring interface.

3.2.2 Overview of the Process

1. The player sends a request to the NPC (e.g., a quest, a trade). The player's behavior can be designed to be honest, dishonest, or mixed.
2. The simulation engine assigns the request, and the NPC calculates the player's overall trustworthiness.
3. The NPC decides to accept or reject based on the trustworthiness and the set threshold.
4. The player performs the task, and the simulation engine records the task result and updates the trust value.
5. The data storage module records the interaction data and provides analysis and playback functions.

3.2.3 Test Indicators

1. Trust trend: whether the dynamic changes in confidence, reputation, and overall trust under different player behavior patterns are as expected.
2. Decision accuracy: consistency between the NPC's acceptance/rejection decisions and the player's actual behavior.
3. Model adaptability: whether the model can dynamically adjust trust values and decisions based on player behavior patterns.

3.3 Expected Results

3.3.1 Expected Results

NPC decision-making should be highly consistent with player behavior patterns:
1. Honest players: overall trustworthiness gradually increases, and the NPC acceptance rate rises.
2. Dishonest players: overall trustworthiness gradually decreases, and the NPC rejection rate rises.
3. Mixed players: overall trustworthiness and decision-making show dynamic fluctuations.
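The expected trends in section 3.3.1 can be checked with a toy version of the simulation loop. The update rule below is an assumed stand-in, not the project's actual algorithm: a running success rate plays the role of confidence, a fixed neutral prior plays the role of reputation, and the two are blended with the |CB|/θ_min weight described in section 3.1.4:

```python
import random

def simulate(honesty, rounds=50, theta_min=10, seed=0):
    """Toy run of the testbed loop for one player with a given honesty rate."""
    rng = random.Random(seed)
    outcomes = []
    reputation = 0.5  # neutral prior reported by other agents (assumed)
    trust = reputation
    for _ in range(rounds):
        # Player either honors the transaction (1.0) or defects (0.0).
        outcomes.append(1.0 if rng.random() < honesty else 0.0)
        confidence = sum(outcomes) / len(outcomes)  # running success rate
        # Weight on confidence grows with |CB| and saturates at theta_min.
        w = min(len(outcomes) / theta_min, 1.0)
        trust = w * confidence + (1.0 - w) * reputation
    return trust

honest, dishonest = simulate(0.9), simulate(0.2)
print(honest, dishonest)  # honest player ends with the higher trust
```

Decision accuracy (section 3.2.3) could then be measured by thresholding the returned trust, e.g. accepting a request only when it exceeds a preset value such as 0.5.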
3.3.2 Data Visualization

Plot the trust value change curve and the decision distribution map to demonstrate the dynamic adjustment capability of the model.

3.4 Progress Analysis and Gantt Chart

In the current final year project (FYP), the literature research on the application of multi-agent trust models in simulated market games will be completed in October 2024, and a plan proposal will be submitted in November. From November to December, relevant knowledge and skills will be learned, relevant data will be collected, and the trust model will be established and analyzed. From December 2024 to April 2025, the experiments will be completed, the results will be evaluated and compared, and a first draft of the paper will be written in the process. Finally, the paper will be revised and the defense completed.

Figure 5: Gantt chart.

4 Conclusion

This project focuses on the dynamic trust calculation of NPCs in market simulation games. By combining confidence and reputation, an efficient trust assessment framework is proposed, and its applicability and performance in dynamic scenarios are verified through experiments, providing a reference for further research in the field of game AI. However, this study is limited to NPC accept/reject decisions. The trust model described here could also guide NPCs in making more complex decisions, such as modifying the content of a transaction; given the opportunity, this could be studied further in future work.

$25.00

[SOLVED] Principles of Logistics Management Assessment 2

Module Title: Principles of Logistics Management
Assignment Mode: Group Project
Word Count Limit: 2,000 words (+/- 10%)
Citation Format: APA
Marks: 100

Assignment Brief

Choose a cake shop as your assignment topic. It can be an existing small business or a start-up. Refrain from using companies that you have no association with and that exist only on the Internet. Ensure that you know the owner(s) or someone in the organization who can answer any questions posed to them.

Your task in this assignment is to evaluate the logistics of the chosen company, applying the knowledge that you have learnt in this subject, and to assess whether the company's logistics is achieving its objectives, i.e., being responsive or efficient in meeting its customers' needs. The assessment of the company's logistics must meet the following requirements:

1. Introduction (300 words)
- Give a brief background of the company. Explain how the company competes in its industry by looking at the competition, the players in the industry, the regulations that govern them, customer expectations in the supply chain, and so on, which would justify the approach the company is taking.
- The statements that you make must be supported by evidence from credible and independent references. In addition, base your assessment on evidence obtained from the owner(s) of the company themselves, or on publicly available information.

2. Order Fulfilment (500 words)
- Logistics creates value by delivering orders when and where they are needed. An important issue in order fulfilment, then, is to reduce order cycle times.
- In that sense, consider and explain how well the company coordinates all the activities that comprise the order cycle. This would include placing production facilities in the right location, leveraging appropriate process technologies to streamline order processing, and carrying the right quantity and mix of inventory to rapidly fulfil complete orders.

3.
Transportation Management (500 words)
- Transportation cost, availability, and reliability play a vital role in logistics. Decisions such as modal choice, carrier selection, and transportation routing should therefore be considered carefully.
- The modal selection decision is complex, requiring careful costing and trade-off analysis. Explain how the company understands its logistical network and identifies opportunities to leverage logistical expertise so that it meets its customers' needs and requirements.

4. Distribution Management (500 words)
- Warehousing performs a vital storage function, decoupling manufacturing from consumption and thereby making the company's products available when they are needed.
- Assess the distribution network that the company has built to achieve its objective of being responsive or efficient in meeting the needs of its customers. This would require evaluating the number, location, ownership, and automation of warehouse operations, and considering how the distribution network should adapt as demand patterns change.

5. Conclusion (200 words)
- Summarize the key insights. Provide recommendations and practical implications for the case company.

Instructions on Submission

1. Referencing
- All statements of fact from other sources quoted in the essay, including any diagrams, must have in-text references, with a full reference list provided at the end of the assignment, according to the APA 7 system of referencing.
- You are required to fully reference a MINIMUM of 10 references for the group project (e.g., books; journal articles from full-text databases; current affairs magazines; newspapers). The use of the WIKIPEDIA online encyclopedia is NOT allowed.

2. Formatting
- Write your name, ID number, module title, and word count clearly on the cover page. Your assignment should be A4, word-processed, with 1.5 line spacing and a font size of 12 Arial.
- Include a table of contents with page numbering.
- The word count (+/- 10%) excludes the cover page, table of contents, tables or illustrations, and references.

3. Policies
- The penalties for plagiarism and collusion are governed by the Academic Policy of KHEA. The detailed policy information can be found in the Student Handbook.
- The assignment must be submitted online (LMS) by the specific due date, via Turnitin. Any late submission will have marks deducted in accordance with KHEA's late submission policy.

$25.00

[SOLVED] MUSA 650 Homework 1 Basics of Machine Learning

MUSA 650 Homework 1: Basics of Machine Learning

In this assignment, you'll explore fundamental machine learning concepts and techniques, with a focus on data preprocessing, image manipulation, and model evaluation. You are responsible for figuring out the code independently and may refer to tutorials, code examples, or AI support, but please cite all sources. Submit a single Jupyter Notebook containing code, narrative text, visualizations, and answers to each question. Open a pull request from your fork of this repository to the main repository for submission.

Important Notes
● Sample Size Considerations: If experiments take too long with the complete dataset, start with a smaller sample for timely execution. For your final submission, use the full dataset if feasible, but if processing is still too intensive, note your sample sizes clearly. Sample size variations will not affect grading if documented appropriately.
● Data Reshaping: To switch between 2D and 1D representations, use functions like numpy.ndarray.flatten() or numpy.reshape() as needed.

1. Data Exploration

Load the MNIST dataset using the following code, which contains all of the module imports needed for this assignment:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import keras
from keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()

1.1 Dimensionality

What is the type of the training and testing datasets? How many features are in the training dataset? The testing dataset? How many samples are in each dataset?

If an array has a shape of (100, 28, 28), what does each number represent in the context of image data (i.e., which number represents the number of images, and which represent the number of pixels?), and how would it change if you flattened it to a 2D array? How would you convert a 3D array into a 2D array without changing the total number of elements? Describe how flatten() and reshape() can be used for this purpose.
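The reshaping described above can be explored on a synthetic array before touching MNIST; the array contents below are arbitrary stand-ins:

```python
import numpy as np

# A stand-in batch of 100 grayscale 28x28 images.
images = np.arange(100 * 28 * 28).reshape(100, 28, 28)

# (100, 28, 28): 100 images, each 28 pixels tall and 28 pixels wide.
flat = images.reshape(100, -1)  # one 784-long feature vector per image
print(flat.shape)               # (100, 784)

# The total element count is unchanged, so the reshape is lossless.
assert flat.size == images.size
restored = flat.reshape(100, 28, 28)
assert (restored == images).all()

# .flatten() collapses everything into a single 1D array instead.
print(images.flatten().shape)   # (78400,)
```

The `-1` in reshape asks numpy to infer that dimension from the total element count, which is a common idiom when converting image batches into model input.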
Explain why it's necessary to reshape data when transitioning from raw images to model input, particularly in neural networks. What are the implications of reshaping an image array into a vector (1D array) for each sample? (Feel free to turn to Google for this, as long as you cite your sources.)

1.2 Visualization

Select one random example from each category in the testing set, display each 2D image, and label it with the corresponding category name.

2. Data Processing

2.1 Subsetting

Create a 10% random subset of each training and testing set. What is the distribution of each label in the initial train data? What is the distribution of each label in the reduced train data?

Now subset the first 10% of each training and testing set. What is the distribution of each label in the initial train data? What is the distribution of each label in the reduced train data?

When reducing dataset size, what differences might you expect to see in results between randomly selecting samples versus selecting the first portion of the dataset? Is this borne out by the subsets you just created? How does the distribution of the labels in the various subsampled datasets compare to the distribution of the full datasets? Why might subsampling a dataset be beneficial when developing machine learning models? Discuss the trade-offs.

2.2 Feature Engineering

What are the features versus the output in this assignment? Why is it important to distinguish between features (inputs) and outputs (labels) in a machine learning model?

Select all train images labeled "3". Create a single, pixel-wise average image of all of these images. Plot the 2D mean and standard deviation images for category "3" in both the training and testing sets. Comment on the differences between the mean and standard deviation images between the training and testing datasets. Plot the 2D mean and standard deviation images for category "3" in the training and testing sets for the binarized images.
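The pixel-wise mean, standard deviation, and per-pixel binarization used in section 2.2 can be sketched on random stand-in images (the real assignment would use the MNIST images labeled "3"):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for "all training images labeled 3": 50 random 28x28 images.
imgs = rng.random((50, 28, 28))

mean_img = imgs.mean(axis=0)  # pixel-wise mean across the 50 images
std_img = imgs.std(axis=0)    # pixel-wise spread across the 50 images

# Binarize: 1 where a pixel exceeds its own per-pixel mean, else 0.
binarized = (imgs > mean_img).astype(int)
print(mean_img.shape, binarized.min(), binarized.max())
```

Averaging over axis 0 collapses the image axis, so the result is one 28x28 "typical" image; the binarization broadcasts that mean image against every sample.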
Now repeat this for a new label (e.g., "7"). Comment on the differences between the mean and standard deviation images between the training and testing datasets for the binarized images.

Binarize both of the images from the previous question by setting pixel values equal to 1 if they are greater than the mean value for that pixel and equal to 0 if they are less than the mean value for that pixel. In plain English, what are we actually doing when we binarize an image? How does the new pixel value relate to the pixel value of the original image and the mean value for that pixel across all images with that label?

What is the index of the most dissimilar image in category "3" in the training set for the regular images? What about the most similar image? Does this change for the binarized images? If so, why? Make sure to plot all four images with appropriate labels.

What do you think the effect of binarizing these images is from a machine learning perspective? How does binarization of images (converting pixel values to 0 or 1 based on a threshold) affect the representation of features, and what might be the benefits and limitations of this approach? How does what you've just done relate to the idea of standardizing data? Why might it be important to standardize our data before using it to train a model?

Describe how calculating a pixel-wise mean or standard deviation for a set of images can help you understand variations within a category. What does a high standard deviation indicate in this context?

3. Model Training, Validation, and Interpretation

3.1 Support Vector Machine

From the training dataset, select only images from categories "3" and "9". Subdivide the data into Set1 and Set2, with 60% of the data in Set1 and 40% in Set2. Replace category labels with 0 for 3 and 1 for 9. Use Set1 to train a linear support vector machine classifier with default parameters and predict the class labels for Set2.
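A hedged sketch of this split-and-train workflow, assuming scikit-learn is available (the assignment does not prescribe a library) and using random arrays in place of the real flattened "3" and "9" images:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Stand-in for the selected "3"/"9" images, flattened to 784 features.
x = rng.random((100, 784))
y = rng.integers(0, 2, size=100)   # 0 for "3", 1 for "9"

# 60/40 split into Set1 (train) and Set2 (evaluate).
split = int(0.6 * len(x))
set1_x, set2_x = x[:split], x[split:]
set1_y, set2_y = y[:split], y[split:]

# Linear SVM with otherwise-default parameters.
clf = SVC(kernel="linear").fit(set1_x, set1_y)
accuracy = clf.score(set2_x, set2_y)
assert 0.0 <= accuracy <= 1.0
```

With the synthetic labels above the accuracy is meaningless; on the real "3"/"9" images the same code reports the Set2 accuracy the questions below ask for.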
What is the prediction accuracy using the model trained on the training set? What is the prediction accuracy using the model trained on the testing set?

3.2 Modeling with Engineered Data

We describe each image by using a reduced set of features (compared to the n = 784 initial features, one per pixel value) as follows:

● Binarize the image by setting the pixel values to 1 if they are greater than 128 and 0 otherwise.
● For each image row i, find n_i, the sum of 1's in the row (28 features).
● For each image column j, find n_j, the sum of 1's in the column (28 features).
● Concatenate these features to form a feature vector of 56 features.

What is the prediction accuracy using an SVM model trained on the training set? What is the prediction accuracy using an SVM model trained on the testing set? How about the prediction accuracy of a KNN model trained on the training set? And on the testing set? What does this tell you about the potential impacts of feature engineering?

3.3 K-Nearest Neighbors

In the training and testing datasets, select images in the categories 1, 3, 5, 7, and 9. Train a k-NN classifier using 4 to 40 nearest neighbors, with a step size of 4.

For k = 4, what is the label that was predicted with lowest accuracy? For k = 20, what is the label that was predicted with lowest accuracy? What is the label pair that was confused most often (i.e., class A is labeled as B, and vice versa)? Visualize 5 mislabeled samples with their actual and predicted labels.

Based on the patterns in the pixel values for each category, which labels (numbers) do you think the model might struggle to identify or distinguish from one another? Explain why certain labels might be more challenging to separate, considering the similarity in pixel patterns or shapes.

3.4 Comprehension Questions

Why is it important to have separate training and testing datasets? What potential issues arise if you use the same data for both training and evaluation?
If you achieve a high accuracy on the training set but a lower accuracy on the testing set, what might this indicate about your model’s performance and generalization?
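A small illustration of this train/test gap, as a sketch only: the labels below are pure noise, so the only thing a flexible model can do is memorize (scikit-learn and its decision tree are assumed available; any sufficiently flexible model would show the same effect):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 20))
y = rng.integers(0, 2, size=200)   # random labels: nothing to generalize

X_train, X_test = X[:100], X[100:]
y_train, y_test = y[:100], y[100:]

# An unconstrained tree grows until every training sample is
# classified correctly, i.e., it memorizes the training set...
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
train_acc = model.score(X_train, y_train)
test_acc = model.score(X_test, y_test)

assert train_acc == 1.0        # perfect recall of memorized data
assert test_acc < train_acc    # ...but near-chance on unseen data
```

A large gap like this is the classic symptom of overfitting: the model has fit noise or idiosyncrasies of the training set rather than a pattern that generalizes.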


[SOLVED] ESS105H1 Rare Earth Elements: the Hidden Gem of Drive Technology, Conflict and A Green Future

Rare Earth Elements: the Hidden Gem of Drive Technology, Conflict and A Green Future

ESS105H1

In today's world of rapid technological development, we have long become accustomed to high-performance electronic devices, convenient means of transport, and ever more advanced military technology. However, few people realize that all of this relies on a group of key metals known as 'rare earth elements'. Rare earth elements (REEs) include the lanthanides (atomic numbers 57–71) as well as scandium (Sc) and yttrium (Y). Although they are not rare in the earth's crust, their occurrences are so dispersed that they are called rare. Rare earth elements are important because of their unique physical and chemical features, such as excellent magnetism, electrical conductivity, and optical properties (Balaram, 2019). These features allow rare earth elements to play the role of 'vitamins' in the high-tech industry.

From the Bayan Obo mining region in Inner Mongolia, China, to military laboratories in California, USA, rare earth elements have not only driven technological progress but also influenced global environmental policy and geopolitical landscapes. This article will explore in depth why rare earth elements are known as the 'new oil' of the 21st century, from the scientific basis of rare earth elements, to innovations in mining technology and environmental impacts, to their importance in the international political and military arenas (Jordens et al., 2013).

Sub-theme 1: The basics of rare earth elements – not rare, but hard to find

Although rare earth elements are widespread in the earth's crust, they are often dispersed in various minerals at extremely low concentrations, making it difficult to extract them cost-effectively. Rare earth elements include 15 lanthanides, as well as scandium and yttrium, which are similar in nature. These 17 elements are irreplaceable in the high-tech industry.
Neodymium (Nd), for example, is widely used to make high-performance permanent magnets, which are a key component of electric vehicle motors and wind turbines. Europium (Eu) is a key material for the bright red light emitted by LEDs and fluorescent screens. Terbium (Tb) improves the brightness and clarity of liquid crystal displays (LCDs), while lanthanum (La) is used in optical devices and camera lenses to enhance image quality.

The geographical distribution of rare earth elements is markedly uneven. More than 70% of the world's proven rare earth reserves are concentrated in China, particularly in the Bayan Obo mining area in Inner Mongolia. The Bayan Obo deposit is one of the world's largest known rare earth deposits; its formation is closely related to deep magmatic activity, and the ore is rich in many rare earth elements. Beyond China, Australia's Mount Weld, the Mountain Pass Mine in the United States, and the Amazon Basin in Brazil also hold significant rare earth reserves (U.S. Geological Survey, 2023). Most of these deposits were formed by complex geological processes such as magmatic activity, hydrothermal action, and weathering and leaching. These factors all contribute to the dispersion of rare earth elements in the geological environment, making mining and extraction more difficult.

To make this difficult situation worse, rare earth minerals often occur in association with the radioactive elements thorium (Th) and uranium (U). This can lead to radioactive contamination during mining, posing a potential threat to the environment and human health (Jordens et al., 2013). In recent years, with growing global demand for rare earths, countries have increased their exploration and development of rare earth resources. However, the mining of rare earth resources is not just a technical issue; it also involves environmental protection, social ethics, and international relations.
How to achieve the efficient use of rare earth resources while protecting the ecological environment has become an important challenge the world is facing.

Sub-theme 2: Mining technology – innovation from pollution to environmental protection

REEs play an important role in the high-tech industry, but traditional extraction methods often use toxic organic solvents, which pose environmental and health problems (Neves, 2022). Traditional mining techniques for rare earths mainly include open-pit mining and acid leaching. Although open-pit mining can quickly obtain large amounts of ore, it often destroys large areas of land, resulting in the loss of vegetation, soil erosion, and ecosystem damage. In addition, rare earth deposits often contain radioactive elements such as thorium and uranium, which can easily be released into the environment during the mining process, causing radioactive pollution. Acid leaching, on the other hand, dissolves rare earth elements out of the ore using strong acids such as sulphuric acid and hydrochloric acid. However, this process produces a large amount of toxic waste in both liquid and gas forms, which pollutes the soil, water, and air (Vanth et al., 2020). For example, the complex geochemical characteristics of the Bayan Obo deposit make the radioactive elements released during mining a great threat to the environment and the health of surrounding residents.

Faced with the serious environmental problems caused by traditional mining methods, scientists are constantly exploring more environmentally friendly and efficient extraction technologies for rare earth elements. Bioleaching is an environmentally friendly mining method that has received a lot of attention in recent years. This technology uses acidophilic bacteria (such as iron- and sulfur-oxidizing bacteria) to decompose ores and release the rare earth elements they contain.
These bacteria have been genetically modified and optimized for adaptation, and are already able to work efficiently at high metal concentrations. They show great potential, especially for processing material such as electronic waste, where they can greatly reduce environmental pollution (Anaya-Garzon et al., 2021).

Another important environmentally friendly technology is ionic liquid extraction. Traditional solvent extraction methods often require high temperatures and pressures and the use of large amounts of organic solvents, which are energy-intensive and pollute the environment. Ionic liquids are a new type of environmentally friendly solvent with extremely low volatility and high thermal stability. Research has found that using the ionic liquid [C4mim][NTf2] to extract rare earth elements can significantly improve separation efficiency, by 30% over traditional methods, while greatly reducing the impact on the environment. Despite the enormous potential of ionic liquids, however, industrial applications still face challenges such as cost, recovery, and ecotoxicity (Liu et al., 2012).

In addition to innovations in mining technology, rare earth recycling is an important route to sustainable development. The concept of 'urban mining' has emerged: reducing dependence on raw mineral resources by recycling and reusing the rare earth elements in waste electronic equipment. Dowa Holdings in Japan extracts about 200 tons of rare earth elements from electronic waste such as discarded mobile phones, computers, and batteries every year, successfully reducing import demand by 10% (Harper et al., 2020). This circular economy model not only reduces the waste of rare earth resources but also significantly reduces the risk of environmental pollution, providing new ideas for the sustainable use of global rare earth resources.

Environmental justice is also an important issue that cannot be ignored in the process of achieving environmentally friendly mining. Taking the Cree of the Eeyou Istchee region in Canada as an example, they have cooperated with mining companies to participate in resource development while respecting the land and culture. For example, the use of drones to map mining areas has replaced traditional large-scale blasting, greatly reducing ecological damage. At the same time, tailings ponds are located away from water sources to avoid water pollution. These practices not only protect the local natural environment but also safeguard the cultural and economic interests of indigenous peoples, fully demonstrating the positive role of scientific and technological progress in reducing social and environmental conflicts (Vanthuyne and Gauthier, 2022).

Sub-theme 3: Geopolitics and military applications – the scramble for the new oil

The strategic importance of rare earth resources extends far beyond the technological and economic spheres. In international geopolitics, rare earths are considered the 'new oil', and the stability of their supply chain is directly related to national security. In 2010, China implemented rare earth export quota restrictions and industrial restructuring policies, aiming to shift rare earth resources from primary product exports to high value-added downstream industries such as the manufacture of permanent magnets and new energy equipment. This policy immediately attracted global attention and concern, especially from major rare earth consumers such as the EU, the United States, and Japan, who became aware of the potential risks of over-reliance on a single supplier (Wübbeke, 2013). To meet the growing future demand for critical rare earth elements, it is necessary to diversify supply sources and reduce dependence on a single country or region.
This requires global cooperation and investment to ensure the stability and security of the rare earth element supply chain. The EU has formulated a series of strategies to ensure the security and stability of the rare earth supply chain. The European Commission has made it clear that rare earth elements are of irreplaceable importance to Europe's economic development and strategic security. To reduce its dependence on rare earth suppliers such as China, the EU has proposed measures such as strengthening cooperation with other countries rich in rare earth resources, promoting research and development of alternative materials, and strengthening the circular economy and resource recovery. By diversifying the supply chain and increasing resource recovery rates, the EU hopes to achieve autonomy and control over rare earth supplies (von der Leyen, 2022).

The United States is not far behind, actively taking measures to ensure the security of its rare earth supply. On the one hand, the United States has increased exploration and development of domestic rare earth mineral resources, resumed production in the Mountain Pass mining district, and invested heavily in technology research and development. On the other hand, the United States has established strategic cooperation with allied countries such as Australia and Canada to ensure a stable supply of rare earth resources. This tendency towards resource nationalism is triggering a new round of competition for rare earth resources worldwide.

Conclusion

The importance of rare earth elements as key materials that drive modern technology is self-evident. However, the mining, utilization, and management of rare earth resources affects not only technological and economic development but also environmental protection, social justice, and international politics.
As citizens, we can contribute to the sustainable development of rare earth resources in a variety of ways: by supporting the government in formulating stricter environmental protection regulations for mines, choosing products that promise to recycle rare earth elements, and joining organizations that monitor companies and advocate for regulatory transparency. Only by combining public awareness of environmental protection with green technological innovation can the fair utilization and sustainable development of rare earth resources be achieved, creating a better tomorrow for our technological future and the sustainable ecology of the planet.


[SOLVED] Capturing Love - Portfolio package details and testimonials

Topic: The business is Wedding Photography; the business name is Eternal Moments Photography; and the topic is Capturing Love - Portfolio, package details, and testimonials.

Design and develop at least a five-page website about the business you have selected. You are given creative freedom to choose the pages that you feel are appropriate for your business, with only a few specific requirements. Suggestions for additional pages may include About Us, Services/Products, Testimonials, Blog, etc.

Content
● Give appropriate filenames, titles, headings (and subheadings if required) related to their content. Use at least three pieces of media (image/video/audio).
● There should be one media page which showcases some photos/videos related to your business.
● The media page must have some interactive visual features, such as thumbnail images that display larger versions when clicked.

Header
● To identify the website as your dedicated website, provide a relevant heading/title banner and/or image/logo.

Navigation
● There should be a clear, intuitive and consistent navigation structure on the website.

General Requirements
* You are not allowed to use any existing templates or frameworks such as Bootstrap etc.
* You are expected to create your own website from scratch using HTML, CSS and JavaScript. If you have used WordPress, then you need to inform me in advance.
* All text should follow the rules of writing for the web.
* Images, sound, and other media file sizes should be optimized for download and display.
* Well-designed, unique and creative websites will be awarded appropriately.

HTML Requirements
● All your HTML files must use HTML5 syntax.
● The structure of your website should be built using HTML5 tags styled with CSS where applicable.

Key Functionality: You need to include the following functionalities in your website.
1. User Authentication:
a. Users can register, log in, and log out.
b. Implement different user roles with varying access levels (e.g. admin, member, or normal).
c. Securely store and manage passwords in the database.
2. Database Setup:
a. Design a database schema, create appropriate tables, and store the relevant data required to demonstrate web application functionality.
b. The database should have appropriate data types and relationships.
c. Use PHP to connect to the MySQL database and perform CRUD operations.
d. Implement at least two forms for data input with proper validation.
3. Dynamic Web Design:
a. Utilize HTML5, CSS3, and web media for a visually appealing and user-friendly design.
b. Implement error handling and validation for user input in web forms.
c. Ensure responsiveness and cross-browser compatibility.

You can choose the backend functionalities to implement, but they must meet all the outlined requirements and align with your chosen business. For example, an online bookstore could include user authentication, product inventory, and order management.


[SOLVED] EN4062/ENT794 Advanced Robotics Coursework 1

EN4062/ENT794 Advanced Robotics - Coursework 1

Submission: Please submit on Learning Central (Turnitin) electronically before the deadline.
Deadline: Thursday, 10/04/2025 at 5:00 PM (There will be NO Extension).

• This element of coursework constitutes 10% of the complete module assessment.
• Page limit: No more than 4-5 pages in a single-column page format (maximum 600-700 words), including references and appendix (if any).
• Use of Generative AI (Gen AI)
o For this module, Gen AI usage is permitted at the AMBER level with strict limitations (refer to the "AMBER Generative AI Guidance for Students – Individual Work" on Learning Central).
o The focus of this coursework is on creativity, critical problem-solving, and originality. Your design should be novel, interesting, and different, and you must defend your ideas with strong reasoning.
• Do NOT use Gen AI to generate ideas directly; your innovation and unique approach are key to higher marks.
• Allowed Uses:
o Understanding coursework instructions.
o Improving the structure, style, and tone of your own writing.
o Summarizing published work (without violating copyright).
• Strictly Prohibited:
o Using AI to generate ideas, design concepts, or report content.
o No-user-input generation of texts/materials.
• Academic Integrity: You must attach the "Academic Integrity Declaration" as a coversheet for your submission. Misuse of Gen AI may lead to academic misconduct penalties.

Design a conceptual autonomous single-legged hopping and rolling robot with the ability to move in 3D space. The robot should be agile and capable of fast manoeuvres on the ground while balancing itself upright. Rolling means the robot is not only able to hop/jump [1] but also roll/rotate on the ground [2], so the contact area with the ground is important. Discuss the details of the required hardware components and their locomotion process. In your report, you need to cover the following items:

i.
What hardware components (e.g. sensors, actuators, electrical equipment, etc.) are required to build this robot? Please show a sketch/drawing/design of the robot with a description.

ii. What is the role of each component? Explain how each supports the robot's locomotion with a diagram. Explain in detail the locomotion process (stages) and the required controllers for the robot to move from one place to another.

iii. Assume that the robot needs to move from point A (on the floor) to the target point B (on a desk with a 50 cm height) in the same room. The room is equipped with an accurate motion-tracking optical system [3], which covers the whole area of the room and can track and return the 3D position of small markers. The markers can be attached to the robot itself, point A, point B, and wherever needed on the trajectory. The tracking system can communicate data with the robot in real time. Explain the required steps to enable the robot to autonomously reach the target point moving on an arbitrary trajectory. The trajectory can depend on both the rolling and hopping motions.

iv. Discuss the potential applications of hopping, rolling, and flying robots compared to conventional robots. This analysis should consider their limitations in navigating challenging environments like compact jungles using LiDAR, GPS, or other technologies when the operator is not near the robot. Support your discussion with insights from the Week 7 session on Robot Navigation by Dr. Raphael Grech and Dr. Seyed Amir Tafrishi.

Note: Check the internet to find related robot mechanisms. The more creative your design is, the higher your mark (up to +10 extra marks) might be. However, you should defend your idea with good reasoning, clear presentation, and supportive statements (referencing).

References

[1] Raibert MH, Brown Jr HB, Chepponis M. Experiments in balance with a 3D one-legged hopping machine.
The International Journal of Robotics Research. 1984 Jun;3(2):75-92.
[2] Armour, Rhodri H., and Julian F. V. Vincent. "Rolling in nature and robotics: A review." Journal of Bionic Engineering 3.4 (2006): 195-208.
[3] Example: https://www.optitrack.com/


[SOLVED] PRACTICUM I

PRACTICUM I

Nota Bene

Read the instructions in full at least twice from beginning to end before you get started. Complete the practicum individually; while you may discuss your approach and share results, you may not share code or collaborate on the code. All of the work must be your own.

The due date is March 11 at 11:59pm ET. Late submissions are accepted (with the usual per-day late penalty) until March 16 at 11:59pm ET. No submissions are accepted after this date.

Work on the practicum for at least three hours every day and use the time during the week prior to the practicum to start working on it, especially the configuration of the MySQL Server and the loading of the data. Do not wait to get started. The average time to complete the practicum is 18-20 hours. Seek help early. Submit often and as soon as you have enough code that works. We will only grade the last submission. Check your submission before you submit and after.

A gentle reminder that the average of both practicums must be above 70% to pass this course, so be sure to complete on time and seek help right away. Do not procrastinate -- things that appear simple often take more time than expected and, of course, programming is fraught with potholes on the road to success. So plan accordingly. Do not wait until shortly before it is due.

Learning Objectives

In this practicum you will learn how to:
· configure and connect to a cloud-hosted MySQL from R
· implement a normalized relational schema for an existing data set
· load data from CSV files into a relational database through R
· perform simple analytics with SQL in R using literate programming

Overview

In this practicum you will build a database to analyze restaurant visits, revenue, and sales transactions.
For an existing data set (generated synthetically), you will build a logical data model and a relational schema, realize the relational schema in a MySQL/MariaDB relational database, load data into the database, execute SQL queries, and finally perform some simple analysis of the data.

More specifically, you will create a (normalized) relational database with a structure that can accommodate data contained in one (or more) CSV files that may be the result of a data dump from an external party, another database, or generated in some other way. This relational database will be a MySQL (rather than a SQLite) database that is hosted on a cloud server so it becomes accessible from anywhere. While we recommend you use Aiven to host the MySQL database, you may use any cloud-based MySQL database server. You will then load the data from the CSV files into the various (normalized) tables in the cloud-based MySQL database. After that, the CSV files are no longer used and all of your work and queries will be done against the database.

In practice, it is more likely that you would write a program or SQL script to create the database tables, a different program to load the data from the CSV files into the database, and then write other programs to use the data in the database. In this practicum we are simulating this pipeline by writing multiple R programs as well as R Notebooks.

Read all practicum instructions first; the questions are not necessarily sequential, as the process is iterative. So, if a later question requires fields you didn't build in when you first created the table, then go back to the previous code, update the code to create the correct table, and then re-build the table (drop it, then create it again). We will grade the database and not the sequence of its creation.

Use the provided time estimates for each task to time-box your work. Seek assistance if you spend more time than specified on a task -- you are likely not solving the problem correctly.
A key objective is to learn how to look things up, how to navigate complex problems, and how to identify and resolve programming errors. Read the Hints & Tips section frequently and before posting questions.

Key Resources & Prerequisite Lessons

06.103 ┆ Working with Vectors and Data Frames in R (http://artificium.us/lessons/06.r/l-6-103-vecs-and-dfs/l-6-103.html)
06.106 ┆ Import Data into R from CSV, TSV, and Excel Files (http://artificium.us/lessons/06.r/l-6-106-load-csv-tsv-excel-files/l-6-106.html)
06.108 ┆ Loops and Iteration in R (http://artificium.us/lessons/06.r/l-6-108-loops-iteration-in-r/l-6-108.html)
06.112 ┆ Basics of Text & String Processing in R (http://artificium.us/lessons/06.r/l-6-112-text-proc/l-6-112.html)
06.121 ┆ Writing Functions in R (http://artificium.us/lessons/06.r/l-6-121-funcs-in-r/l-6-121.html)
06.204 ┆ Literate Programming with R Notebooks (http://artificium.us/lessons/06.r/l-6-204-r-notebooks/l-6-204.html)
06.191 ┆ Debugging R Code (http://artificium.us/lessons/06.r/l-6-191-debugging/l-6-191.html)
06.301 ┆ Working with Databases in R (http://artificium.us/lessons/06.r/l-6-681-key-value-db-redis-from-r/l-6-301.html)
06.302 ┆ Bulk Load Data from CSV into Database in R (http://artificium.us/lessons/06.r/l-6-302-bulkload-data-into-db/l-6-302.html)
06.306 ┆ Dates in R and SQLite (http://artificium.us/lessons/06.r/l-6-306-dates-in-r-and-sql/l-6-306.html)
70.907 ┆ Stored Procedures in MySQL (http://artificium.us/lessons/70.sql/l-70-907-stored-procs-mysql/l-70-907.html)

Preliminary Tasks & Requirements

Before you start with the tasks below, read the Hints and Tips section below and go back to it often when you encounter problems. Most problems are addressed in that section. Consult the list before contacting us for help, as you'll be able to resolve the issue more quickly.

1. Create a new project in R Studio named "CS5200.Practicum-I.LastNameF" where LastName is your last name and F is your first initial, e.g., "CS5200.Practicum-I.SmithJ".

2.
Download the CSV file restaurant-visits-139874.csv (https://s3.us-east-2.amazonaws.com/artificium.us/datasets/restaurant-visits-139874.csv) and save it locally to your R Project folder. For development, you may use the local file, but your final code submission must load the data from the URL. You may wish to create a new data file that is a subset of the full data that you use for development so loading takes less time, or only load the first 50 or 100 rows. This is a common strategy in practice.

To download the file, use the right mouse button and choose "Save Link As..." or a similar menu choice in your browser. Do not click on the link, as that may cause the browser to try to display the file, which is unlikely to work. The data in the CSV has been artificially generated, so the data is synthetic. If you are interested in the creation process, see 3.981 -- Synthetic Engineering of a Dataset on Restaurant Visits (http://artificium.us/lessons/03.ml/l-3-981-wk-exmpl-synth-restaurant-visit-data/l-3-981.html).

3. Inspect the CSV data file that you downloaded so you are familiar with its columns, data types, and overall structure. You may wish to create a "sandbox" R Notebook in which to do your inspection; you do not need to submit this notebook.

4. In R Notebooks, all R and SQL code blocks must be named, as shown in the examples below. This is necessary so that you can reference your code blocks in the self-evaluation rubric to be filled out at the end of the practicum. The names of code blocks must be unique.

```{r nameOfRCodeBlock, eval = T, warning = F}
```{sql nameOfSQLCodeBlock, connection = xDB}

You may add any additional block parameters as needed. Code blocks should be echoed (displayed) in your knitted result documents as instructed.

5. Use functions to structure your code so that it becomes more readable and easier to develop and debug. Use headers to segment your notebook and add explanations of what each code block does.
Follow common coding practices: format your code so it is readable, and use functions to break down complex code. Echo all your code (although this is not always the right thing to do in practice). Do not print large data frames or query results -- print only a few rows. The order of the questions may not necessarily suit the structure of your code, so you can answer questions out of order.

Part A / Configure Cloud Database

All of your data must be stored in a cloud database, so the database is accessible to more than just you. We will use a MySQL database hosted on a cloud provider of your choice, although we recommend Aiven. While Aiven has some restrictions, we found it to be an easy-to-configure and easy-to-use database cloud provider.

1. (0 pts / 2.0 hrs) Set up and configure a MySQL Server database on a cloud host. There are several options and you may choose any of the ones below or any other of your choosing; we recommend Aiven:
a. db4free.net (http://db4free.net/)
b. freemysqlhosting.co.uk (http://www.freemysqlhosting.co.uk/)
c. Aiven (http://aiven.io)
d. Google Cloud or AWS RDS

Note that you may use SQLite instead of MySQL, but you will not get credit for this question nor for the question below that asks you to create a stored procedure, as those are not supported in SQLite. If you have difficulty setting up a cloud database, you should use SQLite and continue, and resolve the cloud database setup once you've completed all steps of the practicum that are possible with SQLite; that way you do not lose any time.

CAUTION: Both Aiven and db4free do not allow the use of dbWriteTable(); this approach to bulk-loading does not scale, so it should be avoided anyway. So, you must use INSERT SQL statements to write the data into Aiven and db4free. In addition, for Aiven, all text fields must be enclosed in single rather than double quotes.
Both Amazon AWS and Google Cloud offer free credits, but be sure to monitor usage so you do not exceed the free credit, or be prepared to pay (they can be a bit costly, so be careful to suspend your database when not in use and delete it after the practicum has been graded). You may collaborate with others to set up a cloud MySQL installation for Part A, but not for the remainder of the practicum. A cloud MySQL installation is necessary for us to run your code -- we cannot connect to a local installation of MySQL, although you may use one for testing and development. If you cannot set up and connect to a cloud-hosted MySQL, contact us and we will provide you with an Aiven database. Related Lesson: 6.304 ┆ Configure and Connect to Cloud MySQL from R (http://artificium.us/lessons/06.r/l-6-304-cloudMySQL-from-r/l-6-304.html) 2. (0 pts / 20 min) Write a small "sandbox" R program that connects to your newly created MySQL database to test the connection. You can reuse this connection code in Part C below. You do not need to submit this test program. If you have trouble connecting to your cloud MySQL, be sure to disable any firewall or anti-virus software that may be blocking port 3306 -- or add port 3306 to the list of open ports in your firewall software configuration. Part B / Design Normalized Database The data in the CSV file contains information about visits to restaurants owned by a restaurant management group. The file is the result of a "data dump" from a third-party system that the restaurant group wants to replace with an internally built database and web application. The first step is to design a relational database schema that can hold the data. The schema must be normalized to at least 3NF. Follow the steps below in your database design and record the steps in a document (you may use any document-writing tool of your choice). Be sure to add your name, course name, semester, and the exact questions you are answering. Be professional in your preparation.
Create an R Notebook named "designDBSchema.PractI.LastNameF.Rmd", where LastName is your last name and F is the first letter of your first name, in which to write your answers for your design; use embedded LaTeX for the equations and functional dependencies. When knitted, we should see only your design and normalization approach, but no code (as there is no code in this part). 1. (5 pts / 1 hr) For the relation represented by all of the columns in the CSV file, define and list all functional dependencies. 2. (5 pts / 1 hr) Using the functional dependencies and the rules of normalization, decompose the relation from the CSV into several relations that all satisfy 3NF; give the relations reasonable names. 3. (5 pts / 1 hr) For the relations resulting from the normalization, create an ERD in IE (Crow's Foot) notation. Add all attributes, attribute names, primary and foreign keys, data types, and entity descriptions. You may use any modeling tool of your choosing, e.g., LucidChart or mermaid. Embed the ERD into your document either as an embedded object, a rendered mermaid graphic, or an externally hosted image. If you render to HTML, you must host any image on a server; if you knit to PDF, the image becomes embedded in the document. Part C / Realize Database 1. (0 pts / 10 min) Create an R Script (R Program) named "createDB.PractI.LastNameF.R", where LastName is your last name and F is the first letter of your first name, e.g., "createDB.PractI.GilesM.R". Add a header comment containing the name of the program, your name, and the current semester. 2. (0 pts / 10 min) In your R program, connect to your cloud-hosted MySQL database. If you have difficulty connecting to or setting up MySQL, use SQLite and proceed. You can always come back to this question and change your configuration so that you connect to MySQL. This is a benefit of relational databases: you can switch between database engines without major changes to your code. 3.
(10 pts / 1.5 hrs) Using R, realize the 3NF-normalized database schema designed in Part B. Add appropriate constraints and primary and foreign key definitions. For categorical fields that are not Boolean, add either lookup tables or constraints. Add appropriate default values for each column, and allow NULL values only when appropriate. Be sure to create tables only if they do not already exist, and to disconnect from the database after creating the schema. Part D / Delete Database 1. (0 pts / 10 min) Create an R Script (R Program) named "deleteDB.PractI.LastNameF.R", where LastName is your last name and F is the first letter of your first name, e.g., "deleteDB.PractI.GilesM.R". Add a header comment containing the name of the program, your name, and the current semester. 2. (4 pts / 30 min) In your R program, connect to your cloud-hosted MySQL database and delete (DROP) all tables if they exist. You can use this program to "re-initialize" your database before you create the schema again using the program from Part C. Part E / Populate Database 1. (0 pts / 10 min) Create an R Script (R Program) named "loadDB.PractI.LastNameF.R", where LastName is your last name and F is the first letter of your first name, e.g., "loadDB.PractI.GilesM.R". Add a header comment containing the name of the program, your name, and the current semester. 2. (1 pt / 10 min) Load the data from its CSV file into a dataframe called df.orig. For now, you may load it locally from your project folder, or load a subset of the data locally, but eventually the data must be loaded from the URL, so be sure to change this before submission. 3. (30 pts / 6 hrs) Using the table definitions from Part C and the data in the dataframe df.orig from above, write R code to populate the tables with the data from the appropriate columns. Load all data. Use appropriate default values for missing values.
Note that some missing values have a "sentinel value", e.g., 99 is used when the party size is not known, and "0000-00-00" is used for some missing dates. There are other such values; identify them and handle them in whatever way you believe is reasonable and will work. After loading the data, be sure to disconnect from the database. All data manipulation and importing work must occur in your R script. You may not modify the original data outside of R -- that would not be reproducible work. It may be helpful to create a subset of the data for development and testing, as the full file is quite large and takes time to load. Part F / Test Data Loading Process 1. (0 pts / 10 min) Create an R Script named "testDBLoading.PractI.LastNameF.R", where LastName is your last name and F is the first letter of your first name, e.g., "testDBLoading.PractI.GilesM.R". Add a header comment containing the name of the program, your name, and the current semester. 2. (5 pts / 1.5 hrs) In this R program, load the original CSV into a dataframe and also connect to your database. Then perform the following tests, with appropriate messages indicating whether the results are as expected: count the number of unique restaurants, customers, servers, and visits in the CSV and compare these counts against the total number of rows in the appropriate tables; sum up the total amounts spent on food, alcohol, and tips in the CSV and compare them against the same sums in the database. Part G / Use Data for Reporting & Analytics 1. (0 pts / 10 min) Create an R Notebook (.Rmd file) named "RevenueReport.PractI.LastNameF.Rmd", where LastName is your last name and F is the first letter of your first name, e.g., "RevenueReport.PractI.GilesM.Rmd". Use "Analyze Sales" as the title parameter, "CS5200 Practicum I" as the subtitle, your name as the author, and the current semester as the date. Write an R code chunk to connect to your database; do NOT load the CSV data.
Do not echo any code in the knitted document for this or any of the questions below. 2. (5 pts / 1.5 hrs) Add a level-two (##) header with the title "Analysis by Restaurant". Create a SQL query (using either a SQL or an R code block) against your database to find the total number of visits, the total number of unique customers, the total number of customers in the loyalty program, and the total spent on food and alcohol (but not tips) for each restaurant. Display the result in a nicely formatted table using the kableExtra package. Use appropriate headers for the table; its look is up to you. The restaurants should be the rows of the table. 3. (5 pts / 1 hr) Add a level-two (##) header with the title "Analysis by Year". Create a SQL query against your database to find the total revenue (food and alcohol sold, but not tips), the average amount spent per party, and the average party size, by year (for all restaurants, i.e., you do not have to slice per restaurant). Put the years in the columns. Use code to find the years; do not hard-code them, so that the document adjusts to new data if re-knitted. Format the result with appropriate table headers using the kableExtra package. 4. (5 pts / 2 hrs) Add a level-two (##) header with the title "Trend by Year". Using the dataframe from the prior question, build a line chart that plots year along the x-axis versus total revenue. Adorn the graph with appropriate axis labels, titles, legend, data labels, etc. The standard R plot() function is sufficient; you do not need packages such as ggplot2 or plotly -- although you may use them, of course. This tutorial (https://www.statmethods.net/graphs/scatterplot.html) may help you get started. 5. (5 pts / 30 min) Knit the notebook to a PDF. N.B. If you cannot knit to PDF after trying to knit on posit.cloud, you may submit an HTML document, but you will lose 50% of the points for this question. If you knit to HTML, you need to host your ERD as an image on a server so we can view it.
Part H / Add Business Logic 1. (0 pts / 10 min) Create an R Script named "configBusinessLogic.PractI.LastNameF.R", where LastName is your last name and F is the first letter of your first name, e.g., "configBusinessLogic.PractI.GilesM.R". Add a header comment containing the name of the program, your name, and the current semester. 2. (5 pts / 2 hrs) Create a stored procedure in MySQL (note that if you used SQLite, you cannot complete this step) that adds a new visit to the database. Name the stored procedure `storeVisit`. The stored procedure should take arguments for the restaurant, customer, date of visit, party size, food and alcohol bill, and any other information required. You may assume that the server, customer, and restaurant already exist in the appropriate tables (so you can pass their PK values). Show that the stored procedure works. 3. (5 pts / 2 hrs) Create a second stored procedure in MySQL (again, not possible in SQLite) that adds a new visit to the database. Name this stored procedure `storeNewVisit`. It should take the same arguments as `storeVisit`, but here you should not assume that the server, customer, and restaurant already exist in the appropriate tables; they may need to be added if they do not already exist. Show that the stored procedure works. Submission Details Before submitting all your R programs, R Notebook, and PDF (or HTML), complete the self-evaluation rubric (a separate "assignment"; see Canvas). 1. All programs must run from start to end, and the notebook must knit to PDF, so be sure to test carefully, load any required libraries, and ensure that the code runs sequentially from start to end. Clean your environment and knit the notebook to ensure there are no out-of-order dependencies and that the notebook will knit for us. 2.
Your code has to run, obviously, but it also has to run somewhat efficiently... if everyone else's code runs in 10-30 minutes but yours takes several hours, then that is clearly due to poor programming and not to the inherent complexity of the problem. We expect you to follow common strategies for writing efficient code, such as factoring invariants out of loops, not calling functions repeatedly, pre-allocating memory, not copying objects needlessly, not calling expensive functions when simpler ones will do (e.g., call substring() instead of using regular expressions), using which() when searching, and using sqldf only when necessary. These practices are not specific to R; there are R-specific performance issues as well, but those are less likely to be a concern here. 3. Create professionally developed code that is well documented and commented, and label all chunks in notebooks.
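The efficiency advice above is stated for R, but the core ideas (pre-allocate output, prefer vectorized operations, keep invariant work out of loops) carry over to any language. A small NumPy illustration of the principle, offered only as a sketch:

```python
import numpy as np

n = 100_000

# Pre-allocate the output instead of growing a container inside the loop.
out = np.empty(n)
for i in range(n):
    out[i] = i * 2.0

# The vectorized equivalent does the same work without an explicit loop
# and is typically far faster.
out_vec = np.arange(n) * 2.0

print(np.array_equal(out, out_vec))  # True
```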


[SOLVED] COMP4141 25T1 Homework 4

COMP4141 25T1 Homework 4 March 11, 2025 Task (pass). Give a context-free grammar for the language {w#x : w ∈ {0, 1}* and x = u · rev(w) · v for some u, v ∈ {0, 1}*} over the alphabet Σ = {0, 1, #}. Here rev(w) is the reverse of the word w. Explain your answer by describing the role that each non-terminal plays in this grammar. Task (pass). Construct a push-down automaton for the language {w#x : w ∈ {0, 1}* and x = u · rev(w) · v for some u, v ∈ {0, 1}*} over the alphabet Σ = {0, 1, #}. Here rev(w) is the reverse of the word w. Explain your answer by describing the role that each state of the automaton plays, and the behaviour of the stack as the automaton processes a word. Task (credit). Suppose that L1 and L2 are regular languages over alphabet Σ. Show that the following language is context-free: L = {xy : x ∈ L1 and y ∈ L2 and |x| = |y|} Hint: use a push-down automaton. Task (distinction). Let L1 be a context-free language and L2 a regular language, both over an alphabet Σ. Show that the following language is context-free: L = {w : w ∈ L1 and w has a substring in L2} (We say u is a substring of w if w = xuy for some words x, y.) Task (hd). Suppose we generalise pushdown automata to machines that have two stacks. Such a generalised machine has transitions of the form q1, a, s1, s2 → t1, t2, q2 where q1, q2 are in the (finite) set of states of the machine, a is in the input alphabet Σ or a = ε, and s1, s2, t1, t2 ∈ Γ ∪ {ε}, where Γ is the stack alphabet. Such a transition means that from state q1, on reading input a, with s1 and s2 at the top of the two stacks, the machine replaces s1 and s2 at the tops of the stacks by t1 and t2, respectively, and moves to state q2. Like PDAs, these machines are nondeterministic, so there may be multiple transitions with the same left-hand side q1, a, s1, s2. Show that such two-stack pushdown automata can accept a language that is not context-free.
(You may use one of the languages shown in lectures to be non-context free.)
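For the first (pass) task, one possible grammar over Σ = {0, 1, #}, offered as a sketch:

```latex
S \to X\,V \qquad
X \to 0X0 \mid 1X1 \mid \#\,U \qquad
U \to 0U \mid 1U \mid \varepsilon \qquad
V \to 0V \mid 1V \mid \varepsilon
```

Here X pairs each symbol of w with the matching symbol of rev(w) from the outside in, U generates the arbitrary block u sitting between # and rev(w), and V generates the arbitrary suffix v; so X derives exactly the words of the form w # u rev(w), and S appends v on the right.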


[SOLVED] Principles of Supply Chain Management Assignment 2

Module Title: Principles of Supply Chain Management. Assignment Mode: Group Project. Word Count Limit: 2000 words (+/- 10%). Citation Format: APA. Marks: 100 marks. Assignment Brief Choose an ice-cream shop as your assignment topic. It can be an existing small business or a start-up. Refrain from using companies that you have no association with or that you merely found on the Internet. Ensure that you know the owner(s) or someone in the organization who can answer any questions posed to them. Your task in this assignment is to evaluate the supply chain of the chosen company, applying the knowledge that you have learnt in this subject, to determine whether the company's supply chain, as implemented, is achieving its objectives, i.e., being responsive or efficient in meeting its customers' needs. The assessment of the company's supply chain must meet the following requirements: 1. Introduction (300 words). Brief background of the company. Explain how the company competes in its industry by looking at the competition, the players in the industry, the regulations that govern them, customer expectations in the supply chain, and so on, which would justify the approach the company is taking. The statements that you make must be supported by evidence from credible and independent references. In addition, base your assessment on evidence obtained from the owner(s) of the company themselves, or on publicly available information. 2. The Company's Inputs (500 words). Look at the processes involved in the purchase of goods and services for the company, whether to meet planned or actual demand. Your emphasis is on how the company selects its suppliers, establishes policies to facilitate these processes, schedules the receipt of deliveries, and consequently assesses the suppliers' performance.
Explain how the input processes are set up in the company, and how they support the company's stated objective to its customers, using concepts learned in this module, corroborated by observations and interviews from the company. 3. The Company's Operations (500 words). Explore and explain the processes that transform the purchased inputs from (2) into a finished product or service to meet demand. The focus here is to discuss the scheduling of production, the measurement of its performance, and the management of inventory. Provide evidence to support your views. Explain how the company's operations do or do not address the challenges of balancing supply and demand in the supply chain. Based on the concepts learned in the module, explain the ways that the company could address this primary issue and thereby satisfy the needs of both the company's customers and its suppliers. 4. The Company's Outputs (500 words). This concerns the company's processes that provide the finished goods and services to its customers. Identify and explain the workings of the company's order management, warehouse management, and transportation management. Make a brief assessment as to whether the company and its supply chain are achieving the objective of being responsive or efficient in meeting the needs of its customers. This can be done by evaluating the business performance of the company, to assess whether revenues are increasing, decreasing, or stagnant. 5. Conclusion (200 words). Summarize the key insights. Provide recommendations and practical implications for the case company. Instructions on Submission 1. Referencing. All statements of fact or other sources quoted in the essay, including any diagrams, must have in-text references, with a full reference list provided at the end of the assignment, according to the APA system of referencing.
You are required to fully reference a MINIMUM of 10 sources (e.g., books; journal articles from the full-text databases; current-affairs magazines; newspapers, etc.). The use of the WIKIPEDIA online encyclopedia is NOT allowed. 2. Formatting. Write your name, ID number, module title, and word count clearly on the cover page. Your assignment should be A4, word-processed, with 1.5 line spacing and 12-point Arial font. Include a table of contents with page numbering. The word count (+/- 10%) excludes the cover page, table of contents, tables or illustrations, and references. 3. Policies. The penalties for plagiarism and collusion are governed by the Academic Policy of KHEA; detailed policy information can be found in the Student Handbook. The assignment must be submitted online (LMS) on the specified due date, via Turnitin. Any late submission will have marks deducted in accordance with KHEA's late-submission policy.


[SOLVED] Moderating Role of Group Member Relationship Quality

PSY Research Topic and Abstract Research Title The Impact of Motivational and Emotional Regulation on Group Task Satisfaction Among University Students: the Moderating Role of Group Member Relationship Quality Research Abstract (around 200 words) Abstract: In educational settings, a group or team is characterized as two or more individuals interacting with each other in one or more sessions to accomplish shared goals (Joo, 2015). With the prevalence of collaborative learning environments, understanding the psychological factors that influence group dynamics has become increasingly important. This study investigates the impact of motivation regulation and emotional regulation on university students' satisfaction with group tasks, considering key variables such as group member relationship quality. Data will be collected from 200 students at University who are participating in group projects. The aim of this study is to validate and extend current theories, filling gaps in previous studies. It seeks to provide practical insights for improving educational practices, enhancing the group work experience, and improving students' academic performance. Additionally, the study will suggest new directions for future research in this area. Keywords: motivational and emotional regulation; group task satisfaction; group member relationship quality Number of Variables: 5 Motivational Regulation Theoretical definition (Please include references following APA 7th) In daily life, motivational regulation is a challenge that individuals usually face (Steuer et al., 2019). It involves the process through which individuals activate, maintain, and enhance their motivation to achieve specific goals or sustain task engagement (Eckerlein et al., 2019). Operational definition (How will you measure/manipulate this variable?) Participants can report their general tendency for motivation regulation using the Brief Regulation of Motivation Scale (BRoMS; Kim et al., 2018).
Strong validity of the BRoMS within college student populations has been demonstrated in previous studies (Wolters et al., 2023). The scale consists of 8 items (α = 0.72), with all items rated on a 5-point Likert scale (Kim et al., 2023). Emotional Regulation Theoretical definition Emotional regulation involves both external and internal processes that monitor, assess, and modify emotional responses, particularly their intensity and duration (Hu & Liu, 2017). It serves as a mechanism for adjusting our behavior to fit the current environment and accomplish goals (Rosales et al., 2013). Operational definition Emotion regulation can be measured with the Emotion Regulation Questionnaire (Gross & John, 2003), a 10-item scale on which respondents rate their agreement from 1 (strongly disagree) to 7 (strongly agree). It measures respondents' tendency to regulate their emotions in two ways: (1) Cognitive Reappraisal (6 items, α = 0.79) and (2) Expressive Suppression (4 items, α = 0.73). Test-retest reliability across 3 months was .69 for both subscales. Group Task Satisfaction Theoretical definition Group task satisfaction refers to the collective equivalent of individual job satisfaction, reflecting the group's overall attitude toward the task and work environment (Mason & Griffin, 2005). This concept arises from the uniformity of individual job satisfaction among group members, shaped by common work conditions, social influence mechanisms, attraction-selection-attrition dynamics, and the emotional contagion that occurs within work groups (Mason & Griffin, 2002). Operational definition Group task satisfaction will be assessed using a modified version of the Group Task Satisfaction Scale.
For each item, participants will be asked to rate their level of agreement with the statement "in your team as a whole" on a 7-point scale, where 1 represents "strongly disagree" and 7 represents "strongly agree." The Cronbach's α coefficients for the scale exceeded 0.6, suggesting satisfactory internal consistency; furthermore, the three-factor model showed a good fit with the data (Mason & Griffin, 2005). Group Member Relationship Quality Theoretical definition Group member relationship quality refers to the overall level of relationships between group members, reflecting the health of their interactions; the degree of trust, mutual assistance, and support; and the emotional connection and quality of cooperation between them (Romá et al., 2023). Ural (2009) identified four key components of relationship quality: the sharing of information, the quality of communication, a long-term orientation toward the relationship, and overall satisfaction with the relationship. Operational definition Group member relationship quality will be measured using the Student-to-Student Relationship Scale from Kim (2021). The items use a five-point Likert scale with response options "Strongly Agree," "Agree," "Neither Agree nor Disagree," "Disagree," and "Strongly Disagree." These items have been widely adopted in previous studies that used Add Health data (e.g., Kim, 2020; Sutton et al., 2018). Additionally, we will adjust the items by changing the reference experience from "at school" to "in group work." Main hypotheses/objectives H1a: Motivational regulation positively affects group task satisfaction. H1b: The impact of motivational regulation is more pronounced when the quality of relationships among group members is high. H2a: Emotional regulation has a positive effect on group task satisfaction. H2b: The influence of emotional regulation is more significant when group member relationship quality is high.
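Moderation hypotheses like H1b and H2b are typically tested by adding a predictor-by-moderator interaction term to a regression of satisfaction on the predictors. A minimal NumPy sketch on simulated data; all coefficients and variable names here are invented for illustration, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # matches the planned sample of 200 students

x = rng.normal(size=n)   # e.g., motivational regulation (standardized)
m = rng.normal(size=n)   # moderator: group member relationship quality
# Simulated satisfaction with a true interaction effect of 0.4 (H1b-style).
y = 0.5 * x + 0.3 * m + 0.4 * x * m + rng.normal(scale=0.5, size=n)

# Design matrix: intercept, main effects, and the interaction term.
X = np.column_stack([np.ones(n), x, m, x * m])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# beta[3] estimates the interaction; a clearly nonzero estimate is the
# pattern H1b/H2b predict (in practice one would run a proper moderated
# regression with significance tests, e.g., in statsmodels or R).
print(beta[3])
```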
Literature gap of previous studies Most studies have focused on the factors influencing motivation regulation or emotional regulation (Eckerlein et al., 2019; Vilenskaya, 2020); few concentrate on their roles in group work. Additionally, while research has shown the importance of the quality of relationships among team members for task performance (Nasim & Iqbal, 2019), the role of relationship quality as a moderating variable in the mechanisms of motivation and emotional regulation has not been sufficiently explored. More importantly, existing studies tend to concentrate on workplace environments or other specific populations, while research on motivation and emotional regulation in group work among university students is relatively scarce. The unique challenges and dynamics faced by university students in group work require more attention in order to provide more instructive suggestions for educational practice. Significance of your study The study provides a comprehensive exploration of the relationships between motivation regulation, emotional regulation, group member relationship quality, and group work satisfaction, aiming to fill the gap in the existing literature and enrich the relevant theoretical framework. Through empirical research, the study will offer evidence on how motivation regulation and emotional regulation influence group work satisfaction, validating and expanding existing theories. The findings will provide practical recommendations for educators and curriculum designers to improve teaching methods for group work, enhance group work satisfaction, and ultimately improve learning outcomes. At the same time, by exploring the impact of group members' relationship quality on motivation and emotional regulation, the study will offer a theoretical basis for improving interpersonal relationships and team dynamics, promoting students' mental health and social adaptability.
Finally, this research will provide new insights and directions for future studies, encouraging further in-depth research on teamwork and student psychology. References Dao, P. (2021). Effects of task goal orientation on learner engagement in task performance. IRAL - International Review of Applied Linguistics in Language Teaching, 59(3), 315-334. https://doi.org/10.1515/iral-2018-0188 Eckerlein, N., Roth, A., Engelschalk, T., Steuer, G., Schmitz, B., & Dresel, M. (2019). The role of motivational regulation in exam preparation: Results from a standardized diary study. Frontiers in Psychology, 10, Article 81. https://doi.org/10.3389/fpsyg.2019.00081 Engelschalk, T., Steuer, G., & Dresel, M. (2016). Effectiveness of motivational regulation: Dependence on specific motivational problems. Learning and Individual Differences, 52, 72-78. https://doi.org/10.1016/j.lindif.2016.10.011 Gross, J. J., & John, O. P. (2003). Individual differences in two emotion regulation processes: Implications for affect, relationships, and well-being. Journal of Personality and Social Psychology, 85(2), 348-362. https://doi.org/10.1037/0022-3514.85.2.348 Hu, Y., Wang, Y., & Liu, A. (2017). The influence of mothers' emotional expressivity and class grouping on Chinese preschoolers' emotional regulation strategies. Journal of Child and Family Studies, 26(3), 824-832. https://doi.org/10.1007/s10826-016-0606-3 Kim, J. (2020). Gender differences in the educational penalty of delinquent behavior: Evidence from an analysis of siblings. Journal of Quantitative Criminology. Advance online publication. http://dx.doi.org/10.1007/s10940-020-09450-0 Kim, J. (2021). The quality of social relationships in schools and adult health: Differential effects of student–student versus student–teacher relationships. School Psychology, 36(1), 6-16. https://doi.org/10.1037/spq0000373 Kim, Y. E., Brady, A. C., & Wolters, C. A. (2018). Development and validation of the brief regulation of motivation scale.
Learning and Individual Differences, 67, 259-265. Kim, Y. E., Zepeda, C. D., Martin, R. S., & Butler, A. C. (2023). Situating cost perceptions: How general cost and motivational regulation predict specific momentary cost dimensions. Educational Psychology, 43(8), 855-873. https://doi.org/10.1080/01443410.2023.2267806 Mason, C. M., & Griffin, M. A. (2002). Group task satisfaction: Applying the construct of job satisfaction to groups. Small Group Research, 33(3), 271-312. https://doi.org/10.1177/10496402033003001 Mason, C. M., & Griffin, M. A. (2005). Group task satisfaction: The group's shared attitude to its task and work environment. Group & Organization Management, 30(6), 625-652. https://doi.org/10.1177/1059601104269522 Romá, V. G., Hernández, A., Ferreres, A., Zurriaga, R., Yeves, J., & González-Navarro, P. (2023). Linking teacher-student relationship quality and student group performance: A mediation model. Current Psychology, 42(24), 21048-21057. https://doi.org/10.1007/s12144-022-03206-8 Rosales, J.-H., Jaime, K., Ramos, F., & Ramos, M. (2013). An emotional regulation model with memories for virtual agents. In Proceedings of the 2013 12th IEEE International Conference on Cognitive Informatics & Cognitive Computing (ICCI CC 2013) (pp. 260–267). IEEE. https://doi.org/10.1109/ICCI-CC.2013.6618881 Steuer, G., Engelschalk, T., Eckerlein, N., & Dresel, M. (2019). Assessment and relationships of conditional motivational regulation strategy knowledge as an aspect of undergraduates' self-regulated learning competencies. Zeitschrift für Pädagogische Psychologie, 33(2), 95-104. https://doi.org/10.1024/1010-0652/a000237 Sutton, A., Langenkamp, A. G., Muller, C., & Schiller, K. S. (2018). Who gets ahead and who falls behind during the transition to high school? Academic performance at the intersection of race/ethnicity and gender. Social Problems, 65, 154–173. http://dx.doi.org/10.1093/socpro/spx044 Ural, T. (2009). 
The effects of relationship quality on export performance: A classification of small and medium-sized Turkish exporting firms operating in single export-market ventures. European Journal of Marketing, 43(1-2), 139-168. Vilenskaya, G. A. (2020). Emotional regulation: Factors of development and forms of manifestation in behavior. Psikhologicheskii Zhurnal, 41(5), 63-76. https://doi.org/10.31857/S020595920011083-7 Wolters, C. A., Iaconelli, R., Peri, J., Hensley, L. C., & Kim, M. (2023). Improving self-regulated learning and academic engagement: Evaluating a college learning to learn course. Learning and Individual Differences, 103, 102282. https://doi.org/10.1016/j.lindif.2023.102282

$25.00 View

[SOLVED] MATH6017 Coursework 1

MATH6017 Coursework 1 Worth 20% Submission date: 28 March 2025, 4:00pm
Rules
• You must work on your own on this assignment with no help from others or GenAI.
• You must submit a single Jupyter Notebook file as a submission.
• Clearly indicate your name and student number in the code. Save the file as MATH6017 Assignment.
• Ensure that your code is clean, well-structured, and error-free. Non-functioning code will be penalized.
Asset Management Project You have been recently hired as a portfolio manager at XYZ Asset Management, a company that provides investment services for institutional and individual clients. Your primary responsibility is to construct and manage an optimal investment portfolio that maximizes returns while mitigating risk and ensuring compliance with the company's investment policies and client objectives. As a portfolio manager at XYZ Asset Management, your key duties involve managing a diversified investment portfolio by selecting and optimizing assets from various sectors. You are required to perform portfolio optimization based on different constraints and client preferences. You are given the historical prices of ten different assets obtained from Yahoo Finance, namely AAPL, MSFT, AMZN, TSLA, GOOG, META, NVDA, JPM, UNH and XOM, from January 1, 2022 to December 31, 2022, in stock prices 2022.csv.
Task 1
1. (a) Calculate the cumulative return for each asset for the entire year. Present your results in a line graph showing the growth of each asset over time. (b) Calculate the mean return and covariance of each asset.
2. (a) Construct an optimal portfolio that allows short selling using the minimum variance framework. Solve the optimization model using the CVXPY module. (b) Analyze the portfolio weights, expected return and risk of the optimal portfolio.
3. (a) Construct a mean-variance portfolio that does not allow short selling such that the portfolio achieves a target expected return of at least 10% per annum.
(b) Discuss the impact of no short-selling on the portfolio performance.
4. (a) Simulate 10,000 possible portfolios for the ten assets. (b) Compute and plot the efficient frontier for the portfolios. Identify the minimum-variance portfolio (lowest risk) and the tangency portfolio, and calculate the optimal return of the tangency portfolio.
Task 2 Prepare a professional report addressed to the company's investment committee providing a high-level overview of your findings and recommendations from Task 1.
Submission Instruction and Grading Submit only one Python file for the assignment. Task 2 should be included as text using "Markdown" in the same file.
Grading
1. Clarity of code — 15%
2. Accuracy and correctness in calculation — 30%
3. Interpretation and financial insight — 25%
4. Visualization and presentation — 15%
5. Report and code quality — 15%
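A minimal pandas sketch of Task 1(a)-(b). The price table here is a tiny made-up example, not the actual stock prices 2022 CSV, so the tickers and numbers are illustrative assumptions only:

```python
import pandas as pd

# Hypothetical daily closing prices for two assets (illustrative only)
prices = pd.DataFrame({
    'AAPL': [100.0, 110.0, 121.0],
    'MSFT': [200.0, 190.0, 209.0],
})

# Cumulative return: growth of each asset relative to the first day
cumulative = prices / prices.iloc[0] - 1

# Daily returns, their means, and the covariance matrix
returns = prices.pct_change().dropna()
mean_returns = returns.mean()
cov_matrix = returns.cov()

print(cumulative.iloc[-1])  # total return of each asset over the period
```

Calling `cumulative.plot()` on the real data would then produce the required line graph of growth over time.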


[SOLVED] MATH6017 Practical and Numerical Computation on Financial Portfolio Optimization Using Python

Practical and Numerical Computation on Financial Portfolio Optimization Using Python
1 Lesson Objectives
By the end of this lesson, students should be able to:
• Understand the fundamentals of financial portfolio optimization.
• Implement portfolio optimization techniques using Python.
• Apply numerical methods to compute the minimum variance portfolio (MVP), mean-variance optimization (MVO), and constrained portfolios.
• Use Python libraries such as NumPy, Pandas, SciPy, and CVXPY for optimization.
2 Introduction to Portfolio Optimization
2.1 What is Portfolio Optimization?
Portfolio optimization is the process of selecting the best portfolio (asset allocation) according to an objective, typically maximizing returns while minimizing risk. Investors aim to balance risk and return based on their preferences.
2.2 Key Concepts
• Expected Return (μ): The weighted average of asset returns, representing the expected profit from a portfolio.
• Risk (Variance & Standard Deviation): Measures portfolio volatility and uncertainty.
• Covariance & Correlation: Measures how assets move relative to each other.
• Sharpe Ratio: Return-to-risk ratio used for optimal portfolio selection.
• Efficient Frontier: The set of portfolios that provides the highest expected return for a given level of risk.
2.3 Types of Portfolio Optimization
1. Minimum Variance Portfolio (MVP): Minimizes portfolio risk by choosing weights that result in the lowest possible variance.
2. Mean-Variance Optimization (MVO): Maximizes return for a given risk level, following Markowitz's modern portfolio theory.
3. Constrained Portfolio Optimization: Introduces constraints such as no short-selling, maximum investment limits, or risk bounds.
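The equations that the Key Concepts definitions refer to were lost in extraction. The standard versions, writing the weight vector as w, the mean return vector as μ, the covariance matrix as Σ, and a risk-free rate r_f (this notation is assumed here), are:

```latex
\mu_p = \sum_i w_i \mu_i = w^\top \mu, \qquad
\sigma_p^2 = \sum_i \sum_j w_i w_j \sigma_{ij} = w^\top \Sigma w, \qquad
\rho_{ij} = \frac{\sigma_{ij}}{\sigma_i \sigma_j}, \qquad
\text{Sharpe ratio} = \frac{\mu_p - r_f}{\sigma_p}.
```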
3 Practical Example: Portfolio Optimization with Analysis
3.1 Problem Statement
Consider an investor who wants to optimize a portfolio composed of five technology stocks: Apple (AAPL), Google (GOOGL), Microsoft (MSFT), Amazon (AMZN), and Tesla (TSLA). The objective is to construct an efficient portfolio by minimizing risk while achieving a target return.
3.2 Step 1: Data Collection and Preprocessing
We first fetch historical stock prices and compute daily returns:

    import yfinance as yf
    import numpy as np
    import pandas as pd
    import cvxpy as cp

    stocks = ['AAPL', 'GOOGL', 'MSFT', 'AMZN', 'TSLA']
    data = yf.download(stocks, start='2020-01-01', end='2023-01-01')['Adj Close']
    returns = data.pct_change().dropna()
    mean_returns = returns.mean()
    cov_matrix = returns.cov()

3.3 Step 2: Minimum Variance Portfolio Optimization
Using quadratic programming, we find the portfolio that minimizes risk:

    def min_variance_portfolio(cov_matrix):
        num_assets = len(cov_matrix)
        w = cp.Variable(num_assets)
        objective = cp.Minimize(cp.quad_form(w, cov_matrix))
        constraints = [cp.sum(w) == 1, w >= 0]
        prob = cp.Problem(objective, constraints)
        prob.solve()
        return w.value

    mvp_weights = min_variance_portfolio(cov_matrix)
    print("Minimum Variance Portfolio Weights:", mvp_weights)

3.4 Step 3: Portfolio Performance Analysis
To evaluate the portfolio, we compute the expected return and risk:

    def portfolio_performance(weights, mean_returns, cov_matrix):
        port_return = np.dot(weights, mean_returns)
        port_volatility = np.sqrt(np.dot(weights.T, np.dot(cov_matrix, weights)))
        return port_return, port_volatility

    mvp_return, mvp_vol = portfolio_performance(mvp_weights, mean_returns, cov_matrix)
    print(f"MVP Expected Return: {mvp_return:.4f}, MVP Risk: {mvp_vol:.4f}")

3.5 Analysis and Interpretation
From the results:
• The Minimum Variance Portfolio (MVP) provides the lowest risk but may not offer the highest return.
• The Efficient Frontier shows a range of optimal portfolios balancing risk and return.
• Investors can select portfolios based on their risk tolerance.
4 Working with a Fixed Dataset
Today, we will work with a fixed dataset and randomly generated data due to the rate limit on Yahoo Finance. Consider that you are working for a firm that manages 5 assets given in the dataset asset_returns.xlsx.
5 Conclusion
• The efficient frontier provides valuable insights into optimal asset allocation.
• Portfolio optimization techniques can be extended to factor investing, risk parity, and machine learning-based asset selection.
• Future work could involve dynamic rebalancing and robust optimization models to handle market changes.
1. Importing Libraries
The following Python libraries are imported:
• NumPy: For numerical operations such as matrix multiplication.
• Pandas: For handling datasets.
• Matplotlib: For plotting the efficient frontier.
• SciPy: For portfolio optimization using the minimize function.
Listing 1: Importing Required Libraries

    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt
    from scipy.optimize import minimize

2. Loading Dataset
The asset returns dataset is loaded from an Excel file:
Listing 2: Loading Asset Returns Dataset

    file_path = "/mnt/data/asset_returns.xlsx"
    returns_df = pd.read_excel(file_path)

3. Calculating Key Metrics
We calculate the mean returns and the covariance matrix:

    mean_returns = returns_df.mean()
    cov_matrix = returns_df.cov()
    num_assets = len(mean_returns)

4. Defining Portfolio Variance Function

    def portfolio_variance(weights, cov_matrix):
        return np.dot(weights.T, np.dot(cov_matrix, weights))

5. Defining Constraints and Bounds
The optimization problem has the following constraints:
• The sum of portfolio weights must equal 1.
• Each weight must be between 0 and 1 (no short selling).

    constraints = ({'type': 'eq', 'fun': lambda weights: np.sum(weights) - 1})
    bounds = tuple((0, 1) for asset in range(num_assets))
    init_guess = np.ones(num_assets) / num_assets

6. Portfolio Optimization
The optimization problem minimizes the portfolio variance, subject to the constraints above, using the Sequential Least Squares Programming (SLSQP) method:

    opt_results = minimize(portfolio_variance, init_guess, args=(cov_matrix,),
                           method='SLSQP', bounds=bounds, constraints=constraints)

7. Extracting Optimization Results
The optimal weights, return, and risk of the minimum variance portfolio are extracted:

    min_var_weights = opt_results.x
    min_var_return = np.dot(min_var_weights, mean_returns)
    min_var_risk = np.sqrt(opt_results.fun)

8. Efficient Frontier Simulation
We generate 5000 random portfolios to plot the efficient frontier. For each portfolio we record its return, its risk (standard deviation), and its return-to-risk (Sharpe) ratio:

    num_portfolios = 5000
    results = np.zeros((3, num_portfolios))
    for i in range(num_portfolios):
        weights = np.random.random(num_assets)
        weights /= np.sum(weights)
        port_return = np.dot(weights, mean_returns)
        port_risk = np.sqrt(np.dot(weights.T, np.dot(cov_matrix, weights)))
        results[0, i] = port_risk
        results[1, i] = port_return
        results[2, i] = port_return / port_risk

9. Plotting the Efficient Frontier
We visualize the efficient frontier along with the minimum variance portfolio.

    plt.figure(figsize=(10, 6))
    plt.scatter(results[0, :], results[1, :], c=results[2, :], cmap='viridis', alpha=0.7)
    plt.colorbar(label="Sharpe Ratio")
    plt.scatter(min_var_risk, min_var_return, color='red', marker='*', s=200,
                label="Minimum Variance Portfolio")
    plt.title("Efficient Frontier")
    plt.xlabel("Risk (Standard Deviation)")
    plt.ylabel("Return")
    plt.legend()
    plt.grid()
    plt.show()

10. Displaying Results
Finally, the optimal portfolio details are displayed:
Listing 3: Displaying Results

    print("Minimum Variance Portfolio:")
    print(f"Risk: {min_var_risk:.4f}")
    print(f"Return: {min_var_return:.4f}")
    print(f"Weights: {min_var_weights}")

Conclusion
This approach identifies the optimal portfolio that minimizes risk while maintaining the desired level of return. The efficient frontier represents the set of portfolios that offer the best possible return for a given level of risk.
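The SciPy workflow above depends on asset_returns.xlsx, which is not reproduced here. The following self-contained sketch runs the same SLSQP minimum-variance optimization end to end on synthetic returns; the random data and seed are assumptions purely for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic daily returns for 5 hypothetical assets (stand-in for asset_returns.xlsx)
rng = np.random.default_rng(42)
returns = rng.normal(loc=0.0005, scale=0.02, size=(250, 5))
mean_returns = returns.mean(axis=0)
cov_matrix = np.cov(returns, rowvar=False)
num_assets = len(mean_returns)

def portfolio_variance(weights, cov_matrix):
    # w' Sigma w
    return weights @ cov_matrix @ weights

constraints = ({'type': 'eq', 'fun': lambda w: np.sum(w) - 1},)
bounds = tuple((0, 1) for _ in range(num_assets))   # no short selling
init_guess = np.ones(num_assets) / num_assets       # equal-weight starting point

opt = minimize(portfolio_variance, init_guess, args=(cov_matrix,),
               method='SLSQP', bounds=bounds, constraints=constraints)

min_var_weights = opt.x
min_var_risk = np.sqrt(opt.fun)
print("weights:", np.round(min_var_weights, 3), "risk:", round(min_var_risk, 4))
```

Because the problem is a convex quadratic program, the SLSQP optimum should never be worse than the equal-weight starting portfolio, which is a useful sanity check on real data too.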


[SOLVED] BUSI1067 Computers in Business 2024-2025

Module Code BUSI1067 Module Title Computers in Business Academic Year 2024-2025 Semester Autumn Credits 10 Level of Study Level 1
Summary of Content This module will introduce the use of computers and IT in business today, and in particular spreadsheet modeling, via a lab-based assessment and a case study group report and presentation.
Education Aims
• To provide participants with a solid understanding of the application and impact of computers and the Internet, especially in small businesses.
• To ensure a hands-on competence in the use of spreadsheets.
Learning Objectives and Outcomes
Knowledge and understanding This module develops a knowledge and understanding of:
• The development, management and exploitation of information systems and their impact upon organizations.
• The comprehensive use of relevant communication and information technologies for application in business and management.
• The importance of sustainability issues, including an understanding of the challenges and opportunities arising from the activities of people and organisations on the economic, social and environmental conditions of the future.
Intellectual skills This module develops:
• The ability to create, evaluate and access a range of options, together with the capacity to apply ideas and knowledge to a range of business and other situations.
Professional practical skills This module develops:
• The effective use of communication and information technology (CIT) skills for business applications.
• The ability to conduct research into business and management issues, either individually or as part of a team, including a familiarity with a range of business data, research resources and appropriate methodologies.
Transferable (key) skills This module develops:
• Effective oral and written communication skills in a range of traditional and electronic media.
Assessment Details
GROUP PROJECT Contribution to the Module: 50% Deadline for submission: 28 November 2024, Thursday, by 3.00 pm Word count: 2500-3000 words (excluding cover page, references)
Task:
1. Form a group of 6 students and complete the "Group Members Sign-up Sheet" on Moodle.
2. Introduce an existing information system/cloud platform capable of / designed to revolutionise (change) business processes with a well-defined digital transformation initiative (see the example below).
3. Based on the identified information system, discuss the following:
• What is the digital transformation initiative raised by this information system? (related to lecture 1)
• How does this information system support the process of digital transformation? (related to lecture 2)
• What values or opportunities are created by this information system in the business context? (related to lecture 3)
• What are the challenges when a company implements this information system? (related to lecture 4)
• How can this information system be related to sustainable uses or practices? (related to lecture 5)
4. Each of these discussions should not exceed 400 words and be supported by 3-5 references.
5. Complete the peer review form (each student) and write a reflection (group) not exceeding 300 words, to reflect on the work progress, focusing on
• WHAT (i.e. the tasks)
• SO, WHAT (i.e. the effect made, challenges faced, action taken)
• THEN, WHAT (i.e. the current outcomes, future improvement)
6. Prepare a 15-minute presentation (10-minute talking and 5-minute Q&A). Each group is allowed to decide on each member's role and choose a suitable presentation timeslot (from the Presentation Sign-up Sheet on Moodle).
7. Complete the written report in the following format:
• Font: Verdana
• Font size: 11 points
• Spacing: 1.5 spaced
• Referencing: Harvard citation style
• File naming convention: CIB_group number (i.e., CIB_Group1)
8. Schedules for the tasks:
• Semester week 6, 25 Oct 2024, Friday, by 3 pm - Complete the group members sign-up sheet on Moodle (after this, students who have not signed up will be assigned to a group with fewer than six members).
• Semester week 6, 25 Oct 2024, Friday, by 3 pm - Complete the presentation timeslot sign-up sheet on Moodle (please choose a timeslot that all members can attend).
• Semester week 12 - Group presentation week
Please take note of the following:
• Technical difficulties encountered by students very close to the deadline time will not be accepted as a reason for approving ECs for lateness.
• Only coursework submitted on Moodle will be marked. One submission ONLY per group.
• A deduction of 5% off the mark achieved shall be imposed upon the expiry of the deadline and an additional 5% per subsequent 24-hour period until the mark reaches zero. Except in exceptional circumstances, late submission penalties will apply unless a claim for extenuating circumstances is made before the deadline.
• Each group member will receive the same mark for this coursework and hence is expected to contribute a good effort to his/her group assignment. Students are encouraged to resolve any problems within their group before approaching the module convenor. When an unequal mark allocation is requested, this must be justified by the students and supported by appropriate evidence. No case for an unequal allocation of marks from any group member can be considered once coursework has been marked.
INDIVIDUAL ASSIGNMENT Contribution to the Module: 50% Deadline for submission: 12 December 2024 (Thursday), by 3.00 pm
Tasks: Create an Excel workbook with at least 2 worksheets (must include an Info worksheet and a Data Model worksheet)
Requirements:
1. Watch all self-paced video tutorials by 1 November 2024, 5 pm, Friday (semester week 7).
2. Complete all in-video quizzes with correct answers (multiple attempts are allowed) by 1 November 2024, 5 pm, Friday (semester week 7).
3. Create an Info worksheet to display your full name, student ID, and the number of the chosen scenario.
4. Write a "learning log" (not more than ___ words) to explain the uses/effects of Excel functions and features and present it on the Info worksheet.
5. Create a Model worksheet, choose ONE of the scenarios below, and build a data model with the skills you learn from the lab lessons or video tutorials.
1) A fashion store needs to manage its inventory levels to avoid stockouts and overstock situations. An Excel data model is built to collect the projected numbers for each category of products given by the business owner, then calculate the reorder amounts and use conditional formatting to highlight the items that need to be reordered.
2) An event organising company needs to prepare its annual budget and track expenses. An Excel data model is built for managers to insert the expected expenses for different departments and the finance officers to insert the actual expenses, so that the comparison can be made between the actual expenses and the budgeted amounts, and a dashboard can be made to visualise budget variances and highlight areas of concern.
3) A manufacturing company wants to monitor and reduce its energy consumption. An Excel data model is built to collect data on energy usage across different devices or machines, then identify peak usage times and areas with the highest energy consumption and use what-if analysis to propose various strategies for energy saving.
4) A shoe company plans to launch a new product and wants to analyse the potential market impact. An Excel data model is built to collect data on market size, competitor products, and pricing strategies, then use scenario analysis to build a model to forecast sales for the new product.
5) A wooden furniture company wants to analyse the profitability of its different product lines. An Excel data model is built to collect data on product sales, costs, and margins, then use Pivot tables and Pivot charts to analyse profitability trends and make recommendations for improving the margins.
6. Design the structure and the appearance of both worksheets.
7. Submit the Excel file in .xlsx format (no other format is allowed) with the student ID number as the file name via the submission link on Moodle.
Please take note of the following:
• Technical difficulties encountered by students very close to the deadline time will not be accepted as a reason for approving ECs for lateness.
• Only coursework submitted on Moodle will be marked.
• A deduction of 5% off the mark achieved shall be imposed upon the expiry of the deadline and an additional 5% per subsequent 24-hour period until the mark reaches zero. Except in exceptional circumstances, late submission penalties will apply unless a claim for extenuating circumstances is made before the deadline.


[SOLVED] Principles of Logistics Management Assessment 3 Statistics

Module Title Principles of Logistics Management Assignment Mode Individual assignment Word Count Limit 800 words (+/- 10%) Citation Format APA Marks 30 marks
Assignment Brief Write a reflective report to critically analyze your experiences of the recent group project. You will evaluate the process, your individual contributions within the group, and your personal growth as a group member. In addition, your reflection should discuss how you applied the theories/principles learned in this module, and the skills developed during the project, with proposed recommendations.
1. Introduction (100) Briefly describe the group project, including its aims, objectives, and key deliverables. Outline the purpose of your reflective report and what will be covered.
2. Group Dynamics and Individual Contribution (350) Reflect on how your group worked together as a team. Analyze the group's dynamics, including communication styles, decision-making processes, and conflict resolution strategies. Identify factors influencing group performance. Describe your role within the group and your specific responsibilities. Evaluate your individual performance in terms of meeting deadlines, completing tasks, and contributing to group discussions. Reflect on your strengths and weaknesses as a group member.
3. Personal Reflections and Insights (350) Discuss how you applied the theories and principles learned in this module in your group project. Identify any lessons learned from the project experience (e.g., self-awareness, professional development, adaptability, communication). Reflect on your personal growth throughout the project and propose recommendations for improving your future approach towards group work or professional situations.
Instructions on Submission
1. Referencing
• All statements of fact or other sources quoted in the essay, including any diagrams, must have in-text references, with a full reference list provided at the end of the assignment, according to the APA 7 system of referencing.
• You are required to fully reference a MINIMUM of 5 references for the individual reflection submission (e.g., from books; journal articles from the full-text databases; current affairs magazines; newspapers, etc.). The use of the WIKIPEDIA online encyclopedia is NOT allowed.
2. Formatting
• Write your name, ID number, module title and word count clearly on the cover page. Your assignment should be A4, word-processed, with a spacing of 1.5 and a font size of 12 Arial.
• Table of contents with page numbering.
• The word count (+/- 10%) excludes the cover page, table of contents, and any tables, illustrations or referencing.
3. Policies
• The penalties for plagiarism and collusion are governed by the Academic Policy of KHEA. The detailed policy information can be found in the Student Handbook.
• The assignment must be submitted online (LMS) on the specific due date. Assignments must be submitted via Turnitin. Any late submission will have marks deducted in accordance with KHEA's late submission policy.


[SOLVED] AI6131 Project Description

AI6131 Project Description
Overview As part of the 3D Deep Learning course, students are required to complete a project that allows them to explore a topic of their choice related to 3D deep learning. This project is designed to encourage creativity, independent research, and hands-on implementation of 3D deep learning techniques. Students may select any topic within the broad field of 3D deep learning and are encouraged to propose novel ideas, apply existing techniques to new problems, or enhance prior work. Assessment will be based on their ability to identify and address challenges, present their findings effectively, and document their results in a structured manner, reflecting the review process of a top-tier conference submission but with less stringent criteria.
Assessment Components The project consists of three key deliverables:
1. Project Proposal (Due: 19 March)
• Length: 1-page document
• The proposal should outline: 1) The problem being addressed. 2) The motivation for the project. 3) Key challenges involved. 4) The approach and plan for implementation.
• The instructor will review proposals and provide feedback.
2. Project Presentation (16 & 17 April)
• Duration: 10-minute presentation per student.
• The presentation should cover: 1) The problem and motivation. 2) The methodology used. 3) Preliminary findings and progress. 4) Any challenges encountered and potential solutions.
• This session provides an opportunity for direct interaction with the instructor, allowing for feedback and discussion.
3. Final Report (Due: 30 April)
• Length: 4-6 pages, structured like a short research paper.
• Suggested format:
• Title & Abstract: A concise summary of the work
• Introduction: Problem statement, motivation, and background
• Related Work: Overview of relevant prior research
• Methodology: Explanation of the approach and techniques used
• Results: Description of experiments, evaluation metrics, and findings
• Discussions: Interpretation of results and possible improvements
• Conclusion & Future Work: Summary and potential extensions
• As with submitting a paper to a top-tier conference, an optional video demonstration is encouraged.
Additional Notes
• Collaboration is not allowed; each student must complete their own project.
• Students may use open-source code and publicly available datasets but must clearly acknowledge them in the report. Plagiarism in any form will not be tolerated.
• Students may use Large Language Models (LLMs) to refine the writing in their report. However, they must provide their own insights and findings. Directly generating the report using LLMs is strictly prohibited.
• Students are encouraged to seek guidance from the instructor during office hours if needed.
This project is an opportunity for students to deepen their understanding of 3D deep learning and gain hands-on experience in conducting research and experimentation. I look forward to seeing your ideas and results!


[SOLVED] C31DE Derivatives

C31DE Derivatives Group Project: Option Portfolio Position Analysis (30% of total course mark) Project Pre-requisites Prior to starting this project you should: 1. Watch these videos: i) Edinburgh Coursework Video 1 - The Greeks ii) Edinburgh Coursework Video 2_CW Instructions explained and a demo In conjunction with these slides: The Greeks - Slides from Hull.pdf {Note: at minute 28.04 into Video 1 I started talking about graphs but the screen did not share them. They are the graphs in slides 35 and 36 of the lecture slides on options. So please also have them opened when watching that part of the video} 2. Read the following chapters of Natenberg’s book (also in the list of reading for topics): i) Chapter 6.pdf ii) Chapter 8.pdf iii) Chapter 17.pdf Project Final Submission 1. Deadline is 11am Friday of Week 9 (14 March 2025) 2. As a group you need to compile a Word Report and an Excel sheet. Appoint one member of the group to make the final submission on behalf of the group. Only THIS PERSON should submit the following two files before the deadline: i) A Word (.doc or .docx) file of the project report. This is a 2000-word (give or take 100) report of your group’s response to the main requirements. Submit this file to the assignment dedicated to Word files. The project report should have a cover page that lists the following: a) identifies the assignment as C31DE Coursework Project Report, b) your coursework group number (Group X#), c) the student ID of all group members and d) the word count of the report excluding figures and tables (no external references needed to be listed unless you use them). Submit this file using the following file naming convention: “C31DE_CW_Group #_Word.doc” replacing # with your group number. ii) An Excel (.xls or .xlsx) file of the project calculations. This file is your group’s calculations that generated the graphs. Submit this file to the assignment dedicated to Excel files. 
This file verifies your calculations and is assessed in the final criteria of the marking rubric, but do not refer to this file in your Word report. Your Word report should stand alone and not require the reader to refer to the Excel file to follow your textual arguments or narrative presentation of the analysis. It should contain a copy of all the graphs and their interpretations and everything you want to be assessed on. Use the following convention to name this file: "C31DE_CW_Group #_Excel.xls" replacing # with your group number. Some Tips on Working in Groups Here is a general guide to effective group work that you might find useful: guide-to-effective-group-work.doc Specifically: 1. You are expected to actively work within your group BUT NOT across groups, and you are not allowed to interact with anyone outside your group for this coursework. 2. As a group, you are expected to meet (e.g., face-to-face or virtually through a platform of your choice, such as Teams), assign work as early as possible across members of the group so that all group members are clear about their responsibilities, and meet regularly according to a pre-planned schedule to gauge progress and discuss issues. 3. You are expected to work as a team, be professional in your interactions within the team, attend all scheduled group meetings, communicate effectively and regularly, and be clear about your responsibility within the team and the responsibility of other team members. Keep your team informed of all situations, including Mitigating Circumstances if any. If you expect a delay in attendance of group meetings or are unable to progress, then communicate this status to your team without delay. However, if the hindrances you face are due to mitigating circumstances then do not reveal any personal information and just inform the team that you will be "submitting a Mitigating Circumstances form" to explain your delay or lack of progress, and let the team know in good time to decide on how to rearrange the work and the responsibilities of each member. You need to do this as soon as you expect any Mitigating Circumstances that may affect the group work. 4. The person you assign to submit your group coursework files should be chosen by the group at the outset and is designated as the 'Group Representative'. He or she is NOT expected to take on more work than the other members of the group, except for helping ensure group communication (e.g., emails) reaches every member of the group, and for submitting the final documents required for the assignment(s). ONLY the Group Rep should submit the assignments, but other members should check that this person has submitted before the deadline. In other words, submission before the deadline is the responsibility of all members of the group, but the submission itself should be done by the Group Rep only. If the Group Rep does not submit ten minutes or so prior to the deadline, other members should take quick action and make the submission before the deadline to avoid the late submission penalty. 5. The hope is that you practice and hone your teamwork skills and contribute effectively, equally, and efficiently as an individual within a team. All members of a group will receive the same group mark if they work effectively with each other without issues. However, should you face continuing difficulties in working with other members of your group (e.g., lack of contribution by others, persistent 'free rider' problem, mitigating circumstances, etc.) then you have the option to fill in and submit the 'Peer Assessment Form' in which you outline and document these difficulties.
Note that these difficulties are different from mere 'clashes of personalities', which should be minimised or eliminated by having been provided from the outset the flexibility to choose your team members and the group that you sign up to. This form is available through the link in the instructions above. The described circumstances will be assessed together with information gathered from your other team members and considered in giving you a 'fair' mark taking into account the circumstances and shortfall in teamworking skills. This is the exception, and individuals are expected to conduct themselves professionally as equal contributing members of a group. Project Requirements This project requires a fair amount of familiarity with Excel functions and macros, or any similar spreadsheet or software package. A table below entitled 'Option Positions' describes the contents of a number of portfolios of options (spreads) and their underlying stock. You are required to fully analyse the position that relates to your group number along the lines of chapter 17 of Natenberg. Approach First read chapters 6, 8 and 17 of Natenberg and 'The Greeks - Slides' (or watch the video). Second, refer to the attached table below, 'Option Positions', for the required data and the portfolio position that your group should analyse. Notes Note that the implied volatility of each option is given in the table below, from which you can calculate the actual market prices of the options (by using the implied volatility as the input for sigma in the Black-Scholes). This table replaces Figure 17-5 of Natenberg's handout '17 Position Analysis' for the required data but should be compared to it. You will not need to extract any data from Figure 17-5 for use in analysing your own complex position. Project The 'analysis' should be in the form of a full discussion guided by similar discussions presented in the handouts, particularly, but not exclusively, those in 'Chapter 17: Position Analysis'.
The following calculations should be included and considered as the minimum set of tools used to support the arguments and discussions of the analysis. They constitute part of the requirement but should be viewed strictly as tools, and non-exhaustive ones at that, for supporting your discussion and analyses. Calculation of these 'tools' attracts a small proportion of the marks relative to the discussion requirement.

1. Calculate the theoretical edge of the position (refer to the bottom of page 353 of Natenberg, which is the first page of Chapter 17).

2. Generate graphs of the theoretical edge of the position and its sensitivities. For this you will need to make extensive and accurate use of Excel to calculate the Greeks for your position, as Natenberg does in chapter 17. Please be aware of, and read below about, Natenberg's convention in expressing the Greeks (multiplying by 100, dividing by 365, etc.). The equations you require for calculating the Greeks are all in the book by Hull and are listed in the table of the Greeks in the slides.

3. Use the graphs to analyse the sensitivity of the theoretical edge of the position to percentage changes in the underlying determinants such as price, volatility, time and interest rates. You should also analyse how these sensitivity measures (Greeks) change with the underlying asset price or with other sensitivity measures. (Note that Natenberg graphs the 'theoretical edge' rather than the profit or loss, which, strictly speaking, constitutes the intrinsic value at expiration or on immediate exercise only. He still loosely refers to the 'theoretical edge' as the 'theoretical profit and loss'. You, however, should be clear about this distinction, and should use the theoretical edge, not the profit/loss at expiry; i.e., do not graph profit and loss as defined by the intrinsic values of the options, which are values at expiry only.)

4.
Fully discuss the aim(s) or objective(s) of the strategy, the risks faced by holders of the strategy, and the considerations in the mind of the trader as they hold (or monitor) the position until maturity. This point and the following two are the primary requirements of the coursework.

5. Provide a full discussion of the hedging aspects of this position that, at the least, makes use of answers to the following questions. You should include a discussion of any trade-offs the trader has to balance.
a) What position in the underlying would make the overall portfolio delta-neutral?
b) A traded option is available with ∆ = 55, Γ = 1.4 and Λ = 0.75. (Note that some Greeks are multiplied by 100 and others are not. This is simply a convention; be consistent in your calculations.) What positions should the trader take in this traded option and the underlying asset to make the portfolio both delta- and gamma-neutral?
c) What positions should the trader take to make the portfolio both delta- and vega-neutral?

6. Discuss any other aspect you find important to the institution from holding these portfolios (e.g., the effect of transaction costs, or the possible impact on other liabilities).

Things to watch out for

1. Natenberg uses futures as the underlying assets for all options. He prices the futures using the cost-of-carry model: F = S·e^(rt).

Example: If F = 100 in the primary position, r = 6% and t = 6 weeks, then S = F·e^(−rt) = 100·e^(−0.06 × 6/52) = 99.31008, and this is taken as the price of the underlying asset of the option. Then using the Black-Scholes formula with S = 99.31, r = 6%, t = 6/52, E = 100 for the March 100 call options, and σ = 20% gives C = 2.69 as the theoretical premium of the call option. This is how the numbers reported in Table 8-20 are calculated (double-check it).

2. Natenberg uses 100 as the base number for most Greeks.
He uses the convention of multiplying option deltas and gammas by 100, dividing option vegas by 100, and dividing theta by 365 to express it as the rate of time decay per day. For example, given his convention, a delta of 100 is equivalent to holding one share, not 100 shares. This can be confusing when calculating the number of options or shares you need to buy or sell in hedging. Hopefully, this warning about the 'scaling' convention eliminates this possible source of confusion.

3. Natenberg mistakenly assumes that the downside limit of profit or loss from holding options is infinite. For example, he assumes that the maximum possible profit (loss) one can make from buying (selling) a put option is infinite, while you obviously know that it is limited to the present value of the exercise price plus or minus the option premium. I presume this misconception is due to the fact that traders often hold positions in the underlying assets alongside their derivative portfolios.
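To make the scaling convention and the hedging arithmetic of question 5 concrete, here is a hedged sketch in standard-library Python. The Greek formulas are the standard Black-Scholes ones from Hull; the example option is the March 100 call priced above; and the portfolio Greeks used in the hedging step are hypothetical placeholder values, not data from the 'Option Positions' table:

```python
from math import erf, exp, log, pi, sqrt

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def norm_pdf(x):
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def call_greeks(S, K, r, t, sigma):
    """Per-share Black-Scholes Greeks for a European call (Hull's formulas)."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    delta = norm_cdf(d1)
    gamma = norm_pdf(d1) / (S * sigma * sqrt(t))
    vega = S * norm_pdf(d1) * sqrt(t)
    theta = (-S * norm_pdf(d1) * sigma / (2 * sqrt(t))
             - r * K * exp(-r * t) * norm_cdf(d2))
    return delta, gamma, vega, theta

# Natenberg's scaling convention applied to the March 100 call example:
delta, gamma, vega, theta = call_greeks(99.31, 100, 0.06, 6 / 52, 0.20)
delta_n = delta * 100   # a delta of ~51, i.e. about half a share
gamma_n = gamma * 100
vega_n = vega / 100     # sensitivity per one percentage point of volatility
theta_n = theta / 365   # time decay per calendar day

# Question 5(b): delta- and gamma-neutral hedge using the traded option
# (Delta = 55, Gamma = 1.4, already quoted in the x100 convention).
# The portfolio Greeks below are HYPOTHETICAL placeholders -- substitute
# the values you compute for your own position.
port_delta, port_gamma = 0.0, -280.0
n_options = -port_gamma / 1.4                    # options to buy (+) / sell (-)
n_shares = -(port_delta + n_options * 55) / 100  # a delta of 100 = one share
print(n_options, n_shares)
```

The sign logic follows the convention stated above: zeroing gamma with the traded option re-introduces delta, and because one share carries a delta of 100, dividing the residual delta by 100 converts it into a share position. With the placeholder numbers here, the trader would buy 200 of the traded options and short 110 shares; the vega-neutral case of 5(c) works the same way with Λ = 0.75 in place of Γ = 1.4.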
