Assignment Chef


Assignment catalog

33,401 assignments available

[SOLVED] A1 Customer Problem Identification

A1. Customer Problem Identification - Individual Report [CLO1, CLO2, CLO5]

Length: Max 900 words

Description
This assessment provides the opportunity to identify customer problems from product review data and generate new product or service ideas. After customers purchase and use products or services, they share their experience on online product review platforms. Many companies try to identify customer problems and their unmet needs from large-scale product review data by using natural language processing techniques such as topic modelling and sentiment analysis. While recent advancements in AI methods allow us to categorize text data automatically through topic modelling and labelling, these AI methods are not perfect. Thus, your task is to identify customer problems using both an AI machine and manual labelling by humans to generate new product or service ideas.

Details
Task 1: Product Review Categorization
1.1. By AI: The file A1 Intent.csv contains "recommended intents generated by IBM AI" (see A1 Data Description of AI Intent.docx). Note that we gave 1000 sentences to IBM AI, but some irrelevant sentences were dropped by IBM AI.
(a) What is the number of intents generated by the AI machine?
(b) What is the number of review sentences selected by the AI machine for each intent and across all the intents?
1.2. By human: To address the ambiguity in some intents generated by IBM AI, and the excessively large number of intents relative to the limited review sentences, carry out further human categorization. In other words, you can change some intents to different categories. Candidate groups are product (e.g., skincare), product attributes (e.g., longevity), different stages of the customer journey (e.g., online shopping, browsing, delivery), and so on.
(c) How many intents (by AI) did you change to different categories?
(d) What is the number of final categories labelled by a human (you)? Ensure that it does not exceed 30 categories.
(e) During human categorization, why did you change the initial categories the AI machine had generated? In other words, what kind of categories did you try to make, and why?

Task 2: Importance-Satisfaction Plot
Using the final categories generated in Task 1 and the Python code covered in the lecture and tutorial, count the intent frequencies, measure the intent sentiment, and make an Importance-Satisfaction plot. Then interpret the plot by dividing it into four quadrants: Right-Bottom, Left-Bottom, Right-Top, and Left-Top.
(a) What does each quadrant mean?
(b) What categories are located in each quadrant?

Task 3: Customer Problem Identification & New Product or Service Idea Generation
Based on the Importance-Satisfaction plot, make recommendations for the company.
(a) What is the company doing relatively well?
(b) What are the primary customer problems?
(c) What does the company need to improve with high and low priority?
(d) To address the identified customer problems, suggest new product or service ideas.

When completing the tasks above, please apply appropriate data analytics practices and integrate key concepts introduced in class. Ensure that your discussion is logical, clearly structured, and professionally presented. Your report should not exceed the word limit, excluding the title page and relevant images, tables or charts. The title page (1 page) includes (1) the title of your report, (2) the word count, (3) the course name, tutorial session and group, and tutor's name, and (4) your first and last name & zID.

Submission instructions
A. Submit your report to Turnitin via Moodle. The .doc file contains your report. Filename: "Tutorial session_Group_your first and last name & zID_A1.doc" (e.g., T9 1 Junbum Kwon_zXXXXX_A1.doc).
B. Submit the other supporting files (data and code) to the Moodle submission folder.
1) The .xlsx file contains the data, including the human categorization columns which you added.
2) The .ipynb file contains all relevant code used to produce the results in your report.
● For each missing file among (1) to (2) above: -1 mark.

Marking Criteria
Your assignment will be marked based on the following marking criteria:
1. Analysis: quality of analysis (categorization and plotting)
2. Interpretation & Recommendations: quality of interpretation and new product ideas
3. Written Presentation: quality of the written report
For further information, see the marking rubric below.
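The counting-and-plotting step behind Task 2 can be prototyped in a few lines of plain Python. Everything below (the category names, sentiment scores, and quadrant cut-offs) is an invented illustration, not data from A1 Intent.csv:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical (category, sentiment) pairs, one per review sentence.
# Sentiment is assumed to lie in [-1, 1], as a sentiment analyzer would output.
labelled = [
    ("delivery", -0.6), ("delivery", -0.4), ("delivery", -0.5),
    ("skincare", 0.7), ("skincare", 0.5),
    ("longevity", 0.1), ("longevity", -0.2),
]

by_cat = defaultdict(list)
for cat, s in labelled:
    by_cat[cat].append(s)

# Importance = how often a category is mentioned; Satisfaction = mean sentiment.
freq = {c: len(v) for c, v in by_cat.items()}
sat = {c: mean(v) for c, v in by_cat.items()}

# Split the plot into quadrants around the median frequency and zero sentiment.
freq_cut = sorted(freq.values())[len(freq) // 2]

def quadrant(cat):
    right = freq[cat] >= freq_cut        # high importance
    top = sat[cat] >= 0.0                # high satisfaction
    return ("Right" if right else "Left") + "-" + ("Top" if top else "Bottom")

for c in sorted(by_cat):
    print(c, freq[c], round(sat[c], 2), quadrant(c))
```

A frequently mentioned category with negative mean sentiment (here, "delivery") lands in the Right-Bottom quadrant, which is typically where the primary customer problems sit.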

$25.00

[SOLVED] MSc Renewable Energy Systems Integration RESI - 2025 Assignment-2

MSc Renewable Energy Systems Integration (RESI) - 2025 Assignment-2

Instructions
Marks: 25% of the total mark of the module.
Due date and time: Thursday 10th April 2025 at 2 pm.
Each student is required to submit an individual and formal report. The problem has no unique approach/solution, and the methods/solutions are therefore expected to vary from one student to another. Students may make the best possible assumptions if any extra information is required; however, these should be justified in a Micro Grid and distributed generation integration context, giving relevant justifications and appropriate references.
Submission will be via Canvas; please familiarise yourself with Canvas before submissions are due. Files must be smaller than 20 MB to be submitted to Canvas. The 1000-word-limit report must not be more than 10 pages, excluding the cover page and Appendices. The minimum font size of the body of the report is 11; the font size of figure and table captions must be 10. The file type of the report must be PDF. Please use IEEE referencing style.
Your assignment submission report must follow the task numbering format of this assignment; marks will be given only if the report is prepared following the task numbers given in the assignment. Software-based calculations can only be used to verify the accuracy of hand calculations. The body of the report should present sample calculations, and any repetitive calculations can be placed in Appendices.
Late submissions will be penalised by deducting 5% of marks per day late. Assignments will not be accepted more than 20 working days after the submission deadline.

Generative AI
You must not use the output of Generative AI (i.e., the content it creates) in this assessment. It is a breach of good academic practice if you submit work generated by Generative AI tools as your own, or incorporate it into your own work in this assignment.
If concerns are raised about your work, you may need to participate in a viva (oral examination) of your work.

Assignment details
Figure 1 shows a schematic diagram of three wind farms in three Micro Grids (Micro Grid-A, Micro Grid-B, and Micro Grid-C) that are planned to integrate with a utility power grid via power transformers (T1, T2, T3 and T4). The wind farms in Micro Grids A and B can install up to twelve and eight fixed-speed wind-turbine induction generators respectively. The wind farm in Micro Grid C has no restriction on the number of fixed-speed wind-turbine generators it may install; however, the generated power should be within the safe and secure operating limits of the relevant network assets. All wind turbine generators are operated at 50 Hz and at 690 V. The local loads of Micro Grid-A and Micro Grid-B are shown in Figure 1 as Local Load-A and Local Load-B. Table 1 gives the parameters of the wind turbine generators, referred to the stator in Ω/phase, for each wind farm site.

Table 1
Wind Farm number:    1          2          3
…ce (Ω/phase):       3.08835    3.96600    4.14275
Slip:                -0.01523   -0.02274   -0.02558

Feeder size identifier   R (Ω/km)   X (Ω/km)   Capacity (kVA)   Price (£/MVA/km)
S 2                      0.19       0.24       120              325,000
S 4                      0.06       0.09       200              325,000
S

Considering that you are the Design Engineer in this Micro Grid project, proposing an effective duration to operate the project (justified with a reference), and applying a convergence error tolerance of 10^-5 in your iterative calculations:

Part (1)
(i) Calculate the number of wind turbine generators proposed for installation at each wind farm site in Micro Grid-A, Micro Grid-B, and Micro Grid-C.
(ii) Calculate the power generated and consumed by the wind turbine generators in the Micro Grids at full-load and no-load operation.
(iii) Calculate and list the appropriate feeder sizes to suit the system conditions given in the Micro Grid project.
(iv) Calculate the voltage at each bus in the Micro Grid project at full-load and no-load operating conditions.
(v) Determine the installation locations (buses) of capacitor banks and calculate the required capacitance values at the respective locations to reduce the no-load current of all induction generators to zero in the Micro Grid project.
(vi) Calculate the voltage at the buses in the Micro Grid project with the capacitor banks calculated in (v).
(vii) Calculate the full-load active and reactive power losses of all branches in the Micro Grid system with and without the capacitor banks calculated in (v). [60 Marks]

Part (2)
Calculate the Life Cycle Cost (LCC) of the Micro Grid project (the system connecting up to the utility grid) and justify the technical and economic feasibility of the design given in Figure 1. [20 Marks]

Part (3)
Present a formal report covering Parts (1) to (2), presenting the engineering judgements you made, a discussion, conclusions, and references (marks are given for five key references). The arguments, discussions, and conclusions must refer to the given case of the assignment. No marks will be given if students merely reproduce conclusions, discussions, or justifications that are commonly available in published literature. [20 Marks]

Students are allowed to make reasonable and realistic assumptions; however, they should be technically feasible and economically justified. Students may use online (or published) technical data in addition to the data given in the assignment; however, the sources of information should be given as references with appropriate citations. The marker will only mark what is in the body of the report, not the contents of the appendices. Long tables of data, such as Excel tables, should be placed in Appendices. Sample calculations must be provided in the body of the report for all repetitive calculations.

Figure 1: Three wind farms in three Micro Grids connecting to a utility power grid. l = length of the feeder. A double-circuit line is connected between Bus 6 and Bus 7.
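The "convergence error tolerance of 10^-5" in the brief implies an iterative solution, e.g. for the bus voltages in Part (1)(iv). A minimal sketch of such a loop, for a single load bus fed through a feeder impedance and solved by fixed-point (Gauss-Seidel style) iteration, is below; all per-unit numbers are illustrative placeholders, not values from Figure 1 or Table 1:

```python
# Fixed-point iteration for the voltage at one load bus, stopping when
# successive iterates agree to within the brief's 1e-5 tolerance.
# Vs, Z and S are invented per-unit values for illustration only.

Vs = complex(1.0, 0.0)        # source-side voltage, per unit
Z = complex(0.02, 0.06)       # feeder impedance, per unit
S = complex(0.8, 0.3)         # load demand P + jQ, per unit
TOL = 1e-5

V = Vs                        # initial guess: flat start
for iteration in range(1000):
    # Load current I = conj(S / V); voltage drop across the feeder is Z*I.
    V_new = Vs - Z * (S / V).conjugate()
    if abs(V_new - V) < TOL:  # convergence check against the tolerance
        V = V_new
        break
    V = V_new

print(f"converged |V| = {abs(V):.5f} pu after {iteration + 1} iterations")
```

The same pattern (update, compare against 1e-5, repeat) carries over to multi-bus networks; only the update equation changes.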

$25.00

[SOLVED] ECON2131/6034 Public Sector Economics Tutorial 3 Welfare economics: Efficiency and equity (Haskel)

ECON2131/6034 Public Sector Economics
Tutorial 3 Welfare economics: Efficiency and equity

1. Discuss. For each of the following policy changes, explain why the change is or is not likely to be a Pareto improvement:
(a) Protecting the automobile industry from cheap foreign imports by imposing quotas on the importation of foreign cars.
(b) Increasing social security benefits, financed by an increase in the payroll tax.
(c) Replacing the primary reliance at the local level on the property tax with state revenues obtained from an income tax.
2. True or false. "Utilitarianism implies that a dollar given to one person is as important as a dollar given to anyone else." Justify your answer.
3. Consider an individual with income I who consumes two goods x and y, with prices px and py respectively. If the individual has Cobb-Douglas preferences, then the indirect utility function and the expenditure function are given by: where U stands for a given level of utility. Assume that initially I = 16, px = 1, and py = 4. Use the information provided to compute the compensating variation (CV) and the equivalent variation (EV) for an increase in px from 1 to 4.
4. We have shown how the welfare costs of changes in a single price can be measured using compensated demand curves. This problem asks you to generalize this to price changes in two (or many) goods.
(a) One way to show these welfare costs graphically is to use the compensated demand curves for goods x and y, assuming that one price rises before the other. Provide the graphical illustration of the compensating variation under two alternative assumptions: (1) px increases first and py increases next; (2) py increases first and px increases next. Make sure to include all relevant labels.
(b) Use the formal expression for the compensating variation (i.e., CV written in terms of expenditure functions) to show that the order in which the price changes are considered does not matter.
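Question 3 can be checked numerically. Since the tutorial's own formulas are not reproduced above, the sketch below assumes the textbook equal-share Cobb-Douglas case, where V(px, py, I) = I / (2*sqrt(px*py)) and E(px, py, U) = 2*U*sqrt(px*py); with a different share parameter the numbers would change:

```python
from math import sqrt

# Equal-share Cobb-Douglas (alpha = 1/2) -- an ASSUMED specification,
# since the problem's own functional forms are omitted in the text above.
def V(px, py, I):              # indirect utility
    return I / (2 * sqrt(px * py))

def E(px, py, U):              # expenditure function (inverse of V in I)
    return 2 * U * sqrt(px * py)

I = 16
px0, py = 1, 4
px1 = 4                        # price of x rises from 1 to 4

U0 = V(px0, py, I)             # utility before the price change
U1 = V(px1, py, I)             # utility after the price change

# CV: extra income needed at the NEW prices to restore the OLD utility.
CV = E(px1, py, U0) - I
# EV: income taken away at the OLD prices that is equivalent to the price rise.
EV = I - E(px0, py, U1)

print(U0, U1, CV, EV)          # 4.0 2.0 16.0 8.0
```

CV > EV here, which is the usual pattern for a price increase of a normal good: compensating at the new, higher prices is more expensive than the equivalent income loss at the old prices.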

$25.00

[SOLVED] FR2209 Coursework instructions 2023/24

FR2209 Coursework instructions: 2023/24

Below are the instructions for the coursework.
Output: create a 10-page report containing the tables, statistics, plots and explanations requested below.
NOTE: doing the coursework exercises in Python and submitting your code is worth 15% of your group mark. You can use any other software you like, but you will lose this 15%.
Deadline: noon, London time, on Friday 12th April 2024.
You will need to use the data contained in this file: FR2209_assessment_data.xlsx. The data you have been given is monthly and contains (adjusted) USD prices of 1000 major global stocks listed in developed countries for the last 10 years. Where required, assume that the risk-free rate is 0.1% per month.

Perform the following statistical tasks:
1) Select, at random, 6 (NOT more or less) individual stocks from the set you have been given. Briefly describe the industries these companies operate in. NOTE: groups should clearly state their selected 6 stocks by their ticker/identifier as in column 1 of FR2209_assessment_data (NOT company names) on the front page of their report. Note also that you probably don't want to choose any stock with a negative mean return over the 10-year period.
2) Create monthly returns for each stock, and compute and tabulate mean returns, return standard deviations and a Sharpe ratio for each stock. Comment on these numbers, comparing them across stocks.
3) Create and tabulate the covariance matrix of returns for your stocks. Comment on the sign and size of the covariances. [Before commenting on the size of the covariances, one might want to convert them into correlations.]

Now perform
the following portfolio selection exercise:
4) Create (in Python, Excel or other statistical packages) a set of long-only portfolio weight vectors, where the weight on each stock ranges between 0 and 1.00, where the weights always sum to 1.0, and where the weights cover, in a systematic fashion, all possible and permissible weight combinations. [For example, you might choose to consider portfolios where weights are multiples of 0.10, e.g., the vector (0.1, 0.6, 0.0, 0.3).] Describe how you have done this step in your report.
5) For each portfolio, compute the expected portfolio return, the standard deviation of the portfolio returns and the portfolio's Sharpe ratio, under the assumption that the historical mean returns and return covariance matrix adequately represent what one might expect in the future. Plot the expected return and standard deviation combinations on a graph to show the feasible set of portfolios. Describe the features of the plot and relate them to the statistics on the individual stock returns.
6) From your set of portfolios, recommend a stocks-only portfolio to each of the following investors. In each case describe the portfolio weights and the resulting portfolio return statistics, and give some intuition as to why the weights take the values that they do. Also mark the location of each portfolio on the plot you created above.
o An investor who simply wants to minimise risk. Call her portfolio M.
o An investor who wants to maximise the Sharpe ratio. Call her portfolio S.
7) A third investor wants to hit an expected return target of 1% per month. Tell her how to optimally select a stock portfolio (from the set you constructed earlier) and how to mix that portfolio with the risk-free asset so as to hit the return target. Show her that this strategy is superior to a strategy where she holds only stocks.
8) Using the matrix results in the lecture materials, derive and present the equation that generates the weights of the portfolios on the portfolio frontier. Superimpose a plot of the frontier on the picture that you created in your answer to question (5).
9) Perform the following performance measurement tasks.
a. Consider the portfolio from (6) that maximises the Sharpe ratio. Compute the returns on this portfolio in each month of the sample. Plot them and describe the statistical features of the return series.
b. A data spreadsheet (i.e., FR2209_assessment_data_factors.xlsx) has been uploaded to Moodle that contains monthly percentage-point returns for a set of five international risk factors for developed markets for our sample period. These are the excess return on the market (XSMKT), a size factor (SMB), a value factor (HML), a profitability factor (RMW) and an investment factor (CMA). Plot the returns on the factors and describe any interesting features.
c. Run a multivariate regression of your portfolio returns (from part (a)) on the factor returns (from part (b)). Present the results, interpret them statistically, and then describe the implications of the regression for the risks that an investor in this portfolio faces.

Excel hints
The following functions might be useful in computing basic statistics plus portfolio risk and return in a reasonably efficient fashion in Excel:
• AVERAGE
• STDEV.S
• COVARIANCE.S
• TRANSPOSE
• MMULT
Doing regression in Excel is fairly straightforward. There are many resources available on the web describing how to run a multivariate regression in Excel.
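Steps (4)-(6) can be sketched in plain Python. The example below enumerates all long-only weight vectors on a 0.1 grid for a hypothetical 3-stock case (the brief asks for 6 stocks; the means and covariances here are invented monthly figures, not estimates from the coursework data):

```python
from itertools import product

# Invented monthly statistics for a 3-stock illustration.
mu = [0.010, 0.008, 0.012]                    # expected monthly returns
cov = [[0.0040, 0.0010, 0.0008],
       [0.0010, 0.0025, 0.0006],
       [0.0008, 0.0006, 0.0060]]
rf = 0.001                                    # risk-free rate, 0.1% per month

# Step (4): weights are multiples of 0.10, non-negative, summing to 1.0.
step = 10
grid = [tuple(g / step for g in w)
        for w in product(range(step + 1), repeat=3)
        if sum(w) == step]

# Step (5): return, standard deviation and Sharpe ratio for each portfolio.
def stats(w):
    ret = sum(wi * mi for wi, mi in zip(w, mu))
    var = sum(w[i] * w[j] * cov[i][j] for i in range(3) for j in range(3))
    sd = var ** 0.5
    return ret, sd, (ret - rf) / sd

# Step (6): portfolio S maximises Sharpe; portfolio M minimises risk.
S_weights = max(grid, key=lambda w: stats(w)[2])
M_weights = min(grid, key=lambda w: stats(w)[1])
print(len(grid), S_weights, M_weights)
```

With 6 stocks the same grid has many more points (the number of compositions of 10 into 6 parts), so a coarser grid or a vectorised implementation may be preferable.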

$25.00

[SOLVED] Module 03 Lab 01 AD 688 Web Analytics

Module 03: Lab 01 – AD 688 Web Analytics

Important Reminder
All tasks for this lab must be performed on your AWS EC2 instance. Ensure you have accepted the GitHub Classroom assignment and cloned your repository before proceeding.

Step 1: GitHub Classroom Repository
To complete this assignment, you must first accept the GitHub Classroom Lab 05 request. Once accepted, follow the instructions below to clone the repository and start working.

Step 2: Clone the Repository on Your EC2 Instance
Option 1: Using Terminal
1. Open your terminal in VS Code.
2. Clone the repository:
git clone https://github.com/YOUR_USERNAME/YOUR_REPO_NAME.git
cd YOUR_REPO_NAME
3. Ensure all work is performed inside this cloned directory.
Option 2: Using VS Code Git Source Control
1. Open Visual Studio Code.
2. Click on the Source Control tab on the left panel.
3. Click on Clone Repository.
4. Paste the repository URL: https://github.com/YOUR_USERNAME/YOUR_REPO_NAME.git
5. Select a local directory where you want to store the repository.
6. Once cloned, open the folder in VS Code and start working within it.

1 Setting Up Your AWS EC2 Environment
Before starting the assignment, you need to set up your EC2 instance with the required Spark ML and SQL packages.
1.1 Update and Upgrade System Packages
Run the following command to ensure your system is up to date and to remove unnecessary packages:
sudo apt update && sudo apt upgrade -y && sudo apt autoremove -y
1.2 Install Java and Scala
Ensure Java and Scala are installed, since Spark depends on them:
sudo apt install -y openjdk-11-jdk scala
1.3 Install Apache Spark
wget https://dlcdn.apache.org/spark/spark-3.5.4/spark-3.5.4-bin-hadoop
sudo tar -xvf spark-3.5.4-bin-hadoop3.tgz -C /opt/
1.4 Set Environment Variables
Note the single quotes on the second line, which keep $SPARK_HOME from being expanded before ~/.bashrc is sourced:
echo 'export SPARK_HOME=/opt/spark-3.5.4-bin-hadoop3' >> ~/.bashrc
echo 'export PATH=$SPARK_HOME/bin:$PATH' >> ~/.bashrc
source ~/.bashrc

2 Objective
In this assignment, you will use Spark SQL to query job postings data from the Lightcast dataset. You will:
1. Load the job postings data into a Spark DataFrame.
2. Register the DataFrame as a temporary SQL table.
3. Run SQL queries to explore job roles, salaries, locations, and trends.
4. Save the query results as either a Quarto (.qmd) or Jupyter Notebook (.ipynb) file.

3 Tasks
3.1 Step 1: Create Your Analysis File and Configure Git Ignore
3.1.1 Preferred Language: Python
For this lab, Python is the preferred language for Spark SQL queries. However, if you are comfortable with R, you may use SparkR. Follow the R Spark SQL resource for guidance.
3.1.2 Using a Virtual Environment (Recommended)
Since managed EC2 instances may require package installations via apt install python3-xyz, it's best to use a Python virtual environment to avoid conflicts:
python3 -m venv .venv
source .venv/bin/activate
This will create an isolated environment where you can install packages without affecting the system Python.
3.1.3 Install Python Dependencies
Activate your virtual environment before installing:
source .venv/bin/activate
pip install pyspark pandas jupyter notebook matplotlib plotly seaborn
3.1.4 Verify Spark Installation
pyspark --version
If the setup is successful, you should see the Spark version.
3.1.5 Configure Git Ignore
1.
Add large dataset files (e.g., lightcast_data.csv) to .gitignore to prevent pushing them to GitHub.
2. Make sure to add the file to .gitignore first, and then commit and sync.
3. MAKE SURE TO SYNC AFTER YOU COMMIT. SKIPPING THIS COULD CAUSE ERRORS.

3.2 Task: Choose Your File Format
2. Inside your repository, create either:
A Quarto file (spark_analysis.qmd), or
A Jupyter Notebook (spark_analysis.ipynb)
3. If using a Jupyter Notebook, add the following YAML metadata in the first cell:
---
title: "Spark SQL Job Data Analysis"
author: "Your Name"
format: html
embed-resources: true
date: "2020-02-25"
date-format: long
execute:
  echo: true
---
4. Only submit one of the two files (.qmd or .ipynb).

3.3 Step 2: Load and Prepare the Dataset
1. Load the dataset into Spark:
gdown https://drive.google.com/uc?id=1V2GCHGt2dkFGqVBeoUFckU4IhUgk4ocQ
You can also copy the file from the lab 4 folder using:
cp ../lab04-yourgithubusername/lightcast_job_postings.csv .
Read the CSV file into a Spark DataFrame and register it as a temporary SQL table:
from pyspark.sql import SparkSession

# Start a Spark session
spark = SparkSession.builder.appName("JobPostingsAnalysis").getOrCreate()

# Load the CSV file into a Spark DataFrame
df = spark.read.option("header", "true").option("inferSchema", "true").csv("lightcast_job_postings.csv")

# Register the DataFrame as a temporary SQL table
df.createOrReplaceTempView("jobs")
2. Verify the data: display the first five rows and show the schema (column names & data types).
# Display the first five rows
df.show(5)

# Show the schema (column names & data types)
df.printSchema()

3.4 Step 3: Run Spark SQL Queries
Answer the following queries using Spark SQL. Make sure your code is visible and the HTML displays the output correctly. For each query:
Write the SQL statement in your file.
Display the query results in a structured format.
Briefly explain the insights from the results.
For example: find the number of job postings for each employment type and order them in descending order.
# Run a Spark SQL query to count job postings per employment type
job_counts_by_type = spark.sql("""
    SELECT EMPLOYMENT_TYPE_NAME, COUNT(*) AS job_count
    FROM jobs
    GROUP BY EMPLOYMENT_TYPE_NAME
    ORDER BY job_count DESC
""")

# Show the result
job_counts_by_type.show()

We can retrieve this information from the job postings table by counting the job ids for each employment type. There are 3686 full-time jobs and 5635 part-time jobs in the dataset.

Here are the questions:
1. How many job postings do we have in the dataset?
2. Find the top 5 most common job titles.
3. Find the average salary for each employment type.
4. Which five states have the most job postings?
5. Calculate the salary range (max - min) for each job title in California.
6. Which top 5 industries have the highest average salaries and more than 100 job postings?

3.5 Submission Instructions
1. Commit and push your work using VS Code Source Control.
2. Submit only your GitHub repository link on Blackboard.

Resources and Installing R
Apache Spark SQL Documentation
Quarto Documentation
Jupyter Notebook Documentation
R Spark SQL Guide

Installing R and Adding Packages
If you prefer using R instead of Python, you can install R and the necessary packages on your EC2 instance.
Install R on Ubuntu:
sudo apt update
sudo apt install -y r-base
Verify R installation. Run the following command to check if R is installed correctly:
R --version
Install R packages using the Ubuntu command line. You can install R packages directly via the command line using the Rscript command:
sudo Rscript -e 'install.packages("tidyverse", repos="http://cran.rstu
For installing multiple packages:
sudo Rscript -e 'install.packages(c("sparklyr", "dplyr", "ggplot2"), r
Connecting R with Spark. To use Spark with R, install sparklyr and configure the connection:
install.packages("sparklyr")
library(sparklyr)
spark_install()
sc <- spark_connect(master = "local")
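The GROUP BY/ORDER BY logic in the example query above can be checked without a Spark cluster: the sketch below runs the same SQL against a throwaway in-memory sqlite3 table. The rows are invented stand-ins, not Lightcast data, so the counts differ from the dataset's:

```python
import sqlite3

# Tiny in-memory stand-in for the `jobs` view; the rows are invented.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE jobs (EMPLOYMENT_TYPE_NAME TEXT)")
con.executemany("INSERT INTO jobs VALUES (?)",
                [("Full time",)] * 4 + [("Part time",)] * 2 + [("Contract",)])

# Same SQL shape as the spark.sql(...) example above.
rows = con.execute("""
    SELECT EMPLOYMENT_TYPE_NAME, COUNT(*) AS job_count
    FROM jobs
    GROUP BY EMPLOYMENT_TYPE_NAME
    ORDER BY job_count DESC
""").fetchall()
print(rows)   # [('Full time', 4), ('Part time', 2), ('Contract', 1)]
```

Because Spark SQL and SQLite both follow standard SQL for simple aggregations, a query validated this way usually transfers to spark.sql() unchanged.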

$25.00

[SOLVED] Introduction to Structured Finance Spring 2025 Assignment 2

Introduction to Structured Finance
Spring 2025 Assignment #2

Part 1: Size and structure the project finance transaction described below.
Purpose: Finance the construction of a hospital using a series of revenue bonds.
Issuer and Series Designation: County of NYU Hospital Facilities Revenue Bonds Series 2025A Bonds
Dated Date: 4/22/2025
Bonds Delivery Date: 4/22/2025
First Interest Payment Date: 11/01/2025
Gross Project Cost: $500,000,000
Drawdown Schedule: $25,000,000 monthly drawdowns starting 5/1/2025 through and including 04/01/2026; $20,000,000 monthly drawdowns starting 05/01/2026 through and including 12/01/2026; and $10,000,000 monthly drawdowns starting 01/01/2027 through and including 04/01/2027.
Project Fund sizing protocol: Net funded assuming a 2.00% investment rate. Interest paid monthly.
Debt Service Reserve Fund: Maximum Annual Debt Service. Invested at 3.50%, with interest earnings during construction deposited to the capitalized interest account. Interest earnings paid semiannually beginning 11/01/25.
Capitalized interest requirement: Fully capitalized through one month beyond the end of the construction period (5/1/27). Net funded assuming a 2.00% reinvestment rate. Interest earnings paid semiannually beginning 11/01/25.
First principal payment date: 5/1/2028
Final principal payment date: 5/1/2055
Interest rates and yields: See below
Underwriter's Discount: 0.70% of par
Cost of Issuance: Fixed costs @ $550,000; variable costs @ 0.065% of par
Other Sources of Funds: Issuer contribution (equity) of $5,000,000
Other Uses of Funds: Bond Anticipation Note takeout of $18,500,000
Bond Structure: Level Annual Debt Service

Part 2: How much equity would be required if annual hospital revenues available for debt service were projected to be $37MM and the required annual bond coverage were 1.10x?
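For Part 2, the first step is backing out the maximum annual debt service that $37MM of revenues can support at 1.10x coverage; with a level-debt-service structure, a supportable par amount then follows from an annuity factor. The sketch below shows that arithmetic; the 5% rate is an assumed placeholder, since the assignment's actual rates and yields are given separately ("See below"):

```python
# Part 2 sizing logic, sketched. The interest rate is an ASSUMPTION for
# illustration; the assignment's own rates are not reproduced here.
revenues = 37_000_000          # annual revenues available for debt service
coverage = 1.10                # required annual bond coverage
rate = 0.05                    # assumed level annual interest rate
n_years = 28                   # principal payments 5/1/2028 through 5/1/2055

# Maximum annual debt service (ADS) the revenues support at 1.10x coverage.
max_ads = revenues / coverage

# Level debt service is an annuity, so supportable par = ADS * annuity factor.
annuity_factor = (1 - (1 + rate) ** -n_years) / rate
supportable_par = max_ads * annuity_factor

print(f"max ADS         = ${max_ads:,.0f}")
print(f"supportable par = ${supportable_par:,.0f}")
# Equity required = total uses of funds minus supportable par and other sources.
```

The equity answer then falls out of the sources-and-uses table: whatever the total uses exceed the supportable par (plus the other sources of funds) must be contributed as equity.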

$25.00

[SOLVED] STATS 108 Statistics for Commerce 2025 Semester One

STATS 108: Statistics for Commerce
2025 Semester One (1253) (15 POINTS)

Course Prescription
The standard Stage I Statistics course for the Faculty of Business and Economics or for Arts students taking Economics courses. Its syllabus is as for STATS 101, but it places more emphasis on examples from commerce.

Course Overview
An ability to gain insight from data enables organisations and individuals to inform their decisions, make predictions and generate new knowledge. Advances in technology allow us new ways of thinking and reasoning in the physical and social sciences, and finance. The purpose of this course is to introduce students to statistical investigation and analysis, and equip them with the skills and confidence needed to navigate the modern world of data. This is a core course in all majors/pathways for Statistics. It is also a supporting course for many other subjects (e.g. Psychology, Economics, Finance, Mathematics, Computer Science, Geography, Biology, Sociology, …). The course covers some material similar to NCEA statistics but at a higher level, and more advanced material is also covered. While some Year 13 statistics or mathematics is helpful, we do not assume or require that you have any formal background in statistics or mathematics. If you have a limited background in mathematics, you may want to consider STATS 100 as an alternative course or as preparation before taking this course.

Course Requirements
Restriction: STATS 101, 102, 107, 191

Capabilities Developed in this Course
Capability 1: People and Place
Capability 2: Sustainability
Capability 3: Knowledge and Practice
Capability 4: Critical Thinking
Capability 5: Solution Seeking
Capability 6: Communication
Capability 7: Collaboration
Capability 8: Ethics and Professionalism
Graduate Profile: Bachelor of Science

Learning Outcomes
By the end of this course, students will be able to:
1.
Recognise different purposes and motivations for making data-based decisions and the consequences of those decisions for affected communities. (Capabilities 2 and 5)
2. Describe ethical, responsible, and culturally-responsive data practices, acknowledging Māori Data Sovereignty. (Capabilities 1 and 8)
3. Use data generated from a range of sources, considering how decisions made affect its quality, diversity, and quantity. (Capabilities 1 and 8)
4. Develop models using data, representations and critical reasoning, considering the applicability and generalisability of models and model-based claims. (Capability 3)
5. Select and apply appropriate technology to analyse data, considering automated and reproducible approaches. (Capability 4)
6. Produce written summaries that communicate the uncertainty associated with data, and interpret and critique communications produced by others. (Capabilities 6 and 7)

Assessments
Assessment Type            Percentage   Classification
Online tasks and quizzes   30%          Individual Coursework
Online test                20%          Individual Coursework
Final Exam                 50%          Individual Examination
3 types                    100%

Learning outcomes addressed by each assessment:
Online tasks and quizzes: 1, 2, 3, 4, 5, 6
Online test: 1, 2, 3, 4, 6
Final Exam: 1, 2, 3, 4, 6

A minimum of 45% is required in the exam to pass, in addition to a minimum of 50% in your overall mark.

Key Topics
● Module 1: Modern data technologies and responsibilities (Datafication, Classification, Prediction, Randomisation)
● Module 2: Making and evaluating claims or decisions based on data (Estimation, Quantification, Confirmation, Explanation)
● Module 3: Designing and communicating about data (Variation, Distribution, Regression, Generalisation)

$25.00

[SOLVED] LSGI2341A Survey Adjustment

LSGI2341A Survey Adjustment
Practical Assignment 1: Level Network Adjustment
Date given: Thursday 6 February 2025
Date due: Friday 7 March 2025 (5 pm)

PREAMBLE
This assignment consists of two parts. Your task is to adjust a level network using different computational tools.

LEARNING OUTCOMES
Upon successful completion of this assignment, you will:
1. be familiar with writing height-difference observation equations;
2. be able to form the system of level network observation equations in matrix form;
3. be able to solve a level network by least-squares using a hand-held calculator/Excel;
4. be able to solve a least-squares problem using MATLAB M-files;
5. be able to analyse the results of a least-squares level network adjustment.

LEVEL NETWORK ADJUSTMENT
A small level network is illustrated in Figure 1. The accompanying survey data are listed in Table 1. All observed height differences have been corrected for systematic error effects and can be considered uncorrelated. The height of the known station (benchmark) 1 is 257.891 m.
Figure 1. Level Network Configuration.
Table 1. Level Network Observation Data.
The standard deviation for each observation, in millimetres, is given by the formula where L is the leg length in kilometres.

PART 1 — PARAMETRIC MODEL ADJUSTMENT BY CALCULATOR/EXCEL
Adjust the level network by the parametric method using your hand-held calculator. If you do not have access to a calculator that is capable of matrix multiplication and/or inversion, you can use Microsoft Excel instead (use the MMULT and MINVERSE functions). The following steps should be followed in order to accomplish this goal:
1. Choose appropriate approximate values for the heights of the unknown stations.
2. Write out the system of observation equations in matrix form (r̂ = Ax̂ + w). Explicitly indicate all matrix elements and dimensions.
3. Write out the covariance matrix of the observations Cl and the weight matrix P.
Indicate all matrix elements and dimensions.
4. Using the matrix formulae presented in the lectures, solve for the estimated parameters, residuals and adjusted observations. Also calculate the estimated (a posteriori) variance factor and the covariance matrix of the parameters.

PART 2 — PARAMETRIC MODEL ADJUSTMENT BY MATLAB
Compose a set of M-files to perform the parametric adjustment of the level network using MATLAB. All data given in Table 1 must be read in by your M-file from a text file (i.e., no hard-coding). Solve for the estimated parameters, residuals and adjusted observations and their respective covariance matrices, as well as the estimated variance factor. Also calculate the correlation coefficient matrix for each covariance matrix.

ANALYSIS AND QUESTIONS
1. Using the covariance matrix of parameters, neatly plot the point error bars onto the network map shown in Figure 1 (by hand or digitally; see lecture 8, slide 9, for an example of point error bars; the variable c mentioned on that slide can be set equal to 1). Explain the nature and cause of any trends that may be visible.
2. Analyse the correlation matrix of parameters. Explain what causes some pairs of parameters in this network to be strongly correlated and others to be weakly correlated.
3. Which of the residuals are most correlated, and how can this be seen from the correlation matrix of residuals? Explain why these residuals are most correlated by analysing the network geometry.
4. What are the main weaknesses in the network and how could the network be improved? Give specific suggestions.

SUBMISSION
Before the due date and time you must submit two files per group:
1) A report in pdf format containing:
   o Results from Part 1 (may be handwritten and scanned)
   o Results from Part 2 presented in tabular form
as demonstrated on the last page
   o Your MATLAB code
   o Answers to the questions
2) A zip-file containing:
   o All M-files and input files used to complete Part 2, and the Excel file used to complete Part 1 (if applicable)

ASSESSMENT CRITERIA
Your submission for this assignment will be marked based on the criteria shown in Table 2. A complete rubric is available on Blackboard.

Table 2. Assessment Criteria.
Calculator/Excel results: 15%
MATLAB results: 15%
MATLAB code: 20%
Analysis and questions: 40%
Report presentation, grammar and spelling: 10%
Total: 100%

Part 1: Calculator/Excel results (by yourself): approximate heights; system of observation equations (design matrix A and misclosure vector); covariance matrix of observations; weight matrix; estimated parameters; residuals; adjusted observations; estimated variance factor; covariance matrix of parameters.
MATLAB results: estimated parameters, residuals, adjusted observations, estimated variance factor, their covariance matrices, and the correlation coefficient matrices of parameters, residuals and adjusted observations.
MATLAB code: all MATLAB code that you use.
Analysis and questions: answers to all questions.
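For reference, the Part 1 matrix formulae can be sketched numerically. Since Figure 1 and Table 1 are not reproduced here, the three-leg loop network, the observed height differences and the standard deviations below are entirely hypothetical; only the formulae (normal equations, residuals, variance factor, parameter covariance) follow the assignment, which itself must be done by calculator/Excel and MATLAB.

```python
import numpy as np

# Hypothetical loop: benchmark H1 = 257.891 m (known), unknowns H2, H3.
# Observed height differences (m): l1 = H2-H1, l2 = H3-H2, l3 = H1-H3.
l = np.array([1.234, 0.567, -1.799])          # made-up observations
A = np.array([[ 1.0,  0.0],                    # design matrix (rows: obs,
              [-1.0,  1.0],                    #  cols: corrections to H2, H3)
              [ 0.0, -1.0]])
H1 = 257.891
x0 = np.array([H1 + l[0], H1 + l[0] + l[1]])  # approximate heights
f0 = np.array([x0[0] - H1, x0[1] - x0[0], H1 - x0[1]])
w = f0 - l                                    # misclosure vector

sigma = np.array([2.0, 2.0, 2.0]) / 1000.0    # hypothetical std devs (m)
P = np.diag(1.0 / sigma**2)                   # weight matrix

N = A.T @ P @ A                               # normal equations matrix
delta = np.linalg.solve(N, -A.T @ P @ w)      # corrections to x0
x_hat = x0 + delta                            # estimated heights
r_hat = A @ delta + w                         # residuals
dof = A.shape[0] - A.shape[1]                 # degrees of freedom
s0_sq = (r_hat.T @ P @ r_hat) / dof           # a posteriori variance factor
Cx = s0_sq * np.linalg.inv(N)                 # covariance of parameters
```

With equal weights the 2 mm loop misclosure is distributed evenly over the three legs, so each residual comes out to one third of it, and the adjusted observations close the loop exactly.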


[SOLVED] Lab22 Sockets (Java)

Java Lab22: Sockets

This lab practices with sockets, clients, and servers. Test each problem by running the server code first, then the client code.

1. Create a new Java project called Lab22. Download the two .java files into the src directory. There are two main programs, one each in the two classes Client and Server. Run Server first, then run Client. Their output will be in two separate console windows, so either line them up side-by-side or just switch between them. Some simple messages are hard-coded in each part, so you should see them printed in the console.

2. In the Client, change the hard-coded message: prompt the user (that's you) to enter a String message and get it with a keyboard Scanner; the code to send it to the server remains the same. Do the same on the Server side: after a message is received, prompt the user (that's still you) to enter a reply; the code to return it to the client stays the same.

3. Now put both parts in loops. On the client side, if the user enters "QUIT", send that message and then exit the loop. On the server side, when "QUIT" is received, print it out and exit the loop.

4. In the server code, add: int port = 8001; and change the parameter in the ServerSocket constructor to port instead of hard-coding 8001; test that. Then add this code right before the ServerSocket constructor call: if (args.length == 1) { port = Integer.parseInt(args[0]); } Next, go to Run->Edit Configurations. (If nothing shows up, run the server, then try Run->Edit Configurations again; same with the client.) Make sure Server is chosen on the left. In the Configuration tab on the right, type 8001 in the Program Arguments text box. Test it.

5. Now try the same thing with the client code: create two variables, String address and int port, set to the default values "localhost" and 8001; use them as the two parameters in the Socket constructor.
Then add similar code as in #4 right before the Socket constructor, but the if test should be args.length == 2, and you'll set address to args[0] and port to args[1], converting the latter to an int. Go to Run->Edit Configurations; make sure Client is chosen, and type "localhost" and 8001 in the Program Arguments text box. Test it.

6. Create this method in the server: public static void handleClient(Socket clientConnection) throws IOException. Copy and paste the code from the line after accept() to the end of the loop into this method; just in case, comment out the code you've copied in main instead of deleting it. Put a call to handleClient(clientConnection) in its place. Make sure it runs correctly. The deliverable is the code up to this point. Zip the two files and upload them to Canvas.

7. Find another person to pair up with. One person will run their server code, and the other person will run their client code. The person running the server code should look up their IP address. On a Mac, click the Apple icon (upper left-hand corner), choose System Preferences, and click the Network icon. The IP address should be listed. On Windows, click the Windows icon (lower left-hand corner), click Settings, click Network & Internet, and click View Network Properties. The IPv4 address should be listed (don't use the /# at the end, just the four-part number). Tell your partner that address; they'll need to copy that number into the address field in #5. Does it work?

8. Now switch roles and try #7 again.
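The client/server message loop built in steps 1-3 can be sketched outside Java as well. Below is a minimal Python analogue for illustration only; the lab itself must be done in Java, and all names here (serve, run_client, the reply format) are made up. The server echoes a reply for each message and both sides stop at "QUIT":

```python
import socket
import threading

HOST, QUIT = "127.0.0.1", "QUIT"

def serve(server_sock, transcript):
    # Accept one client and reply to each line until QUIT arrives (cf. step 3).
    conn, _ = server_sock.accept()
    with conn:
        f = conn.makefile("rw")
        for line in f:
            msg = line.strip()
            transcript.append(msg)
            if msg == QUIT:
                break
            f.write(f"server got: {msg}\n")
            f.flush()

def run_client(port, messages):
    # Send each message; after non-QUIT messages, read the server's reply.
    replies = []
    with socket.create_connection((HOST, port)) as s:
        f = s.makefile("rw")
        for msg in messages:
            f.write(msg + "\n")
            f.flush()
            if msg == QUIT:
                break
            replies.append(f.readline().strip())
    return replies

# Bind to port 0 so the OS picks a free port (the lab hard-codes 8001).
server_sock = socket.socket()
server_sock.bind((HOST, 0))
server_sock.listen(1)
port = server_sock.getsockname()[1]

transcript = []
t = threading.Thread(target=serve, args=(server_sock, transcript))
t.start()
replies = run_client(port, ["hello", "world", QUIT])
t.join()
server_sock.close()
```

The makefile() wrappers play the role of Java's BufferedReader/PrintWriter pair, and the thread stands in for running Server and Client in two separate consoles.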


[SOLVED] EECS31L Introduction to Digital Logic Design Lab Winter 2025 Lab 4

Introduction to Digital Logic Design Lab
EECS31L Winter 2025 Lab 4 (100 Points)

Through this course, we want to design a RISC-V single-cycle processor. In this lab, we will work on the datapath part of the processor. In Part 1, we again review the RISC-V datapath. In Part 2 we talk about how to design the data memory, in Part 3 we talk about designing the datapath, and finally in Part 4 we test the datapath.

1 Datapath

Figure 1: RISC-V Datapath.

Figure 1 shows the datapath of a RISC-V single-cycle processor. The instruction execution starts by using the program counter to supply the instruction address to the instruction memory. After the instruction is fetched, the register operands used by an instruction are specified by fields of that instruction. Once the register operands have been fetched, they can be operated on to compute a memory address (for a load or store), to compute an arithmetic result (for an integer arithmetic-logical instruction), or an equality check (for a branch). If the instruction is an arithmetic-logical instruction, the result from the ALU must be written to a register. If the operation is a load or store, the ALU result is used as an address to either store a value from the registers or load a value from memory into the registers. The result from the ALU or memory is written back into the register file. The blue lines interconnecting the functional units represent buses, which consist of multiple signals. The arrows are used to guide the reader in knowing how the information flows. Since signal lines may cross, we explicitly show when crossing lines are connected by the presence of a dot where the lines cross.

Some of the inputs (RegWrite, ALUSrc, ALUCC, MemRead, MemWrite, MemtoReg) are control signals which are derived by a module named "Control". The control unit will be designed in Lab 5. Here in this lab, assume you have all the control signals as inputs.
Table 1 shows the list of instructions that our datapath supports.

Table 1: Instruction Set.

Note: along with the provided instructions in Table 1, your datapath needs to support the "lw" and "sw" instructions too. Tables 2 and 3 show the format of these two data-transfer instructions.

Table 2: Instruction Set (lw).
Table 3: Instruction Set (sw).

For this part, you need to save these two instructions into the "Instruction Memory" that you designed in Lab 3. Add the following two instructions into the instruction memory (as you did in Lab 3):

memory[18] = 32'h02b02823; // sw r11, 48(r0)   alu_result = 32'h00000030
memory[19] = 32'h03002603; // lw r12, 48(r0)   alu_result = 32'h00000030, r12 = 32'h00000005

2 Lower Level Modules

As shown in Figure 1, there is a top module (Datapath) and nine lower-level modules (FlipFlop, Adder, Instr_mem, RegFile, Imm_Gen, Mux (two instantiations), ALU, data_mem). Eight sub-modules were designed in the previous labs, and in this lab we start by designing the last sub-module, which is data_mem.

Note: the 32-bit ALU design source is provided for you in Section 2.3. You are welcome to use your own design source from Lab 2, if your design source got full points.

2.1 Data Memory

Like the Instruction Memory (refer to the previous lab), the Data Memory in our processor is byte addressable. It stores 128 words of 32 bits each (128 x 32). To address 128 x 4 = 512 bytes, 9 bits are required for the address line. These 9 bits come from the 9 LSBs of the output of the ALU (ALU_Result). To read data, we need an address (addr[8:2], which uses bits 2 to 8 of the 9-bit addr) and the read enable signal (MemRead). To write data, we need an address (addr[8:2]), the write enable signal (MemWrite), and the data to write (write_data).

Note: Use the provided module definition to design your Data Memory. Otherwise, your submission will not be considered for grading.
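As a side illustration of the byte-to-word addressing arithmetic just described (not part of the Verilog deliverable), the slicing addr[8:2] simply drops the two byte-offset bits and selects one of the 128 words. A Python sketch, with hypothetical names memory, word_index, mem_read and mem_write:

```python
# 128 words x 32 bits = 512 bytes, so 9 address bits; Verilog addr[8:2]
# drops the two byte-offset bits and selects one of the 128 words.
memory = [0] * 128

def word_index(addr):
    # Equivalent of taking bits 8..2 of a 9-bit byte address.
    return (addr >> 2) & 0x7F

def mem_write(addr, write_data, MemWrite=True):
    # Write is gated by the MemWrite enable, as in the DataMem module.
    if MemWrite:
        memory[word_index(addr)] = write_data & 0xFFFFFFFF

def mem_read(addr, MemRead=True):
    # Read is gated by the MemRead enable.
    return memory[word_index(addr)] if MemRead else 0

# The two instructions from Section 1: sw then lw at byte address 48 (word 12).
mem_write(48, 0x00000005)   # sw r11, 48(r0), assuming r11 holds 5
loaded = mem_read(48)       # lw r12, 48(r0)
```

This matches the lw/sw example above: ALU result 32'h00000030 (byte address 48) lands on word index 12.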
Code 1: Data Memory

`timescale 1ns / 1ps
// Module definition
module DataMem (MemRead, MemWrite, addr, write_data, read_data);
    // Define I/O ports

    // Describe data_mem behavior.

endmodule // data_mem

2.2 Mux

The MUX on the output of the Data Memory decides whether the data written to the register file should come from the ALU or from the Data Memory (refer to Figure 1). You can use the 2-to-1 Mux design from Lab 1. The only difference is that for this datapath, you need to consider the size of the input and output data for each 2-to-1 Mux. For example, for the last 2-to-1 Mux in the datapath (refer to Figure 1), the size of the data input is 32 bits. Follow the same rule for the Mux before the ALU in the datapath.

2.3 32-bit ALU

Code 2: 32-bit ALU

module alu_32 (
    input  [31:0] A_in, B_in,        // ALU 32-bit inputs
    input  [3:0]  ALU_Sel,           // ALU 4-bit selection
    output [31:0] ALU_Out,           // ALU 32-bit output
    output reg    Carry_Out,
    output        Zero,              // 1-bit zero flag
    output reg    Overflow = 1'b0    // 1-bit overflow flag
);
    reg [31:0] ALU_Result;
    reg [32:0] temp;
    reg [32:0] twos_com;  // holds 2's complement of second ALU source

    assign ALU_Out = ALU_Result;            // ALU out
    assign Zero = (ALU_Result == 0);        // zero flag

    always @(*)
    begin
        Overflow = 1'b0;
        Carry_Out = 1'b0;
        case (ALU_Sel)
            4'b0000: // bit-wise and
                ALU_Result = A_in & B_in;

            4'b0001: // bit-wise or
                ALU_Result = A_in | B_in;

            4'b0010: // signed addition with overflow and carry-out checking
            begin
                ALU_Result = $signed(A_in) + $signed(B_in);
                temp = {1'b0, A_in} + {1'b0, B_in};
                Carry_Out = temp[32];
                if ((A_in[31] & B_in[31] & ~ALU_Out[31]) |
                    (~A_in[31] & ~B_in[31] & ALU_Out[31]))
                    Overflow = 1'b1;
                else
                    Overflow = 1'b0;
            end

            4'b0110: // signed subtraction with overflow checking
            begin
                ALU_Result = $signed(A_in) - $signed(B_in);
                twos_com = ~(B_in) + 1'b1;
                if ((A_in[31] & twos_com[31] & ~ALU_Out[31]) |
                    (~A_in[31] & ~twos_com[31] & ALU_Out[31]))
                    Overflow = 1'b1;
                else
                    Overflow = 1'b0;
            end

            4'b0111: // signed less-than comparison
                ALU_Result = ($signed(A_in) < $signed(B_in)) ? 32'd1 : 32'd0;

            4'b1100: // bit-wise nor
                ALU_Result = ~(A_in | B_in);

            4'b1111: // equal comparison
                ALU_Result = (A_in == B_in) ? 32'd1 : 32'd0;

            default: ALU_Result = A_in + B_in;
        endcase
    end

endmodule

2.4 Adder

You can directly use the addition operator in Verilog '+' to calculate the next pc (PCPlus4 in Figure 1). Don't use a separate adder module to calculate PCPlus4.

3 Higher Level Module

Now that we have designed all of the submodules, we can use them as components and design the Datapath. Here again, you see the Datapath. Blue lines are wires used to connect the submodules. Define these blue lines as "wire" and connect the components to complete the Datapath. Use the following code for the module definition of your Datapath. For the Datapath code, we used lowercase letters for input/output naming. You need to use the exact code samples provided for you to design the Datapath and tb_Datapath. Otherwise, your submission will not be considered for grading.

Careful!
* The port names in the diagram are slightly different than the port names in the module definition. When writing your code, please use the port names provided in the module definition. (The diagram is just provided for visual purposes.)
* Pay attention to the orientation of the MUXes in the figure (which signals are passed to D0 & D1).
* PC increments by 4 bytes to get the next instruction because RISC-V uses byte-addressable memory.
Code 3: Datapath

module data_path #(
    parameter PC_W = 8,         // Program Counter
    parameter INS_W = 32,       // Instruction Width
    parameter RF_ADDRESS = 5,   // Register File Address
    parameter DATA_W = 32,      // Data WriteData
    parameter DM_ADDRESS = 9,   // Data Memory Address
    parameter ALU_CC_W = 4      // ALU Control Code Width
)(
    input clk,         // CLK in Datapath figure
    input reset,       // Reset in Datapath figure
    input reg_write,   // RegWrite in Datapath figure
    input mem2reg,     // MemtoReg in Datapath figure
    input alu_src,     // ALUSrc in Datapath figure
    input mem_write,   // MemWrite in Datapath figure
    input mem_read,    // MemRead in Datapath figure
    input [ALU_CC_W-1:0] alu_cc,     // ALUCC in Datapath figure
    output [6:0] opcode,             // opcode in Datapath figure
    output [6:0] funct7,             // Funct7 in Datapath figure
    output [2:0] funct3,             // Funct3 in Datapath figure
    output [DATA_W-1:0] alu_result   // Datapath_Result in Datapath figure
);

    // Write your code here

endmodule // Datapath

Important note: we want you to have separate source files for each of the datapath sub-modules.

4 Test the Datapath

Use the code below to test your Datapath design.

Code 4: tb_Datapath

module dp_tb_top ();

    /** Clock & reset **/
    reg clk, rst;
    always begin
        #10;
        clk = ~clk;
    end

    initial begin
        clk = 0;
        @(posedge clk);
        rst = 1;
        @(posedge clk);
        rst = 0;
    end

    /** DUT Instantiation **/
    wire reg_write;
    wire mem2reg;
    wire alu_src;
    wire mem_write;
    wire mem_read;
    wire [3:0] alu_cc;
    wire [6:0] opcode;
    wire [6:0] funct7;
    wire [2:0] funct3;
    wire [31:0] alu_result;

    data_path dp_inst (
        .clk        (clk),
        .reset      (rst),
        .reg_write  (reg_write),
        .mem2reg    (mem2reg),
        .alu_src    (alu_src),
        .mem_write  (mem_write),
        .mem_read   (mem_read),
        .alu_cc     (alu_cc),
        .opcode     (opcode),
        .funct7     (funct7),
        .funct3     (funct3),
        .alu_result (alu_result)
    );

    /** Stimulus **/
    wire [6:0] R_TYPE, LW, SW, RTypeI;

    assign R_TYPE = 7'b0110011;
    assign LW     = 7'b0000011;
    assign SW     = 7'b0100011;
    assign RTypeI = 7'b0010011;

    assign alu_src   = (opcode == LW || opcode == SW || opcode == RTypeI);
    assign mem2reg   = (opcode == LW);
    assign reg_write = (opcode == R_TYPE || opcode == LW || opcode == RTypeI);
    assign mem_read  = (opcode == LW);
    assign mem_write = (opcode == SW);

    assign alu_cc =
        ((opcode == R_TYPE || opcode == RTypeI)
            && (funct7 == 7'b0000000) && (funct3 == 3'b000)) ? 4'b0010 :
        ((opcode == R_TYPE || opcode == RTypeI)
            && (funct7 == 7'b0100000)) ? 4'b0110 :
        ((opcode == R_TYPE || opcode == RTypeI)
            && (funct7 == 7'b0000000) && (funct3 == 3'b100)) ? 4'b1100 :
        ((opcode == R_TYPE || opcode == RTypeI)
            && (funct7 == 7'b0000000) && (funct3 == 3'b110)) ? 4'b0001 :
        ((opcode == R_TYPE || opcode == RTypeI)
            && (funct7 == 7'b0000000) && (funct3 == 3'b111)) ? 4'b0000 :
        ((opcode == R_TYPE || opcode == RTypeI)
            && (funct7 == 7'b0000000) && (funct3 == 3'b010)) ? 4'b0111 :
        ((opcode == R_TYPE || opcode == RTypeI)
            && (funct3 == 3'b100)) ? 4'b1100 :
        ((opcode == R_TYPE || opcode == RTypeI)
            && (funct3 == 3'b110)) ? 4'b0001 :
        ((opcode == R_TYPE || opcode == RTypeI)
            && (funct3 == 3'b010)) ? 4'b0111 :
        ((opcode == LW || opcode == SW)
            && (funct3 == 3'b010)) ? 4'b0010 : 0;

    initial begin
        #420;
        $finish;
    end

endmodule

Check the outputs (opcode, funct3, funct7, alu_result) to see if they are correct. Put a screenshot of the wave in your report. Here you see the screenshot of the wave for the datapath:

Note 1: After running the simulation, you can add modules/signals from the Scope tab to add more signals. After adding a signal to the wave window, re-run the simulation to show the updated values.
Note 2: For Windows users, the use of the slicing operator on the 'instruction' wire to pass values to the register file input ports 'rg_rd_addr1', 'rg_rd_addr2' and 'rg_wrt_addr' (as shown below) might result in a faulty output waveform.

Code 5: Problematic Register File Instance

RegFile rf ( ...,
    .rg_wrt_addr (instruction[11:7]),
    .rg_rd_addr1 (instruction[19:15]),
    .rg_rd_addr2 (instruction[24:20]),
    ...
);

Although this issue will not affect the correctness of the module's functionality, it can be avoided by creating wires, assigning them to the sliced values, and finally connecting these new wires to the ports of the register file as follows:

Code 6: Solution

wire [RF_ADDRESS-1:0] rd_rg_wrt_wire;
wire [RF_ADDRESS-1:0] rd_rg_addr_wire1;
wire [RF_ADDRESS-1:0] rd_rg_addr_wire2;

assign rd_rg_wrt_wire   = instruction[11:7];
assign rd_rg_addr_wire1 = instruction[19:15];
assign rd_rg_addr_wire2 = instruction[24:20];

RegFile rf ( ...,
    .rg_wrt_addr (rd_rg_wrt_wire),
    .rg_rd_addr1 (rd_rg_addr_wire1),
    .rg_rd_addr2 (rd_rg_addr_wire2),
    ...
);

5 Assignment Deliverables

Your submission should include the following:
• Block designs and testbenches (FlipFlop.v, InstMem.v, RegFile.v, ImmGen.v, Mux.v, ALU.v, DataMem.v, Datapath.v, tb_Datapath.v). Use the files we provided for you with this lab.
• For the Mux design, consider the size of the data inputs and change your previous designs to match the processor datapath (see Figure 1).
• A report in pdf format. Follow the rules in the "sample report".

Note 1: Compress all files (10 files: 9 .v files + report.pdf) into a zip and upload it to CANVAS before the deadline. Make sure that you submit one .zip file; otherwise your submission will not be considered for grading.

Note 2: Use the code samples that are given in the lab description. The module part of your code should look exactly like the code sample; otherwise your submission will not be considered for grading.


[SOLVED] CEG8526 Hydrosystems Modelling and Management P3 Time series modelling exercises

CEG8526: Hydrosystems Modelling and Management
P3: Time series modelling exercises

Practical summary
In this practical you will test your knowledge of time series models acquired in the lectures. By answering some short tutorial questions you will assess your understanding of the material and identify areas for clarification and reinforcement. You will also use a simple, pre-existing Markov model coded in Python to generate stochastic rainfall time series and explore how the parameters of the model change the characteristics of the simulated series.

Aim and learning outcomes
By the end of this practical you should be able to:
• Understand the principles of Markov models and their applications in modelling rainfall time series.
• Use a pre-built Markov and rainfall generator model to simulate rainfall data based on historical input.
• Adjust the input parameters of the Markov model and rainfall generator to simulate various scenarios, including under climate change.
• Analyse the output from a Markov model and rainfall generator to derive meaningful insights about rainfall behaviour.
• Recognise the importance and limitations of using stochastic models in environmental sciences and engineering and for decision-making.

Tasks

1 Short questions
1.1 Which of the following factors is typically simulated by a weather generator?
a. Temperature
b. Precipitation
c. Wind speed
d. All of the above
1.2 Weather generators are statistical models that replicate weather sequences based on historical data. True/False
1.3 A weather generator can be used to assess the impact of climate change on local agriculture. True/False
1.4 What type of data is essential for calibrating a stochastic rainfall model?
a. Satellite images
b. Historical rainfall records
c. Climate change factors for daily rainfall
d. Wind direction data
1.5 How does a stochastic rainfall model differ from a deterministic rainfall model?
1.6 What are the assumptions of the AR(1) model?
2 A simple Markov model

Analysis tool
You should use the Google Colab notebook (P3_Rainfall_model.ipynb) provided on Canvas for analysis and plots. It is recommended that you don't use Microsoft Edge due to caching issues. Upload the Python notebook file and input files provided on Canvas to your own Colab environment.

Part A
Review the code in Part A and then run the Markov model.
2.1 Which two probability distributions are sampled in the model? You might need to look up the function np.random.rand() to find out one of these. What is each distribution used to represent?
Run the code with the default values of pww and pdd to produce the time series, and the next block of code to plot the frequency distribution. Next, run the model again. You should see that the time series is different even though you haven't changed anything. Why is this the case?
2.2 Using the two plots shown, describe the frequency distribution of rainfall amounts.
Now change the parameters of the Markov model so that this time pww = 0.9 and pdd = 0.1, and rerun the model.
2.3 When you change these parameters, how do the characteristics of the time series and the distribution of rainfall change, and why?
Next, see if you can download the simulated rainfall time series as a .csv file. Open the file in Excel and calculate the mean rainfall in the simulation. Then, go back to the model in your notebook, change the inverse_lambda parameter and rerun the model.
2.4 What do you note happens to the rainfall distribution? How does increasing and decreasing the parameter value change the rainfall distribution? Download this data (using a different filename), calculate the mean rainfall for that data, and compare it with your previous simulation with a different inverse_lambda value. How do the values compare?

Part B
The next section of code, for an input rainfall series, calculates the transition matrix, some summary statistics, and the scale parameter of a fitted exponential distribution.
Run this section of code.
2.5 What are the 6 values in the first line of output? We can use these to see how well the time series simulation has reproduced the required characteristics of the time series from the previous section.
2.6 What are the values of pww and pwd in the simulated series?

Part C
So far, we have simply input the parameters we wanted to generate the rainfall series with. In practice, we want to produce simulations with statistical properties that match some observed time series. The next section of code calculates the wet/dry transition matrix based on an observed data series.
Using Section C of the worksheet, import the rainfall data for the Wansbeck catchment (wansbeck-daily-rainfall-4stations-2003-2012.csv, provided on Canvas). You will need to either copy this file into your workspace or use the code provided to upload it to your workspace automatically.
Run the code in Part C and compare the summary statistics and transition matrix from the observed data and the simulated rainfall series. Run 3 simulations in total and write down the statistics for each simulation in the table provided.
2.7 Note down some comments on the skill of the model in reproducing the observed rainfall characteristics.
2.8 Why do the simulated statistics vary each time? Why might generating multiple simulations to produce an ensemble of rainfall series be useful?
2.9 What are the limitations of this model and how might you try to improve these simulations?
Next, download your simulated file, open it in Excel and calculate the mean rainfall amount. Note that because the input file had a header, the output file has one too, and this should not be included in your calculation.
Now, we want you to perturb the rainfall to account for climate change. You will note a parameter called mean_rainfall_change_factor which we can use to scale the inverse_lambda parameter. Initially this is set to a value of 1.0, which means that if applied there is no change.
If we want to produce a simulated rainfall series with an increase in mean rainfall of 10%, we can change it to a value of 1.1. If we wanted to decrease it by 10%, we could set the value to 0.9. We can therefore use this approach to downscale coarse-resolution projections from climate models, like those provided by UKCP18, by identifying a (series of) change factor(s) to perturb our model and then applying these to our simulation of an observed series for a specific location.
Using output from UKCP18, identify a plausible set of climate change factors for mean precipitation for this catchment and then use those to perturb the model by adjusting the mean_rainfall_change_factor. You could use the output shown in Figure 1, or alternatively derive projections from the UKCP18 website yourself using an appropriate product. Download the simulation, open it in Excel and compare the mean rainfall amount for the previously downloaded file without climate change with this perturbed file.
2.10 The changes shown in Figure 1 represent a change factor calculated on annual amounts. Looking at Figures 2 and 3, which represent projected future seasonal changes, what do you note about the difference in projected changes for summer and winter? How might you improve the rainfall model to reflect these changes? What other features of rainfall might change in the future under climate change, as well as the mean amount? How might you incorporate this in the model?
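Pulling Parts A-C together, a two-state Markov rainfall generator, the transition-probability check, and the change-factor perturbation can be sketched as below. The parameter names pww, pdd, inverse_lambda and mean_rainfall_change_factor come from the practical; everything else is an assumption about how such a model is typically written, not a copy of P3_Rainfall_model.ipynb.

```python
import numpy as np

def markov_rain(n_days, pww, pdd, inverse_lambda, seed=None):
    """Part A: wet/dry occurrence from a first-order Markov chain,
    wet-day amounts from an exponential with mean inverse_lambda (mm)."""
    rng = np.random.default_rng(seed)
    rain = np.zeros(n_days)
    wet = False
    for t in range(n_days):
        u = rng.random()                       # uniform draw, cf. np.random.rand()
        wet = (u < pww) if wet else (u >= pdd)
        if wet:
            rain[t] = rng.exponential(inverse_lambda)
    return rain

def transition_probs(rain):
    """Part B/C: estimate pww and pdd by counting day-to-day transitions."""
    wet = rain > 0
    prev, curr = wet[:-1], wet[1:]
    pww = (prev & curr).sum() / prev.sum()
    pdd = (~prev & ~curr).sum() / (~prev).sum()
    return pww, pdd

# Generate a long series and check it reproduces the input parameters:
series = markov_rain(20000, pww=0.7, pdd=0.7, inverse_lambda=5.0, seed=42)
pww_hat, pdd_hat = transition_probs(series)

# Climate-change perturbation: scale the mean wet-day rainfall by +10%.
mean_rainfall_change_factor = 1.1
perturbed = markov_rain(20000, pww=0.7, pdd=0.7,
                        inverse_lambda=5.0 * mean_rainfall_change_factor,
                        seed=42)
```

Because each run draws fresh random numbers, the simulated statistics vary between runs unless the seed is fixed, which is exactly why question 2.8 asks about ensembles; and with a fixed seed the change factor scales the wet-day amounts while leaving the wet/dry occurrence pattern unchanged.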


[SOLVED] ARE 136 Guidelines for Final Project Marketing Plan

ARE 136 Guidelines for Final Project (Marketing Plan)

Learning objective: Learning is not a spectator sport and requires a lot of practice. This group project gives you the opportunity to apply the material covered throughout the quarter and deepen your understanding. The goal is to create a compelling and consistent marketing plan that proposes and credibly supports specific IBP strategies aimed at increasing fruit or vegetable consumption (overall category demand or selective demand stimulation).

Topic: Your marketing plan will be written for the produce industry, promoting one of the following:
• A fruit or vegetable category (e.g., Hass Avocados).
• A branded fruit or vegetable (e.g., Driscoll's Berries).
• A fruit or vegetable product (e.g., Pom Pomegranate Juice).
• A more general public awareness campaign (e.g., FNV campaign).

Note: The examples given here are examples covered in class. I have worked and continue to work closely with the Hass Avocado Board and the marketing team at Driscoll's. You are free to come up with ideas in any other produce category (including floral) or brand, introduce a new fruit or vegetable to the U.S. market, or promote a processed and packaged product that lists fruits and/or vegetables as an ingredient. Your plan can be pitched to a marketing order or board representing an entire product category, to an existing brand, or to investors (e.g., you are pitching a start-up). If you are unsure whether your ideas fit within the parameters described here, please do not hesitate to ask and discuss your ideas with me.

Structure: Your marketing plan needs to clearly communicate: To whom are you presenting your marketing plan (e.g., Hass Avocado Board, Driscoll's, The Wonderful Company, investors, etc.) and why (would they invest in your proposed marketing)? Who is your target audience for your promoted product and why (would these consumers buy your product)? What do you want to accomplish?
What are your specific objectives and how do you plan to reach those goals? What are your main messages and how do you communicate them? How will you measure your success and prove that you reached your goals?

Please follow the general structure, either by creating sections or by addressing each in a continuous text:
1. Executive Summary and Objectives (Clearly state your objectives and your why.)
2. Situation analysis
3. Budgeting (This section is optional)
4. Strategy
5. Execution
6. Evaluation

Please also note that you will have to support your creative ideas with a thorough analysis and data whenever available.

Groups: You will be working on this project in groups of 5-8 students. Groups will be formed during the first section meeting and final group assignments will be posted on Canvas by the end of the second week. Feel free to assign specific tasks to each other, but please be advised that each of you needs to be familiar with and contribute to all aspects of this project.

Deliverables: Your group has two options when completing this final project:
• Introduce your ideas in a visually informative and creative presentation (during lectures). We have room for one presentation each (no more than 15 minutes and 10 slides) on March 4, 6, and 11. Groups can sign up on a first-come, first-served basis by sending me an email or talking to me after lecture or section. If you choose this option, please make sure you share your presentation slides at least an hour prior to your assigned lecture time that day. Please also upload your slides as a pdf file as your final project submission on Canvas. Please note that not all group members have to present, but everyone needs to contribute to the completion of the presentation (e.g., prepare the slides, etc.).
• Summarize your ideas in a persuasive written report (no more than five pages, not including a cover page and references).
This document should be engaging and informative, and you can use pictures and graphs to support your ideas. Documents need to be saved as a pdf file and submitted via Canvas before the beginning of class on March 13 (1:40 pm).
• Other than the restrictions on length mentioned here, there are no additional format requirements for either your presentation or your written document. You are free to choose the format or style that best supports your ideas and communicates your message. However, you must cite your sources properly. When referencing in the text, please state author and year in parentheses. Also make sure you include a complete list of references at the end of your presentation or written document. While I do not require a specific citation style, please make sure that you include all information and do not just copy a link. You can check the guidelines and sample references provided by the American Economic Association (AEA) for further guidance. If you include graphics or photos that are not your own, please also make sure to reference the source of this material, including AI-generated images.
• More generally, we do not prohibit the use of GenAI (e.g., ChatGPT and Gemini), but we encourage you to be thoughtful and transparent. When carefully checked for accuracy, these tools can enhance your research and scientific writing, but overreliance on AI has been shown to hinder critical reflection and erode expertise. These tools are trained to find patterns and generate text by predicting the likeliest response based on their training data. They do not understand context or create original ideas, and they make a lot of mistakes. While they can help you brainstorm topics and provide basic editorial feedback, they are not a substitute for reviewing the literature yourself, critically reflecting on your writing, and continuously editing it. For guidance on how to cite your AI use, please see this link.
If you are looking to improve your writing, I also encourage you to take advantage of the services offered by the AATC Writing Support Center.

Grading: You will be able to receive continuous feedback on your ideas and revise your plans throughout the quarter. However, only your final submission will be graded. All group members will receive the same grade for the marketing plan. Please be proactive and reach out if one or more of your group members are not participating. If a member has not participated by week 8 of the quarter (Tuesday, February 25), they can be removed from the group upon request and will have to complete their own project. A 5-point penalty will be applied to individually submitted projects unless an exception or special permission was granted. Please note that the final project assignment created on Canvas includes a rubric for additional guidance regarding grading guidelines.

$25.00 View

[SOLVED] EDUC7053 Essay questions

EDUC7053 Essay questions
1- What impacts do globalization and/or the rise of English as a global language have on ‘local’ education systems? You may address globalization and English as a global language, or choose to focus on one of them. In your answer, you can focus on any of these aspects: policy, curriculum, pedagogies, student experience. You can also choose a particular national context to ground your discussion.
2- How may global education prepare students for the increasing ‘interconnectedness’ among people and nations afforded by globalisation? In your answer, provide examples to support your argument.
3- In what ways are the goals of education being transformed in the current globalized and neoliberal times? In your answer, provide examples to support your argument. You may also focus on a particular local context to ground your answer.
4- What are the consequences of intergovernmental organizations influencing local educational policies and practices? In your answer, provide examples to support your argument.
5- How can definitions of literacy be expanded in order to educate future ‘global citizens’? What ‘literacies’ should be focused on? In your answer, consider ideas such as global competencies and/or global mindedness. You may focus on a particular local context to ground your answer.
6- Why is there the need to engage with the concept of Global Education critically? In your answer, provide examples to support your argument.
7- In what ways might the Global Youth Climate Strikes show evidence of ‘Global Education’ in terms of engagement with the challenges and opportunities afforded by globalisation? In your answer, provide examples to support your argument.
8- You may choose to develop your own essay question. If this is the case, please get your tutor’s approval for your chosen topic. 


[SOLVED] EC-2565 ECONOMICS FOR BUSINESS Academic Year 2024-25Haskell

EC-2565 ECONOMICS FOR BUSINESS Academic Year 2024-25 School of Social Sciences EC-2565 Economics for Business

This booklet contains: . an introduction to the module . details of all learning interactions . details of the core textbooks via the reading list . information on assessment and feedback, including the coursework brief . an overview of the entire module . advice for good academic practice . Canvas guidance

Module Overview. Introduction: This module builds a rigorous understanding of basic microeconomic and macroeconomic principles by combining theory and application to contemporary issues, such that students have a sound basis for progression to understand the context for business actions in the wider economy.

Module Delivery: This module will be delivered in a blended way. Study of pre-recorded lecture videos provided via Canvas, and relevant sections of a textbook, is expected ahead of each two-hour “Lecture” session. Students are advised to plan their week and allow sufficient time to study these materials in a timely manner. The live on-campus elements, consisting of a two-hour “Lecture” and a one-hour “Master Class” session per week, will have a focus on problem solving and active learning. A recording will be posted on Canvas following each session. The Master Class and Lecture for this module are timetabled back-to-back on: Wednesday / 9-12 / Y Twyni 002. A recording will be posted on Canvas within 48 hours following the classes. Please check Canvas announcements and the timetable data displayed on Publish/mytimetable for regular updates.

Communication: All information related to the module will be conveyed to students via Canvas through the Announcements feature, which will also send an e-mail notification to student accounts.

Learning Outcomes: On completion of this module students should be able to: . Explain and critically apply macroeconomic theory . 
Describe the measurement and use of national income, and demonstrate how monetary and fiscal policies may influence national income and employment. . Explain and critically apply microeconomic theory and concepts, derive market demand and supply schedules, and analyse changes in market price. . Explain theories of perfect and imperfect competition, critically appraise the case for free-market economics, and recount the sources and implications of market failure.

Transferable Skills: Oral/discussion skills; Presentation skills; Development of independence and autonomy; Self-confidence; Sustaining an argument; Understanding other points of view; Questioning skills; Ability to relate well to others; Organisational skills; Learning skills; Problem solving skills.

Reading Material: Every effort has been made to provide the books and journals featured in the reading list for this module in digital and hard copy format via the library. For more details of the resources available to support your studies, please consult the Library Services Guide for Economics or watch this short recording by Subject Librarian, Naomi Prady. The full reading list for this module is available via Canvas. The core textbook for the module is: Economics (Fifth Edition). N. Gregory Mankiw and Mark P. Taylor. Cengage. A core textbook is only a starting point and provides introductory and background information only. Supplemental reading will be identified at each lecture. To achieve high marks in this module, students will need to do background and supplemental reading as well as conduct their own independent research, for instance through the reading of academic journals, into the topics identified.

Assessment: The assessment for the module is structured as follows: . 30% individual coursework assignment of a maximum of 1,500 words . 70% in-person exam . 
Resit – in-person examination worth 100% of the overall module mark. Past papers/mock exam/example questions will be worked through in the final seminar session of the module. The exam for this module will comprise three sections. Section A, which accounts for 50% of the overall exam mark, will consist of twenty multiple-choice questions. There will be a choice of one question from two in Section B (which accounts for 25% of all marks on the paper) and a choice of one question from two in Section C (which accounts for the remaining 25% of marks). If you fail this module, you will be required to take an examination during the supplementary assessment period. The resit examination will be weighted as 100% of the overall module mark – the initial assessment weightings do not apply for resits.

Submission in Welsh: Any written work submitted as part of any assessment or examination may be submitted in Welsh, and work submitted in Welsh will be treated no less favourably than written work submitted by you in English as part of an assessment or examination. Further information is available via MyUni. This is available to read in Welsh or English.

Individual Coursework Assignment: The coursework assignment for this module is an individual assignment, contributing 30% of the overall module mark. There are four versions of the assignment, determined by the last digit of your student number: . Student numbers ending in 0–1: Complete Version 1 . Student numbers ending in 2–4: Complete Version 2 . Student numbers ending in 5–7: Complete Version 3 . Student numbers ending in 8–9: Complete Version 4. Ensure you answer the correct version based on your student number, as answering the wrong version will result in a loss of marks. The versions differ in terms of the numerical questions and the support provided. Specifically: . 
For versions with slightly more complex numerical tasks, ChatGPT 4.0 answer suggestions are provided below the questions. .      Important: While these suggestions may be helpful, be cautious when using them. ChatGPT can make errors, especially with the type of questions included in this assignment. Always verify your answers carefully. The assignments include two types of questions: 1.   Figure Completion Questions o  You will be asked to complete diagrams or graphs as part of your answers. Ensure your  responses  are  clear,  well-labeled,  and  adhere  to  the  specific   instructions provided for each figure. 2.   Multiple-Choice Questions (MCs) o  Some questions are multiple choice, where one or more answers may be correct. o  Important: Incorrect answers will result in a mark reduction, so carefully consider your choices before selecting. Avoid guessing if unsure. Answer Submission .      All answers must be written on the official answer sheets attached to this coursework brief. .      Do not submit answers outside of the designated answer sheets, as these will not be marked. .      Marking Criteria:  Each question is weighted proportionally, as indicated in the assignment brief. Pay attention to the allocation of marks to prioritize your effort accordingly. If you have any questions or require clarification, please contact your module coordinator before the submission deadline. Coursework Brief Version 1 (Student numbers ending in 0–1) Question 1 (15 marks): Paul loves sunrise mocktails and allocates a fixed portion of his weekly budget to them.  For  each  100  ml  of  fresh  Crete  orange juice,  he  enjoys  exactly 40  ml  of  real pomegranate grenadine. Adding  more  orange juice  or  grenadine to  his  preferred  mix  does  not increase its utility to him, so orange juice and grenadine are perfect complements. In the diagram below, draw two indifference curves representing Paul’s mocktail preferences: .      
One curve should pass through a point corresponding to one glass of Paul’s preferred sunrise mix. .      The other curve should pass through a point corresponding to two glasses of his preferred sunrise mix. Question 2  (15  marks): Say  that  Paul  has  a  weekly  budget  of  £20  reserved  for  his  mocktail obsession. The prices of the ingredients are as follows: .      500 ml of fresh orange juice costs £10. .     200 ml of real pomegranate grenadine costs £10 as well. Draw Paul’s budget line in the above diagram. Clearly indicate how much orange juice Paul can purchase if he spends his entire budget on it. Similarly,  indicate how much grenadine Paul can purchase if he spends his entire budget on it. Question 3 (10 marks): At the initial prices and his budget, Paul can sip exactly five of his preferred sunrise mixes per week. The price of orange juice now changes. Based on the following scenarios: .      Initial price: £10 per 500 ml of orange juice. .     Alternative prices: £2.5 and £15 per 500 ml of orange juice. Which of the three diagrams best represents Paul’s demand for orange juice, given that he always maintains his preferred sunrise mix ratio of 100 ml orange juice to 40 ml grenadine? Note: Paul may also be willing to consume smaller glasses of his preferred mix, such as 50 ml of orange juice with 20 ml of grenadine, if necessary. Which one of the three diagrams is correct? A – Diagram 1 B – Diagram 2 C – Diagram 3 D – none of the above Question 4 (10 marks): Assume that the price of fresh orange juice remains fixed at £10 per 500 ml, but the price of grenadine changes from £10 to £30 per 200 ml. How many of his preferred sunrise mocktails will Paul be able to consume at this higher grenadine price? 
A – 4 mocktails (400 ml orange juice, 160 ml grenadine) B – 2.5 mocktails (250 ml orange juice, 100 ml grenadine) C – 1 mocktail (100 ml orange juice, 40 ml grenadine) D – none of the previous Question 5 (15 marks): Paul is not the only fan of sunrise mocktails in town. To simplify, assume there are 10 clones of Paul and 10 clones of Anna, each with individual demand functions as shown in the figures below: Task: Using these individual demand functions, draw the total demand curve for fresh orange juice in the diagram below. Question 6 (15 marks): Let us now focus on the supply side of orange juice. The table below provides cost data for the local suppliers of fresh orange juice. Your task is to: .     Derive suitable cost functions (e.g., marginal cost, average cost) from the given data, .     use  these  cost functions to determine the supply function for orange juice  (with quantity measured in litres), and .     add the derived supply function to the market diagram of Question 5 (not forgetting the supply at the lowest price P=0) Costs of orange juice sales Question  7  (10  marks): Using  the  supply  and  demand  curves  derived  from  your  answers  to Questions 5 and 6, analyze the impact of the following events on the demand curve for fresh orange juice. For each event, indicate whether it causes an upward shift of the demand curve. A – an increase in the price of grenadine B – a good orange harvest in Crete C – a reduction in the price of fresh orange juice D – an increase in Anna’s and Paul’s income E – Paul discovers coconut Jambo as his new favourite mocktail F – a construction site blocks the entrance to the local orange juice store Note that multiple answers may be correct. You will earn 10 marks for each correct answer. However, every incorrect answer will result in a deduction of 5 marks, with a minimum score of 0 for this question. Question 8 (10 marks): Consider now the market for grenadine, which is monopolized. 
The diagram below  illustrates  the  monopolist’s  demand  curve  and  marginal  revenue  curve.  Based  on  this information, your task is to: .      Draw a possible marginal cost curve for the monopolist that is consistent with the existing curves in the diagram. .      Ensure that your marginal cost curve aligns with the profit-maximizing condition (MC=MR) and reflects realistic cost behavior.
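As a sanity check on the perfect-complements questions above, the arithmetic can be scripted. A minimal sketch (the helper function is my own, not part of the brief): with a fixed 100:40 ml ratio, the cost of one glass pins down how many glasses the budget buys.

```python
def mocktails_affordable(budget, p_oj_per_500ml, p_gren_per_200ml,
                         oj_ml=100, gren_ml=40):
    """With perfect complements, Paul only buys orange juice and grenadine
    in the fixed 100:40 ratio, so the cost of one glass determines demand."""
    cost_per_glass = (p_oj_per_500ml * oj_ml / 500
                      + p_gren_per_200ml * gren_ml / 200)
    return budget / cost_per_glass

# Initial prices (Question 2): OJ at £10 per 500 ml, grenadine at £10 per 200 ml
print(mocktails_affordable(20, 10, 10))  # 5.0 -- matches "exactly five" mixes
# Grenadine rises to £30 per 200 ml (Question 4)
print(mocktails_affordable(20, 10, 30))  # 2.5
```

The same helper can be reused to trace out the demand curve in Question 3 by varying the orange juice price.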


[SOLVED] Market investment

1. Values for the NASDAQ composite index during the 1500 days preceding March 10, 2006 and December 15, 2008 have been provided to you (Sheet 1). Calculate the 1-day 99% VaR on March 10, 2006 and December 15, 2008, for a $10 million portfolio invested in the NASDAQ composite index using the methods indicated below. a) (10) The basic historical simulation approach; b) (10) The exponential weighting scheme with λ = 0.94; c) (10) Extreme value theory with u = 0.03. 2. Using the data describing a universe of risky assets (found in the attached spreadsheet, Sheet 2), and where there is a risk-free rate of 4.5%, find the optimal (tangent) portfolio (PF) of risky assets in three cases. Assume the variance of the market portfolio is 10. a) (10) Where no short sales are allowed. b) (10) Where short sales in the sense of Lintner are allowed. c) (5) For the PF in a), where the investor wishes only 80% of the systematic risk of the optimal risky PF, how much of the investor's capital should be invested at the risk-free rate?
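For part 1(a), the basic historical-simulation VaR reduces to a loss-quantile computation over the observed daily return scenarios. A sketch of the idea (synthetic data stands in for the Sheet 1 NASDAQ levels; the exponential-weighting and EVT variants build on the same return series):

```python
import numpy as np

def historical_var(index_levels, portfolio_value, confidence=0.99):
    """Basic historical-simulation VaR: apply each observed daily return to
    today's portfolio value and read off the loss at the chosen quantile."""
    levels = np.asarray(index_levels, dtype=float)
    returns = levels[1:] / levels[:-1] - 1.0      # daily simple returns
    losses = -portfolio_value * returns           # scenario losses (positive = loss)
    return np.percentile(losses, 100 * confidence)

# Illustration only: the assignment uses the 1500 NASDAQ levels from Sheet 1
rng = np.random.default_rng(0)
levels = 2000 * np.cumprod(1 + rng.normal(0, 0.012, 1500))
print(f"1-day 99% VaR on a $10m portfolio: ${historical_var(levels, 10_000_000):,.0f}")
```

For part (b), the same loss scenarios would be ranked with exponentially declining weights (λ = 0.94) instead of equal weights before reading off the 99% quantile.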


[SOLVED] DIG103 Interaction Design

ASSESSMENT BRIEF 1. Subject Code and Title: DIG103: Interaction Design. Assessment: Webpage Design Proposal. Individual/Group: Individual. Length: 8-10 page PDF document. Learning Outcomes: C) Apply fundamental visual composition and typographic principles relating to design for digital interactive environments; D) Demonstrate an understanding of UX and UI research, analysis and practice; E) Explain and summarise an interaction design proposal, process and outcome via documentation. Submission: By 11:55pm on Sunday of Week 3. Weighting: 30%. Total Marks: 100 marks.

Context: This assessment aims for students to explore the design of the UI of a webpage, taking into consideration fundamental UX principles. The webpage will then be developed and coded in Assessments 2 and 3. It is intended for graphic, communication, web and creative technology students to explore the UI and UX development process of web content.

Task: You will be provided with a brief from a client who requires a webpage to be developed. Your task will be to investigate different competitors, analyse different strategies and develop a proposal design of what the product will look like. You should take into consideration various UX fundamentals, UI principles and web standards. You will be provided with some assets of the brand to include in your proposal. The goal is for your final webpage design to be thoroughly explored and considerate of the various factors that the brand and product require. The thinking behind your design decisions is important in this assessment.

Steps: First, analyse who the users of the site will be. Consider and explore their values and goals in using the site, as this will inform the type of experience and UI that the site should have. Try to look for current cultural trends and nuanced perspectives here, rather than standardised design ideas. Second, research and analyse existing competitors for insights. 
Consider competitors in the same product line, but also those who hold the same values as the brand, though in a different product line. Third, explore your initial design ideas using wireframes. Wireframes allow you to explore various compositions quickly and efficiently, and to consider the strategy and funnel that a composition can create for users as they navigate the page. These should be created in industry-standard UI software. Fourth, create your final high-fidelity visual design of the home page. It should incorporate various ideas uncovered during your research into the users, competitors and wireframing. Industry practice is to present the finished design of a webpage to the client before coding it. The document submitted should present a clear summary of your findings, thoughts and ideas, and should look and feel professional.

Submission items: Students will create and submit an 8-10 page PDF document containing: 1. 1 x SWOT competitor analysis; 2. 3 x personas of target audience; 3. 1 x wireframe exploration; 4. 1 x high-fidelity visual design.

Submission: Submit your assessment task via the Assessment link in MyLearn. For both online and face-to-face classes, this assessment is due on Sunday of Module 3 (Week 3).


[SOLVED] BEAM065 Bank Management Coursework 1

BEAM065 Bank Management Coursework 1 (40% of the mark for this module). Submission deadline: 17th March. Word limit: 2,000. This assignment consists of two options. YOU NEED TO CHOOSE ONE OPTION ONLY (Option 1 or Option 2).

Option 1 (100 marks) - From Compustat (available through WRDS) or Orbis, download relevant data for at least 50 banks (depository institutions) from any country of your choice. Run at least 6 different regression models to examine how the banks' ROA might be affected by: a) Credit risk. In particular, you could examine: - the impact of non-performing loans - the impact of loan-loss provisions. b) Lending policies. In particular, you could examine: - the impact of lending growth - the impact of the business model (e.g., different categories of loans). c) Size. Usually, the literature considers the natural logarithm of total assets (or market capitalisation, for listed banks) as a proxy. d) Capital structure. In particular, you could examine: - the impact of the Tier 1 ratio - the impact of the equity multiplier. You should consider the relevant literature to justify your choice of proxy for each variable in your methodology discussion. You can use whatever econometric specification you deem appropriate. You can also add further control variables. Then, discuss whether your results are consistent with your expectations (by comparing them with the relevant literature), and what might be driving any unexpected result. IMPORTANT: For your analysis, you MUST use STATA. The reference list, tables (including notes and titles of the tables) and figures (including notes and titles of the figures) do NOT count towards the word limit. You do NOT need an introduction or conclusion in your report, but you can divide your report into three different sections: one to describe briefly your methodology, one for your data (e.g., database used and sample selection), and one for the discussion of the results. 
Suggested structure for the report: 1.1 Methodology 1.2 Data 1.3 Discussion of the results Option 2 (100 marks) - Answer two of the following four questions: a)  What are the potential consequences of interest rates near or below zero in terms of: •   Bank performance •   Bank risk-taking •   Bank lending (50 marks) (max 1,000 words) b) How can we estimate the potential impact of a new regulation on bank shareholders, before it is actually implemented? Provide some examples of recent academic papers that have attempted to do this, and briefly describe their findings. (50 marks) (max 1,000 words) c)  Describe how competition in banking markets might affect: •   Access to bank funding, and related cost, for SMEs (small-medium enterprises) •   Economic growth (especially at the local/regional level) (50 marks) (max 1,000 words) d) Describe how capital requirements might affect: •   Bank performance •   Bank credit risk (50 marks) (max 1,000 words) You may (but do not have to) support your analysis with quantitative examples and statistical analysis. IMPORTANT: The reference list does NOT count towards the word limit. You do NOT need an introduction or conclusion in your essay.
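The brief requires STATA, but the shape of an Option 1 specification can be illustrated with a quick OLS on synthetic data. This is only a sketch: the variable names and the data-generating process are placeholders, and in STATA the equivalent would be along the lines of `reg roa npl loan_growth ln_assets tier1, robust`.

```python
import numpy as np

# Hypothetical bank cross-section: ROA regressed on credit-risk, lending,
# size and capital-structure proxies, mirroring an Option 1 specification.
rng = np.random.default_rng(1)
n = 50                                      # at least 50 banks, as the brief asks
npl         = rng.uniform(0.00, 0.10, n)    # non-performing loans / total loans
loan_growth = rng.uniform(-0.05, 0.20, n)
ln_assets   = rng.uniform(8.0, 14.0, n)     # natural log of total assets
tier1       = rng.uniform(0.08, 0.20, n)
roa = (0.02 - 0.15 * npl + 0.01 * loan_growth + 0.001 * ln_assets
       + rng.normal(0, 0.002, n))           # fabricated data, illustration only

X = np.column_stack([np.ones(n), npl, loan_growth, ln_assets, tier1])
beta, *_ = np.linalg.lstsq(X, roa, rcond=None)
print(dict(zip(["const", "npl", "loan_growth", "ln_assets", "tier1"],
               beta.round(4))))
```

With real Compustat or Orbis data, the same design matrix would simply be built from the downloaded bank-level variables, and robust standard errors reported.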


[SOLVED] Homework Analysis of the Haunted Places Dataset

Homework: Analysis of the Haunted Places Dataset. Due: Friday, March 14, 2025, 12pm PT.

1. Overview. Figure 1: The Haunted Places dataset: mysterious reports of strange sightings in and around the United States, consisting of 21,983 rows x 10 columns. The full dataset can be found at https://www.kaggle.com/datasets/sujaykapadnis/haunted-places. In this assignment we will explore several of the topics discussed in the early portion of class – Big Data, MIME types and their taxonomy, data similarity, and so forth. To do this, we will leverage the dataset highlighted in Figure 1 – a set of 21,983 reported mysterious haunted places. The posts contain several features, which are highlighted below: ● City – the city in which the haunted place resides. ● Country – the country where the place is located (always "United States"). ● Description – a text description of the place. The amount of detail in these descriptions is highly variable. ● Location – a title for the haunted place. ● State – the US state where the place is located. ● State_abbrev – the two-letter abbreviation for the state. ● Longitude – longitude of the place. ● Latitude – latitude of the place. ● City_longitude – longitude of the city center. ● City_latitude – latitude of the city center. The Haunted Places dataset is a rich dataset with high variation in its features and properties. For example, as can be seen from Figure 2, there are 4,386 unique cities and 9,904 unique locations in the dataset. These locations are always somewhere in the United States, including Alaska and Hawaii. Figure 2: Distribution of features in the Haunted Places dataset. One of the other important elements of the Haunted Places dataset is the description of the sighting. There are roads, streets, homes, commercial buildings, and other potentially haunted areas, with high variation. 
The descriptions of the haunted places give the reader some inclination as to the modality of why the place is haunted, such as "If you take a camera…[sic] and take pictures, you'll see orbs everywhere", or "during the constructions of the train trestle, two men were found inexplicably hanging by suspension cords used to lift steel beams. Passersby have noted the sound of snapping necks…", or "This street had so much activity that the neighbors were comparing stories that at night they experienced that beings were laying down on top of them", and so on. These descriptions have rich modalities in the way that they were observed (sound, sight, smell, etc.). Additionally, some describe events that occurred (murder?) whereas others describe supernatural objects inhabiting dark areas ("orbs" near trees). Others describe eyewitness accounts of what happened, and others still describe temporal characteristics that influence the evidence, such as happening during the day, at night, or in the early evening. Some are repeating events, while others occurred once only, maybe twice. And some of the events have multiple witnesses, whereas some are described as a story from a singular witness or an "ancient tale". Geographic area and proximity to the city center are also useful features. Did this sighting occur far out in the woods, away from a city center where people are? Does that affect the witness reports? Or was this an event that happened right in a metropolitan area? How should that affect how you weigh the evidence? Take the cardinality of witnesses, for example. Are single-witness sightings more likely to have a follow-up performed? You could use the number-parser Python tool (https://github.com/scrapinghub/number-parser) to extract the numerical value of the number of witnesses, as an example. Additional insight can be gleaned from exploring the temporal properties of sightings, which are semi-structured and only present in the description. 
However, using a tool such as https://github.com/akoumjian/datefinder you can easily discern whether there is information about when these sightings occurred, provided the information is extractable. Are there particular days of the week with sightings? What happens if you slice down sightings related to geographic area and date?

2. Objective. Exploring the Haunted Places dataset may lead you to ask several questions to build up a profile of evidence related to the event. You can group these questions into the following characteristics:

Haunted Place Modality
- Evidence
  o audio evidence? (as discerned by "noises", "sound of snapping neck", "nursery rhymes")
  o image/video or visual evidence? (as discerned by keywords in the description like "cameras", "take pictures", "names of children written on walls", etc.)
- Is it haunted due to some event? ("When the railroad was built,  happened")
Temporal characteristics
- Can time of day be discerned? Evening, afternoon, morning?
- Can date be discerned? (Year, month, day?)
- If the date/time is not directly present, can you find it by searching the description or location name on Google?
- If not, perhaps set year to 2025, day to 01 and month to 01
Apparition type
- Is it a ghost?
- Is there a ghost description (male/female/child)?
- Is it an orb? Unidentified Flying Object (UFO)? Unidentified Aerial Phenomena (UAP)?
Event type
- Did someone die here? Murder? Natural?
- How many people?
- How many witnesses?
Figure 3: Evidentiary questions you could ask about the Haunted Places dataset, deriving features that aren't initially present.

What ways could you explore this, by looking beyond this rich dataset and by applying lessons learned from class thus far, where we have been studying the 5 V's, MIME types of associated datasets, and large datasets and their characteristics? 
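Several of the evidentiary features above (audio evidence, visual evidence, witness counts) come down to simple text matching. A rough, illustrative sketch follows; the cue lists and the regex are my own placeholders, not a prescribed implementation, and the real extraction would lean on tools like number-parser and datefinder:

```python
import re

# Illustrative cue lists -- tune these against the actual descriptions
AUDIO_CUES  = ["noise", "sound", "nursery rhyme", "snapping neck", "scream"]
VISUAL_CUES = ["camera", "take pictures", "orb", "written on walls", "apparition"]

def featurize(description: str) -> dict:
    """Rough keyword heuristics for the evidentiary questions above."""
    text = description.lower()
    return {
        "audio_evidence":  any(cue in text for cue in AUDIO_CUES),
        "visual_evidence": any(cue in text for cue in VISUAL_CUES),
        # crude witness count: first bare integer directly before "witness"
        "witness_count": int(m.group(1)) if (m := re.search(r"(\d+)\s+witness", text)) else 0,
    }

print(featurize("If you take a camera and take pictures, you'll see orbs everywhere"))
# {'audio_evidence': False, 'visual_evidence': True, 'witness_count': 0}
```

The same pattern extends naturally to the time-of-day, apparition-type, and event-type questions by swapping in different cue lists.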
Could you join, for example, the alcohol abuse by state data from https://drugabusestatistics.org/alcohol-abuse-statistics/ and then determine whether there is a correlation between population-level abuse of alcohol and the presence of these haunted places? What about a dataset that gave you information about visibility at that city/state/location during all times of the year, to determine the quality of the visual evidence and observations taken by the reporters of these haunted places? There are multiple datasets providing evidence of the amount of daylight per state, such as https://www.timeanddate.com/astronomy/usa and https://aa.usno.navy.mil/data/Dur_OneYear. What about police reports? Does the number of police reports in a community have any effect on the connection between the community and murders and other violent crime, in a way that may suggest a correlation with these supernatural events? For example, you could look at violent crime and other police reports by state with this dataset: https://projects.csgjusticecenter.org/tools-for-states-to-address-crime/50-state-crime-data/. And finally, a website, TarotCards.io, recently studied and revealed the most haunted states by Google search volume and published the following statistics: Key Findings: Most Haunted State (Total Search Volume): California takes the top spot with 2,860 average monthly searches related to hauntings, followed by Texas (2,680) and New York (2,150). Least Haunted State (Total Search Volume): Alaska ranks last, with just 570 searches per month. Most Haunted State (Per Capita): Wyoming leads the way, with 98.89 searches per 100,000 residents, followed closely by Vermont (92.62) and North Dakota (81.12). Least Haunted State (Per Capita): California is the least haunted by search interest when adjusted for population size, with only 7.25 searches per 100,000 people. 
The Most Haunted States by Total Search Volume:
1. California – 2,860 searches
2. Texas – 2,680 searches
3. New York – 2,150 searches
4. Florida – 2,080 searches
5. Ohio – 2,070 searches
The Least Haunted States by Total Search Volume:
46. Hawaii – 650 searches
47. North Dakota – 640 searches
48. Vermont – 600 searches
49. Wyoming – 580 searches
50. Alaska – 570 searches
The Most Haunted States Per Capita (Per 100,000 Inhabitants):
1. Wyoming – 98.89 searches
2. Vermont – 92.62 searches
3. North Dakota – 81.12 searches
4. Alaska – 77.71 searches
5. South Dakota – 77.52 searches
The Least Haunted States Per Capita (Per 100,000 Inhabitants):
46. Pennsylvania – 13.59 searches
47. New York – 10.86 searches
48. Florida – 9.05 searches
49. Texas – 8.56 searches
50. California – 7.25 searches
The full dataset can be found here: https://docs.google.com/spreadsheets/d/1-ok5MWfRfGpO2nJkL3zcyDaw-PEEFKGOgiynJ3YiXS0/edit?gid=1333584719#gid=1333584719.

What other features can you think of that would be useful to join to the Haunted Places dataset? You will choose at least three additional publicly accessible datasets along these lines to join to the Haunted Places dataset, and you must add at least three new features per dataset that you join. The datasets you select may not all belong to the same MIME top-level type – that is, you must pick a different MIME top-level type for each of the three datasets you are joining to this Haunted Places dataset. Once the data is joined properly, you will explore the combined dataset using Apache Tika and an associated Python library called Tika-Similarity. Using Tika-Similarity, you can evaluate data similarity (as discussed during the Deduplication lecture in class, and also during data forensics discussions). Tika-Similarity will allow you to explore and test different distance metrics (edit distance; Jaccard similarity; cosine similarity; etc.). 
And it will give you an idea of how to cluster data, and finally it will let you visualize the differences between different clusters in your new combined dataset. So, you can figure out how similar haunted places are, given their locations, descriptions, witnesses, apparition types, geographic proximity to certain towns, and other features, and explore your new augmented dataset. For example, you may ask: how many haunted places were actually ghost sightings that occurred with low light visibility in the fall season, were nearby any airports, and included reports from at least two witnesses? The assignment-specific tasks are specified in the following section.

3. Tasks
1. Download and install Apache Tika
   a. Chapter 2 in your book covers some of the basics of building the code; additionally, see https://tika.apache.org/2.9.1/index.html
   b. Install Tika-Python; you can pip install tika to get started.
      i. Read up on Tika-Python here: http://github.com/chrismattmann/tika-python
2. Download the Haunted Places dataset
   a. We will provide you a Dropbox link in Slack for each validated team
   b. Make a copy of the original dataset (because you are going to modify/add to it in this assignment)
3. Create a combined TSV file for your Haunted Places dataset
   a. Convert the CSV to TSV (here's a simple example of how to do this with Python: https://unix.stackexchange.com/questions/359832/converting-csv-to-tsv)
4. Add and expand the dataset with the following features
   a. Add a new feature called "Audio Evidence". Set it to True if you match text like "noises", "sound of snapping neck", "nursery rhymes"; False otherwise.
   b. Add a new feature called "Image/Video/Visual Evidence". Set it to True if you match text like "cameras", "take pictures", "names of children written on walls", etc.
   c. 
   c. Add a new feature called "Haunted Places Date" and use https://github.com/akoumjian/datefinder on the Description text to pull out dates. If you can't find a date, set it to 2025/01/01.
   d. Add a new feature called "Haunted Places Witness Count" and use Number Parser (https://github.com/scrapinghub/number-parser) to obtain the number of witnesses (if possible to identify in the description). If unable to identify one, set the count to 0.
   e. Add a new feature called "Time of Day", and try to discern "Evening", "Morning", or "Dusk" from the text. If not discernable, set it to "Unknown".
   f. Add a new feature called "Apparition Type" and discern from the description whether it is a "Ghost", "Orb", "UFO", "UAP", or whatever type of apparition the Haunted Place is inhabited by. Perhaps it is a "Male", "Female", "Child", or "Several Ghosts". Parse this from the description and set it to "Unknown" if not discernable.
   g. Add a new feature called "Event Type". Was it a murder? Did someone die here? Was it a supernatural phenomenon? Discern this by parsing the Description text and searching for keywords.
   h. Join the Alcohol Abuse by State dataset, here: https://drugabusestatistics.org/alcohol-abuse-statistics/
   i. Join the amount-of-daylight-by-state dataset, here:
      i. https://www.timeanddate.com/astronomy/usa
      ii. https://aa.usno.navy.mil/data/Dur_OneYear
5. Identify at least three other datasets, each of a different top-level MIME type (they can't all be, e.g., text/*)
   a. Check out places including https://catalog.data.gov/dataset (Data.gov)
   b. For each dataset, develop a Python program to join the data to your new Haunted Places dataset
   c. For each non-text/* dataset, be prepared to describe how you featurized the dataset
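Several of the Task 4 features above (a, b, e, g) reduce to keyword matching over the Description text. A minimal sketch follows; the keyword lists are illustrative assumptions, not a complete vocabulary, and for dates and witness counts the assignment points you to datefinder and number-parser instead:

```python
# Illustrative keyword lists -- extend these for your own run.
KEYWORDS_AUDIO = ("noise", "sound", "nursery rhyme", "scream", "whisper")
KEYWORDS_TIME = {
    "Evening": ("evening", "night", "midnight"),
    "Morning": ("morning", "dawn", "sunrise"),
    "Dusk": ("dusk", "twilight", "sunset"),
}

def audio_evidence(description: str) -> bool:
    """True if the description mentions any audio-related keyword."""
    text = description.lower()
    return any(kw in text for kw in KEYWORDS_AUDIO)

def time_of_day(description: str) -> str:
    """Return the first matching time-of-day label, or "Unknown"."""
    text = description.lower()
    for label, keywords in KEYWORDS_TIME.items():
        if any(kw in text for kw in keywords):
            return label
    return "Unknown"
```

The same pattern extends to "Image/Video/Visual Evidence", "Apparition Type", and "Event Type" by swapping in different keyword lists.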
   d. Each dataset that you join must contribute at least three features (in addition to the features you are adding as described in Task 4)
   e. For each feature you add, be prepared to discuss what types of queries it will allow you to answer and also how you computed the feature
6. Download and install Tika-Similarity
   a. Read the documentation
   b. You can find Tika-Similarity here: http://github.com/chrismattmann/tika-similarity
   c. You will also need to install ETLLib, here: http://github.com/chrismattmann/etllib
   d. Convert the TSV dataset into JSON using ETLLib's tsv2json tool
   e. Compare Jaccard similarity, edit distance, and cosine similarity using Tika-Similarity
   f. Compare and contrast the clusters from Jaccard similarity, cosine distance, and edit similarity – do you see any differences? Why?
   g. How do the resultant clusters highlight the features you extracted? Be prepared to identify this in your report.
7. Package your data up by combining all of your new JSONs with the additional features into a single TSV (tab-separated values) file where the columns represent the features and the rows are the instances of your sightings.
8. (EXTRA CREDIT) Add some new D3.js visualizations to Tika-Similarity
   a. Currently Tika-Similarity only supports Dendrogram, Circle Packing, and combinations of those to view clusters and relative similarities between datasets
   b. Download and install D3.js
      i. Visit http://d3js.org/
      ii. Review Mike Bostock's Visual Gallery Wiki
      iii. https://github.com/mbostock/d3/wiki/Tutorials
      iv. Consider adding:
         1. Feature-related visualizations, e.g., time series, bar charts, plots
         2. Functionality added in a generic way that is not specific to your dataset
         3. See the gallery here: https://github.com/d3/d3/wiki/Gallery
         4. Contributions will be reviewed as pull requests on a first-come, first-served basis (check existing PRs and make sure you aren't duplicating what some other group has done)

4. Assignment Setup

4.1 Group Formation

You can work on this assignment in groups of minimum 2 and maximum 6. You may reuse your existing groups from discussion in class. Please fill out the group details in the form provided after class. Only one form submission per team. If you have any questions, contact your TA/Course Producer via their email address with the subject: DSCI 550: Team Details.

4.2 Haunted Places Dataset

Access to the data is provided by a Dropbox link. The dataset itself is approximately 5.4 MB unzipped. You may want to distribute the data among your teammates, since the data is fairly small (for now).

4.3 Downloading and Installing Apache Tika

The quickest and best way to get Apache Tika up and running on your machine is to grab the tika-app.jar from http://tika.apache.org/download.html. You should obtain a jar file called tika-app-2.9.1.jar. This jar contains all of the necessary dependencies to get up and running with Tika by calling it from your Java program.

Documentation is available on the Apache Tika webpage at http://tika.apache.org/. API documentation can be found at http://tika.apache.org/2.9.1/api. Since you will be using Tika-Python, you will want to read up on the Tika REST API here: https://cwiki.apache.org/confluence/display/TIKA/TikaServer. The Tika-Python library is a robust REST client to the Java-side REST API. You can also get more information about Tika by checking out the book written by Professor Mattmann, called "Tika in Action", available from http://manning.com/mattmann/.

5. Report

Write a short 4-page report describing your observations, i.e., what you noticed about the dataset as you completed the tasks.
What questions did your newly joined datasets allow you to answer about the Haunted Places data, its sightings, and its additional features that were previously unanswerable? What clusters were revealed? Which similarity metrics produced more accurate groupings, in your opinion? Why? What did the additional datasets suggest about "unintended consequences" related to Haunted Places? You should also clearly explain which datasets you joined to the Haunted Places data and how you extracted the new features from each dataset. Thinking more broadly, do you have enough information to answer the following?

1. Are there clusters of Haunted Places with similar features, all of which are murders occurring in the evening?
2. Does the time of day of the original Haunted Place sightings matter?
3. Are specific locations more likely to be influenced by alcohol abuse, causing more Haunted Places to be reported?
4. Are specific keywords bigger indicators of the apparition type related to a Haunted Place?
5. Is there a set of frequently co-occurring features that define a particular Haunted Place?
6. What insights do the "indirect" features you extracted give us about the data?
7. What clusters of Haunted Places made the most sense? Why?

Also include your thoughts about Apache Tika – what was easy about using it? What wasn't?

Note: The report should be written using 11 pt Times New Roman font, single column, with single spacing.

6. Submission Guidelines

This assignment is to be submitted electronically, by 12 pm PT on the specified due date, via Gmail to [email protected] for the Thursday class, or [email protected] for the Tuesday class. Use the subject line: DSCI 550: Mattmann: Spring 2025: BIGDATA Homework: Team XX. So, if your team was team 15 and you had the Thursday class, you would submit an email to [email protected] with the subject "DSCI 550: Mattmann: Spring 2025: BIGDATA Homework: Team 15" (no quotes).
Please note: only one submission per team.

● All source code is expected to be commented, to compile, and to run. You should have at least a few Python scripts that you used to join the three other datasets and to extract the additional features.
● Use relative paths (not absolute paths) when loading your data files so that we can execute your script/notebook files without changing everything.
● If using a notebook environment, use markdown cells to indicate which tasks/questions you are solving.
● Include your updated dataset TSV. We will provide a Dropbox or Google Drive location for you to upload to (you don't need to attach it inside the zip file).
● Also prepare a readme.txt containing any notes you'd like to submit.
● If you used external libraries other than Tika-Python and Tika-Similarity, you should include those library files in your submission, and include in your readme.txt a detailed explanation of how to use these libraries when compiling and executing your program.
● Save your report as a PDF file (TEAM_XX_BIGDATA.pdf) and include it in your submission.
● Compress all of the above into a single zip archive and name it according to the following filename convention: TEAM_XX_DSCI550_HW_BIGDATA.zip. Use only standard zip format. Do not use other formats such as zipx, rar, ace, etc.
● If your homework submission exceeds Gmail's 25 MB limit, upload the zip file to Google Drive and share it with [email protected] (Thursday class) or [email protected] (Tuesday class).

When submitting, please organize your code and data files according to the directory structure shown:

Data/
    dataset1/ (leave it empty for now)
Source Code/
    script1
    notebook
Readme.txt
Requirements.txt

Important Note:

● Make sure that you have attached the file when submitting. Failure to do so will be treated as non-submission.
● Successful submission will be indicated in the assignment's submission history. We advise that you check to verify the timestamp, and download and double-check your zip file for good measure.
● Again, please note: only one submission per team. Designate someone to submit.

6.1 Late Assignment Policy

● -10% if submitted within the first 24 hours
● -15% for each additional 24 hours or part thereof
