Assignment Chef


Assignment catalog

33,401 assignments available

[SOLVED] BEEM062 Main Assignment Part B Brief

BEEM062 Main Assignment Part B Brief

Abstract

Your main assignment (80%) must be handed in by Friday 25th April 2025. It consists of two equally weighted parts: A) a 1,600-word essay; and B) a technical task-based assignment. This document outlines your tasks for Part B, which on its own contributes 40% to your overall module grade.

For Python-based tasks, you MUST solve them using Jupyter Notebooks, with each line of code stored. You will submit your assignment as a set of documents with your notebooks stored separately (with the .ipynb extension so that they can be easily verified). You are welcome to store your own code on your own github repository or elsewhere, but the .ipynb files must be submitted. Since you submit one file, the best way to do this is to submit your Part A document and include a link to a repository (you can use your OneDrive or any other) that contains all your files in one folder. Please do not submit large data files, and as ever, please pay attention to cyber-security good practice (do not submit/share API keys). Note that any code files that are not .ipynb do not need to be submitted (you can use screenshots).

1    Gradient Descent with Python and NumPy    (30 marks)

From a Jupyter notebook in Python, extract two time series of your choice from the CryptoCompare API, calling one x_t and the other y_t. You do not have to use the CryptoCompare API (for example, you can use the Nasdaq Data Link API). Depending on availability, choose your own frequency and time period, making sure that you can analyse both time series by treating them each as a one-dimensional array of the same length, and make sure they are reasonably long (more than 100 observations). Use the numpy library to find OLS estimates of α and β in the following specification, where e_t is an assumed white-noise error:

    y_t = α + β x_t + e_t

a) Analytically, from the standard OLS formulae.
b) By trial and error, i.e. machine learning with a Gradient Descent (GD) algorithm.

One challenge you may encounter is obtaining convergence, depending on the underlying data. In words, describe why convergence to a loss-function minimum may be difficult to obtain in practice, and how this might be overcome.

2    Time Series Forecasting with Regularisation    (35 marks)

Create a Python notebook that produces a time series forecast of a financial time series of your choice. Pick one single financial time series that you are interested in (e.g. the Bitcoin price in Sterling), and conduct your analysis in stages:

1. In the first stage, you must describe why you have chosen the series, and what factors you think affect it. Where relevant, describe any general trends, specific events, and what variables you think may drive it. Distinguish between variables that you should be able to obtain, versus those you cannot.

2. Extract the relevant time series (you could use the CryptoCompare API calls already used, or any other).

3. Conduct analysis. You must reduce a larger model with many variables (more than 10) down to a smaller model by applying L1/L2 regularisation. Scikit-learn has examples of implementing the combination of L1 and L2 called elastic nets. Pay attention to stationarity, and use the scikit-learn implementation that uses cross-validation (CV) to pin down the hyperparameters (weights on L1 and L2). Also, for the purposes of CV it is important to split time series appropriately by using scikit-learn's TimeSeriesSplit procedure.

3    Build an Experiment Platform    (35 marks)

In this section you are expected to apply the skills we developed building a smart contract using the sCrypt platform. However, feel free to use any other approach you have learnt on other modules or elsewhere, as long as you achieve the same objective.
Overall, design and implement a platform that allows a researcher to set up a simple experiment like the Ultimatum Game, playing out the experiment on the blockchain (in the case of sCrypt, it uses the BSV blockchain). Your platform will play out the experiment on-chain, with the logic enforced by the smart contract. Choose your game from ONE of the following options:

1. Ultimatum Game. Player A proposes a split of a pot (e.g. 1000 satoshis). Player B either accepts or rejects it. If accepted, both receive as proposed. If rejected, neither gets anything.

2. Dictator Game. Player A decides the split of the pot unilaterally. Player B has no choice but to accept.

3. Trust Game (2-Step Transfer). Player A decides how much to send to Player B. The amount is tripled. Then Player B decides how much to return to A.

4. Modified Ultimatum Game. Same as Ultimatum, but Player B can only accept/reject if the offer is above a threshold (e.g. >300 satoshis).

5. One-Way Puzzle Incentive. Post a question or riddle on-chain. The first participant who solves it and unlocks the funds using a contract-defined condition (e.g. a correct answer hash) wins.

3.1    Procedure

Overall, proceed as follows:

1. Write up your overall design in words.
2. Write a smart contract that encodes your experiment logic (with sCrypt, you can then compile and test it locally).
3. Deploy the contract (with sCrypt, you can deploy it to the test or main net using small amounts of satoshis, e.g. 1000).
4. Build a simple front end (HTML/CSS/JS) that shows how a participant could interact with the contract.

3.2    Submission

For this task, include screenshots (not the code files) in your submission that show you set up the contract logic, compiled and deployed the contract, and any relevant evidence (e.g. the txids on test or main net, if using sCrypt). You do not have to submit the actual code files; this part can all be based on screenshots.
Note that you must not submit your own private keys used in any of the stages. For the front-end part, again, there is no need to submit the actual code files; you can just submit screenshots to show it.
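Task 1's two estimation routes (analytic OLS and gradient descent) can be sketched as follows. This is a minimal illustration using synthetic data in place of the API series; the sample size, coefficients and learning rate are illustrative choices, not part of the brief.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the two extracted series; in the assignment x_t
# and y_t come from the CryptoCompare (or similar) API, length > 100.
n = 200
x = rng.normal(size=n)
y = 1.5 + 0.8 * x + rng.normal(scale=0.5, size=n)

# a) Analytic OLS estimates from the standard formulae
beta_hat = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
alpha_hat = y.mean() - beta_hat * x.mean()

# b) Gradient descent on the mean-squared-error loss
alpha_gd, beta_gd = 0.0, 0.0
lr = 0.1
for _ in range(2000):
    resid = y - (alpha_gd + beta_gd * x)
    alpha_gd += lr * resid.mean()        # -(1/2) d(MSE)/d(alpha), scaled by lr
    beta_gd += lr * (resid * x).mean()   # -(1/2) d(MSE)/d(beta), scaled by lr

print(alpha_hat, beta_hat)
print(alpha_gd, beta_gd)   # converges to the analytic values
```

If the learning rate is too large for the scale of the data, the updates overshoot and the loss diverges; standardising x_t or shrinking the step size are common remedies, which speaks directly to the convergence discussion the brief asks for.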


[SOLVED] Linear decorrelation models

Linear decorrelation models

Neural Computation 2024-2025. 9th April 2025

Practical info

Organize your answers according to the questions; don't merge them. Plots should include axis labels and units (either on the plot, or mentioned in the text); see my web page link. There will be a to-be-determined normalization factor between the number of points scored and the resulting percentage mark.

You will find that some questions are quite open-ended. In order to receive full marks for those questions your answers need to go beyond running a simulation and making a plot. Instead, you should substantiate your explanations and claims, for instance by doing additional simulations or mathematical analysis. It should not be necessary to consult scientific literature, but if you do use additional literature, cite it.

Copying results is absolutely not allowed and can lead to severe punishment. It's OK to ask for help from your friends. However, this help must not extend to copying code, results, or written text that your friend has written, or that you and your friend have written together. I assess you on the basis of what you are able to do by yourself. It's OK to help a friend. However, this help must not extend to providing your friend with code or written text. If you are found to have done so, a penalty will be assessed against you as well.

The deadline will be announced via email and the website. Upload the report and the code used for question 3 on Moodle. Use any computer language you like.

Model

One common theory about neural processing in the retina is that it reduces correlations in the input. We examine linear transformations that whiten / de-correlate the inputs. Specifically, given N-dimensional input vectors x and N-dimensional outputs, we look for N × N matrices W so that the output y = Wx is uncorrelated. We also impose that the output covariance is normalized so that it has covariance matrix ⟨yy^T⟩ = I_N, where I_N is the N-dimensional identity matrix.
Question 1 (5 points)
On the Moodle page you will find some black-and-white photos of natural scenes (all_images.zip). Sample some 10000 image patches of some 10x10 pixels. Before sampling, normalize each image such that its mean pixel value is zero and pixel variance is one. Plot the covariance matrix and comment on its shape. [In Matlab/Octave you can use imread() to read image files.]

Question 2 (5 points)
Use PCA on the data to find a whitening transformation. Show that the covariance matrix of the output indeed equals the identity matrix. Also plot a few (~10) of the receptive fields, and comment on their shape.

Question 3 (5 points)
Another, perhaps more natural, way to whiten the input is given in the lecture notes (page 74). Implement this and show again that the covariance matrix of the output equals the identity matrix. Compare the L1 norm of the weight matrix (Σ_{i,j} |w_ij|) with the PCA solution. Why is the L1 norm relevant biologically? What other properties of the weight matrix could be important for biology?

Question 4 (5 points, harder)
Are there whitening matrices with even lower L1 norm?

Question 5 (5 points, harder)
Assume a simplified model with just 2 inputs and an input covariance. What are all the matrices that de-correlate this input? Which matrices have minimum L1 norm?
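For Question 2, the PCA whitening step might look like the following sketch, with random correlated data standing in for the image patches (in the assignment, the patch matrix must come from all_images.zip):

```python
import numpy as np

rng = np.random.default_rng(1)

# Random correlated data standing in for the 10000 x 100 patch matrix
# (in the assignment, X holds 10x10 patches sampled from the images).
n, d = 10000, 100
base = rng.normal(size=(n, d))
X = base + 0.5 * base.mean(axis=1, keepdims=True)   # introduce correlations
X -= X.mean(axis=0)                                 # zero-mean each pixel

# PCA whitening: W = Lambda^(-1/2) E^T from the eigendecomposition of
# the input covariance C = <x x^T>.
C = X.T @ X / n
eigval, eigvec = np.linalg.eigh(C)
W = np.diag(eigval ** -0.5) @ eigvec.T

Y = X @ W.T                                         # y = W x for every patch
C_out = Y.T @ Y / n
print(np.allclose(C_out, np.eye(d), atol=1e-6))     # -> True
```

The rows of W are the receptive fields the question asks you to plot (reshape each row back to 10x10).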


[SOLVED] Stat 4914 Project 2 Applied Sports Statistics

Stat 4914 Project 2

Project 2: Supervised and Unsupervised Learning

The main focus of this project will be supervised and unsupervised learning. Using the ncaam2025 data files on Carmen, in addition to the bracket you submitted on March 20, reevaluate your picks to determine which picks seemed good and bad in hindsight.

Limit your report to 15 pages, including reproducible code. Hide unnecessary code output (library loading, ggplot code, etc.) but include relevant exploratory and modeling code (collinearity screening, model building, assumption checking, etc.). Caption figures and tables, cite all sources, and proofread your submission.

a) Data Selection

Use a subset of your choosing from the files provided on Carmen in the ncaam2025 folder:

list.files("Datasets\ncaam2025")
##  [1] "coaches.csv"            "conf_team_mapping.csv"  "kenpom_defense.csv"
##  [4] "kenpom_efficiency.csv"  "kenpom_height.csv"      "kenpom_misc.csv"
##  [7] "kenpom_offense.csv"     "kenpom_pointdist.csv"   "kenpom_summary.csv"
## [10] "mm2002_2025.csv"        "postseason.csv"

Feel free to find and use additional, high-fidelity data of your choosing; please provide a reference.

b) Exploratory Analysis

Conduct an exploratory analysis of your data, including preliminary variable screening and exploratory plots. Your exploration should be thorough and provide high-level visualizations and interpretation in context. Consider several aspects of a successful college basketball team: offense, defense, experience, coaching, level of competition, etc.

c) Unsupervised Learning

Unsupervised learning can serve two purposes in this data set: first, to screen for important variables, and second, as a proxy to determine which teams are more similar (and hence, more plausible to be successful in the tournament).
Implement at least two unsupervised clustering methods: K-means, DBSCAN, t-SNE, hierarchical clustering, principal components analysis, or other methods of your choosing. Note two is the minimum. Create appropriate visualizations of your unsupervised clusters.

d) Supervised Learning

Supervised learning can serve to predict teams' success. Because you don't have the full win-loss data for the season, you must use another variable to measure success, such as total number of wins, tournament seed, or another metric.

Implement at least three supervised learning methods. Although methods like multiple regression, GLMs, and linear mixed models can be considered supervised learning, we considered those on Project 1, so while you can feel free to implement them here for comparison, please consider the following methods: nonlinear regression, splines regression, principal components regression, LASSO/Ridge, random forest, classification/regression trees, XGBoost, Naive Bayes, K-nearest neighbors, or other methods of your choosing. Note three is the minimum.

e) Analysis and Findings

Provide a professional write-up of your findings, describing both the statistical and practical findings of your analysis.
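As a sketch of one qualifying method, here is plain K-means implemented from scratch on hypothetical team features. The course materials use R, so treat this Python version only as an illustration of the algorithm; the feature values, cluster count and seeding scheme are all made up for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 2-D team features (e.g. offensive and defensive efficiency);
# the real values would come from the kenpom_*.csv files on Carmen.
n, k = 300, 3
centers = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
X = np.concatenate([c + rng.normal(size=(n // k, 2)) for c in centers])

# Plain K-means: alternate assignment and centroid-update steps.
cent = X[[0, n // k, 2 * (n // k)]]        # one seed point per simulated group
for _ in range(50):
    dist = ((X[:, None, :] - cent[None, :, :]) ** 2).sum(axis=-1)
    labels = dist.argmin(axis=1)
    cent = np.array([X[labels == j].mean(axis=0) for j in range(k)])

print(np.bincount(labels))   # cluster sizes
```

For the project itself you would run this (or a library implementation such as K-means or DBSCAN in your chosen language) on standardized KenPom features and then visualize the resulting clusters.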


[SOLVED] Bridge 3 -RAB 1st Draft

Bridge 3 - RAB 1st Draft

Introduction

After completing an Evaluating Sources worksheet, you are ready to flesh out your evaluations and give some serious consideration to whether or not these sources will help you complete your Zine, and, if so, how. To do this, you will complete two drafts of a Reflective Annotated Bibliography, or RAB.

Instructions

Familiarize yourself with the explanation of Reflective Annotated Bibliographies (https://canvas.newschool.edu/courses/1825229/files/145524002?wrap=1). Each one of the examples given in this explanation represents ONE bibliography entry; you will need FOUR (or more) total. This means four entries that contain the following:

- Summary: Summarize the source. If someone asked you what this source was about, what would you say? What is the purpose of this source? What topics are covered? You need to describe the source accurately and concisely in your own words.

- Assess/Analyze: After summarizing a source, you need to evaluate it. What makes this source unique? How does it compare with other sources in your bibliography? Is the information reliable? Is this source biased or objective? What is the goal of this source, and how does it achieve its purpose?

- Reflect: Once you've summarized and analyzed a source, you need to ask yourself how it fits into your plans for Bridge 4. Will this source be helpful to you? How will this source influence/impact your Studio work? Does this source change how you think about your topic?

You will repeat this process THREE more times, once for each of FOUR sources in your Reflective Annotated Bibliography. This means that one of your sources from Evaluating Sources will not make it to the RAB, but that doesn't mean you can't use that source for your Zine.

Each RAB entry should be headed by a complete, Chicago-style citation, but this is not required until the 2nd draft. Each RAB entry should be 2-3 paragraphs long.
Submission

You will submit your draft in two places:

1) Here, on Canvas, as a PDF
2) In our shared Google Drive folder, as either a Google Doc or Word doc


[SOLVED] Stat 4914 Project 1 Applied Sports Statistics

Stat 4914 Project 1

Project 1: Explaining an Outcome with Regression

The main focus of this project will be regression analysis with hypothesis testing, with an emphasis on verifying assumptions, creating effective visualizations, and reporting in an accessible, professional format. Choose a dependent outcome variable over a season, such as wins, yards, or points scored, that you believe can be explained by independent variables such as payroll, height, division, or position. Use regression to quantify and explain the relationship between the independent variables and the dependent variable of interest. Your regression can be multiple linear regression or a generalized linear model such as logistic or Poisson regression.

Limit your report to 15 pages, including reproducible code. Hide unnecessary code output (library loading, ggplot code, etc.) but include relevant exploratory and modeling code (collinearity screening, model building, assumption checking, etc.). Caption figures and tables, cite all sources, and proofread your submission.

a) Data Selection

Choose a data set that meets the following characteristics:

• At least 50 observations
• At least one categorical predictor with three or more levels
• At least two continuous predictors
• A continuous, binary, or count dependent variable
• Sports-related; this has a wide interpretation and can include professional sports leagues (NFL, UEFA, etc.), games and pastimes (chess, go, etc.), data sets related to physical activity (motion sensors, kinesiology, etc.), or sociological topics (stadium design, viewership, marketing, fan perceptions, etc.)
• Free-use from a reliable, rigorous source

Lots of options can be found at https://sportsandsociety.osu.edu/sports-data-sets, https://vincentarelbundock.github.io/Rdatasets/articles/data.html, https://archive.ics.uci.edu/datasets, and https://www.kaggle.com/search.
Provide an overview of your data set, including a description of context and the relevant sports-related information necessary to understand your analysis. Assume your audience is unfamiliar with the main topic of your report.

b) Exploratory Analysis

Conduct an exploratory analysis of your data, including preliminary variable screening and exploratory plots. Include at least three exploratory plots, one of which has at least three variables represented. Plots should be professional, presentation-quality.

c) Model Building

Build at least two distinct regression models to explain the relationships between your dependent variable and independent variables.

d) Model Evaluation

Assess how well your models meet assumptions. Compare your models and choose the one that explains your data the best.

e) Analysis and Findings

Provide a professional write-up of your findings, describing both the statistical and practical findings of your analysis.
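As an illustration of building and comparing two regression models (parts c and d), here is a sketch on simulated season data. The variables (payroll, height, division) and effect sizes are invented for the example, and adjusted R² is only one of several reasonable comparison criteria:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated season data (all invented): wins explained by payroll and
# height (continuous) plus division (categorical with three levels).
n = 60
payroll = rng.normal(100, 20, n)
height = rng.normal(190, 5, n)
division = rng.integers(0, 3, n)
wins = 20 + 0.2 * payroll + 2.0 * (division == 2) + rng.normal(0, 2, n)

def fit_ols(X, y):
    """Least-squares fit with intercept; returns coefficients and adjusted R^2."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    r2 = 1.0 - resid.var() / y.var()
    p = X1.shape[1]
    adj = 1.0 - (1.0 - r2) * (len(y) - 1) / (len(y) - p)
    return beta, adj

# Model 1: payroll only.  Model 2: adds height and division dummies
# (first level dropped as the baseline).
dummies = np.eye(3)[division][:, 1:]
_, adj_small = fit_ols(payroll[:, None], wins)
_, adj_full = fit_ols(np.column_stack([payroll, height, dummies]), wins)
print(round(adj_small, 3), round(adj_full, 3))
```

Model checks (residual plots, collinearity screening, assumption verification) still belong in the report alongside any numeric comparison like this one.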


[SOLVED] DSCI 550 Building Visual Apps to Your Multimodal Haunted Places Data using Data Science Creating

Homework: Building Visual Apps to Your Multimodal Haunted Places Data using Data Science: Creating Data Insights

Due: Friday, May 2nd, 2025, 12pm PT

1. Overview

Figure 1: Examples of Haunted Places sightings and locations from MEMEX GeoParser.

In the third assignment, you will create an interactive set of visualizations that show off your Haunted Places data analysis and the work you've done through the first two assignments, using the Data Driven Documents (D3) framework. This may include maps of Haunted Places compared to hours of daylight from assignment 1. It may include similarities of various objects identified in images, and other features you generated. It may include information extracted and generated from image captions, and/or geolocations extracted in assignment 2.

In addition, you will deploy the MEMEX Image Space open-source application to explore your generated Haunted Places images and find similarities between them, and you will deploy the MEMEX GeoParser application to explore the locations present in your original data and your newly generated Haunted Places data features. You and your team will take these visualizations and apps and create a comprehensive "mini site" to demonstrate, as an example, the great work you did in exploring and investigating how to analyze social media data using data science.

2. Objective

The objective of this assignment is to persist and showcase the data science work you did exploring the Haunted Places sightings data. You will explicitly use the Data Driven Documents framework (D3) and its set of gallery visualizations to explore and interact with your data. We have built several template web sites in the past in the IRDS group; for example, see the one from 2018 for UFO research at http://irds.usc.edu/ufo.usc.edu and at GitHub at http://github.com/USCDataScience/ufo.usc.edu, and also similarly the polar.usc.edu one at http://github.com/USCDataScience/polar.usc.edu.
You can explore the website and styles there. Your job is to use this as a reference and add your work specifically under the Explore Visualizations tab and under the Gallery section of the website, by team name. You will create a snapshot image of your team's work (one that best represents your data and hard work, e.g. like http://polar.usc.edu/images/team28.png), and then use this to link to your actual website with your D3 visualizations. You should make the visualizations connected, e.g. such as the landing page here: http://polar.usc.edu/html/team28mime/index.html.

You may need to summarize your TSV data from assignment 2 and/or assignment 1 to aggregate it so it displays well in your visualizations, or to prepare the data for interaction. In doing this you must choose to ingest a subset of your TSV data into Apache Solr or ElasticSearch, and then connect your D3 to those services. You may submit us a JSON dump as part of your assignment that we can load into it after the assignment is over and when you turn your assignment in, so that your visualizations may live on.

Additionally, in continuing with our content-extraction-from-multimedia theme, you will also explore and install the ImageSpace open-source application built on the MEMEX program (http://github.com/nasa-jpl-memex/image_space). There is an integrated Wiki instruction page here (https://github.com/nasa-jpl-memex/image_space/wiki/Quick-Start-Guide-with-ImageCat). ImageSpace is an investigative forensic tool allowing you to search and compare images based on similarity using a variety of algorithm plugins, including the Social Media Query Toolkit (SMQTK), http://github.com/kitware/SMQTK, and the Fast Library for Approximate Nearest Neighbors (FLANN), https://www.cs.ubc.ca/research/flann/.
The application includes a backend called ImageCat, an ETL/ingest application that can ingest tens of millions of images, extract their EXIF metadata, and perform OCR on them using Tesseract and Apache Tika. The ETL/ingest is performed into an Apache Solr index. The resultant index is used by ImageSpace.

Additionally, you will deploy the MEMEX GeoParser visual application (https://github.com/nasa-jpl-memex/GeoParser) to explore the location information in your data. GeoParser is a full-stack web application that takes in documents, or data, analyzes all the mentions of locations in those documents, and then visualizes them on a map.

The assignment-specific tasks are specified in the following section.

3. Tasks

1. Take your TSV dataset and convert a subset of the data to JSON to use in D3.
   a. You may need to write scripts to summarize your data for D3. As a start, consider using ETLlib (http://github.com/chrismattmann/etllib) and its tsvtojson tool.

2. Pick 5 visualization types from https://github.com/d3/d3/wiki/Gallery and create the associated Data Insights web pages and associated JSON data to display them, showing off your dataset (see Task 1). Consider similarity; consider using the questions from Assignment 1 and Assignment 2 that you answered in your reports and how the D3 visualizations will help you answer them.
   a. Develop scripts for summarizing and preparing your TSV datasets for D3 JSON conversion.
   b. The scripts you write are part of your delivery for the assignment. Please provide documentation for each script that you create to visualize the data using D3. Make sure that your scripts are portable and there is a simple set of instructions on how to run them. Any libraries that the scripts depend on should be clearly indicated.

3. Ingest your Haunted Places data from the TSV/JSON you created in Tasks 1 and 2 into Apache Solr (http://lucene.apache.org/solr/) and/or ElasticSearch (http://elastic.co). Both have adequate documentation and are easily installed. Use the Docker installs for each of these; don't build them from scratch.

4. Install Image Space via https://github.com/nasa-jpl-memex/image_space/wiki/Quick-Start-Guide-with-ImageCat.
   a. Ingest a subset or all of your Haunted Places images / data into Image Space using the provided instructions and scripts, or the ones you write on your own using Tika-Python.
   b. Browse and find similar images using the ImageSpace search index and the image forensics and similarity tools (SMQTK).
   c. Submit your Solr or ElasticSearch index by tarring it up and gzipping it. Also include your ImageCat indices.

5. Install MEMEX GeoParser and run it against a subset of your TSV data and location data from assignments 1 and 2. You can use this guide here as a starting point.

6. (EXTRA CREDIT) Submit a pull request to improve GeoParser and/or Image Space. Improvements to the software will be considered for extra credit.

4. Assignment Setup

4.1 Group Formation

You should keep the same group from assignment one. There is no need to send any emails for this step.

5. Report

Write a short 4-page report describing your observations. I am interested in answers to the below questions:

1. Why did you select your 5 D3 visualizations?
   a. How are they answering and showing off your features from assignments 1 and 2 and the work you did?

2. Did Image Space allow you to find any similarity between the generated Haunted Places images that previously was not easily discernible?

3. What type of location data showed up in your data? Any correlations not previously seen, e.g. from assignment 1?
Also include your thoughts about Image Space and ImageCat: what was easy about using them? What wasn't?

6. Submission Guidelines

This assignment is to be submitted electronically, by 12pm PT on the specified due date, via Gmail to [email protected] for the Thursday class, or [email protected] for the Tuesday class. Use the subject line: DSCI 550: Mattmann: Spring 2025: DATAVIS Homework: Team XX. So, if your team was team 15 and you had the Thursday class, you would submit an email to [email protected] with the subject "DSCI 550: Mattmann: Spring 2025: DATAVIS Homework: Team 15" (no quotes). Please note only one submission per team.

● All source code is expected to be commented, to compile, and to run. You should have at least a few Python scripts that you used to convert your TSV v2 data to JSON, and also likely scripts to perform ingestion into ImageCat and/or your own Solr or ElasticSearch.
● Use relative paths (not absolute paths) when loading your data files so that we can execute your script/notebook files without changing everything.
● If using a notebook environment, use markdown cells to indicate which tasks/questions you are solving.
● Include your updated dataset TSV. We will provide a Dropbox or Google Drive location for you to upload to (you don't need to attach it inside the zip file).
● Include your updated indices as specified in Task 5. We will provide a Dropbox or some other location for you to upload to.
● Also prepare a readme.txt containing any notes you'd like to submit.
● If you used external libraries other than Tika-Python, you should include those jar files in your submission, and include in your readme.txt a detailed explanation of how to use these libraries when compiling and executing your program.
● Save your report as a PDF file (TEAM_XX_DATAVIS.pdf) and include it in your submission.
● Compress all of the above into a single zip archive and name it according to the following filename convention: TEAM_XX_DSCI550_HW_DATAVIS.zip. Use only standard zip format. Do not use other formats such as zipx, rar, ace, etc.
● If your homework submission exceeds Gmail's 25MB limit, upload the zip file to Google Drive and share it with [email protected] (Thursday class) or [email protected] (Tuesday class).

When submitting, please organize your code and data files in the directory structure shown:

Data
Source Code
  scripts/notebooks
Readme.txt
Requirements.txt

Important Note:
● Make sure that you have attached the file when submitting. Failure to do so will be treated as non-submission.
● Successful submission will be indicated in the assignment's submission history. We advise that you check to verify the timestamp, and download and double-check your zip file for good measure.
● Again, please note, only one submission per team. Designate someone to submit.

6.1 Late Assignment Policy
● -10% if submitted within the first 24 hours
● -15% for each additional 24 hours or part thereof
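The TSV-to-JSON preparation in Task 1 can be sketched with the Python standard library. The column names and output shape below are hypothetical and should be adapted to whatever your chosen D3 gallery example expects:

```python
import csv
import io
import json

# Hypothetical two-column summary of the Haunted Places TSV; the real
# input is your assignment-2 TSV file (column names are made up here).
tsv = "state\tsightings\nCA\t120\nTX\t95\nOH\t40\n"

# Convert TSV rows into the {"name": ..., "value": ...} records that many
# D3 gallery examples (bar charts, treemaps) expect.
rows = csv.DictReader(io.StringIO(tsv), delimiter="\t")
records = [{"name": r["state"], "value": int(r["sightings"])} for r in rows]

with open("sightings.json", "w") as f:
    json.dump(records, f, indent=2)

print(records[0])   # {'name': 'CA', 'value': 120}
```

ETLlib's tsvtojson tool plays a similar role; either way, keep the conversion script, and a line of documentation for it, in your submission.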


[SOLVED] BFF3121/BFB3121 Investments and Portfolio Management Semester 1 2025

BFF3121/BFB3121 – Investments and Portfolio Management
Semester 1, 2025

Instructions for Portfolio Investment Assignment

Note the following important points before proceeding to the document: This document has three sections outlining the assignment requirements. Read each section carefully multiple times to understand the requirements. Discuss and divide the work among your team.

Assessment Type: Group
Weight: 20%
Marks: out of 125. Marks for each analysis and/or calculation are mentioned individually in each section of this document.
Page/Word Limit: For the written report, maximum 5 pages. Check word limits for specific questions.
Font: Times New Roman; Font size: 12; Line space: 1
Late Submission Penalty: The initial penalty is 5% if assignments are delayed by up to 72 hours, then a 5% penalty per day (including weekends and holidays) of the total marks earned. Students get zero marks if the submission is delayed by 7 days (100% penalty).
Submission Guideline: Check out the submission guidelines (page 6) and follow accordingly.

• Follow the instructions and requirements as they appear in this document in conjunction with the Excel Assignment Template.
• All the calculations and analyses MUST be done via the Excel Assignment Template which is available in Moodle. This Excel spreadsheet must be submitted with the written report.
• All the return calculations should be presented as decimals, not percentages, and approximated to six decimal points.
• All Excel work will be checked to ensure correctness of the procedure and to validate your portfolio returns and performance. If your calculation goes wrong, your findings/interpretation in the report will also go wrong, for which you will lose marks.
• You must supply your personal Excel file for the Portfolio Values Record.
• Follow the recommended structure for your written report. Don't forget to sign and attach the team agreement inside your report.
• The teaching team will not provide feedback on your assignments or calculations prior to submission. They can only give you general guidance about your assignment criteria.

Section A: Some Preliminary Workouts    (20 Marks)

See instructions and requirements inside the Excel Template; start with the "Getting Started" tab. To start your calculations and analysis, go to the "Analysis" tab. Some preliminary work will be required. Insert your weekly portfolio values, ASX200 and S&P500 values in the relevant columns. You need to collect ASX200 and S&P500 values by yourselves (5 Marks). You may use either Yahoo Finance or Google Finance for capturing the necessary data. Collecting wrong data will result in losing marks. Collect the Adjusted Closing Value for the end of each week for both ASX200 and S&P500.

The "Calculations" tab will require you to show calculations on various aspects related to your portfolio. Follow the requirements as presented in the tab.

The "Portfolio Values" tab will require you to collect your trading history and portfolio values in AUD from your own Excel file and then copy-paste them into the respective columns in the Excel Assignment Template (2 Marks). Make sure you are converting USD values into AUD for reporting all your weekly portfolio values. You were given $200,000 AUD to trade with, so converting all your P/L and portfolio values to AUD will make more sense. Rather than converting each position and its individual USD values, you may simply convert the Closing Portfolio Value for each week into AUD. Just add the necessary rows/columns and exchange-rate details in your file to show the calculations. There are numerous sources for historical foreign-exchange values: Yahoo Finance, Google Finance, or even IG itself. Check out this LINK (use the adjusted closing).
The “IG Trading History (CSV)” tab – DOWNLOAD the CSV from the IG website using the “Trade Analytics/History” option and copy-paste the CSV file into this tab (3 Marks). Present the table neatly and format it with proper headings, borders, fonts, etc. as required. Supplying no CSV, or an incorrect one, will lose you marks.

The “Extra Calculations” tab – If you have any additional calculations that you think play a significant role in your analysis, show them here. Anything beyond the required analysis is welcome, as it may reflect out-of-the-box thinking.

Organising the Excel Files: At the top corner of the “Getting Started” tab, provide the following information: (1) your group number, (2) a list of your group members with student names and IDs, and (3) your IG Trade Account details. Use the necessary Excel functions/formulas in your calculations and analyses.

Your personal Excel file carries 10 Marks, which comprises meeting each weekly trade requirement and instruction (i.e. IG trade alerts), correct organisation of data, currency conversions, portfolio values, and % of equity value. Your personal Excel file for the Portfolio Values Record must be submitted along with the report and the Excel Template. If any of the required Excel files is not submitted, you will get zero marks for the assignment. Your written report relies on the Excel calculations and outcomes.

Section B: Excel Assignment Template Calculations (40 Marks)

All the required formulas and calculations must be done in the Excel Assignment Template. The calculation requirements are outlined in the “Analysis” and “Calculations” tabs. All return calculations should be presented as decimals, not percentages, rounded to six decimal places. You should not change the format of the template provided, but you may add additional columns/rows beyond the given template if needed.
NO FORMULA, NO MARKS.

Benchmark Portfolio: Your portfolio must be benchmarked against the ASX200 Index (Australia) and the S&P500 (USA).

Risk and Return Calculation: From the preliminary work you have the weekly and monthly values and returns of your portfolio, the ASX200 and S&P500 indices, and the risk-free asset. Using these data, calculate the arithmetic average, variance and standard deviation of the monthly returns of the portfolio, ASX200 and S&P500, and report them in the template. (3 Marks) [Hint: use the monthly arithmetic return of your portfolio for further calculations where necessary.]

Performance Evaluation: Analysis and Calculations Tabs. Recall that during your week 4–8 tutorials you learned how to calculate risk and return; how to run, interpret and analyse regressions; how to apply the Index and CAPM models; and how to use portfolio performance tools in Excel. Apply the relevant model(s) to run, analyse and interpret the regression output, relevant ratios and variables of your dataset. Follow the instructions in the Excel Template. Run both regressions in the “Analysis” tab. (5 Marks)

Carefully read the following requirements to calculate and evaluate your portfolio’s performance with respect to the ASX200 and S&P500: (9 x 3 Marks = 27 Marks)

(i)    Calculate and interpret the Sharpe measure for your portfolio, the ASX200 and the S&P500.
(ii)   Calculate and interpret the M2 measure for your portfolio.
(iii)  Report and interpret the beta of your portfolio, with reference to its statistical significance.
(iv)   Calculate the correlation coefficient between your portfolio and each market (ASX200 and S&P500), and the proportion of the variability of portfolio returns explained by market movements.
(v)    Calculate and show that total risk is the sum of systematic and unsystematic risk.
Also show that the variance of your portfolio is equal to the total risk of your portfolio (i.e. they closely match).
(vi)   Calculate the following ratios (with respect to the ASX200 and S&P500):
       a.  Treynor measure
       b.  Jensen’s alpha
       c.  Information ratio
(vii)  Calculate the expected return using the CAPM.
(viii) Calculate your utility if you have a risk-aversion score of A = 3.
(ix)   Plot your portfolio’s return against the SML (ASX200 and S&P500). You may expand the box if necessary.

Present all your calculations and tables neatly, with the necessary formatting. (5 Marks)

Section C: Writing the Report (65 Marks)

Written Report Structure: This is a standard report based on your trading history and analysis. When preparing the report, refer to the Excel Template spreadsheet whenever necessary; it is important that you reproduce and interpret the Excel results and any other necessary details in the report. Submit the written report as a PDF file.

The report should start with a cover/title page (giving the group number and the student names and IDs), followed by the team agreement on the next page, then a table of contents and an executive summary. A structured flow of the report is important. Report structure and team agreement (3 Marks).

The main report should not exceed 5 pages (excluding the cover page, team agreement, table of contents and executive summary). Anything beyond 5 pages will be disregarded and not graded. You may add a maximum of 3 pages of appendices, which you may refer to in any relevant discussion; these appendices do not carry marks. Use footnotes/citations etc. where necessary.

Executive Summary: (5 Marks) (Maximum 1 page) Reflect on your learning in the executive summary.
It should cover your trading experience, how that experience bridges theory into practice, and your overall exposure to and understanding of investment and portfolio management. You can reflect on how you faced real market turmoil and uncertainty, your investment strategies, any alternative strategies, and risk management, and on how, as a beginner trader/analyst, this experience may shape your future investment and portfolio management goals.

Portfolio Performance: (each question 50–100 words)

Measuring Portfolio Performance (6 x 2 = 12 Marks): Using the relevant Excel calculations, critically evaluate the performance of your portfolio and, where necessary, compare it with the ASX200 and S&P500.

(i)    Did your portfolio underperform or overperform based on M2? What could you have done to improve the overall performance?
(ii)   Comparing your portfolio’s utility score for the ASX200 vs the S&P500, which one offers better value?
(iii)  Based on your correlation coefficient and beta value, how do you interpret your portfolio’s risk exposure compared to the market? Is your finding statistically significant?
(iv)   Interpret the Treynor and information ratio values for your portfolio.
(v)    Based on your CAPM calculation, is your portfolio’s actual return above or below that of the ASX200 and S&P500?
(vi)   Is your portfolio generating any alpha? Interpret any positive/negative alpha of your portfolio.

Investment Strategy: (each question 150–250 words)

Risk Management & Portfolio Rebalancing (5 x 4 = 20 Marks):

(i)   During the market instability, how did you manage your portfolio exposure? Did you use any traditional risk-hedging instruments (such as gold, bonds, or volatility indices) during the recent meltdown? If yes, how effective were those strategies?
(ii)  During the market crisis, did diversification across asset classes, geographies, or sectors minimise or mitigate your portfolio losses?
Or did systemic risks override your diversification benefits?
(iii) In light of your own trading experience, how should investors reassess their risk tolerance and asset allocation in the aftermath of the downturn?
(iv)  Is “buying the dip” still a viable strategy in a high-volatility, uncertain macroeconomic environment? Share and justify your opinion. For your portfolio, which investment strategies performed best during the crisis, and why?

Opportunities & Forward-Looking: (5 x 3 = 15 Marks)

(i)   In your opinion, which sectors or asset classes are likely to offer the best rebound potential or defensive value post-meltdown? Justify your opinion.
(ii)  How can investors balance short-term volatility against long-term fundamentals in their recovery strategy? How should you adjust your portfolio to reduce future risk?
(iii) As an active investor, can you identify undervalued assets during market downturns caused by trade disputes?

Conclusion (5 Marks): (100–200 words) This section concludes your report. Discuss whether your portfolio strategy, diversification efforts and risk-reduction strategies (especially during the meltdown) worked or not; in other words, wrap up your findings, learnings and overall key takeaways from this assignment.

Presentation and Referencing (3 + 2 = 5 Marks): Overall report presentation; flow of the report/responses; professional style; use of relevant graphs, tables and charts (where applicable); references to appendices. Refer to Learn HQ for relevant resources. For referencing, any standard style is fine; cite any references used consistently in that style. Note that you must submit the CSV trading history and your personal Excel portfolio values in the relevant tabs of the Excel Template, not in your appendix. Recall that the executive summary, reference list and appendices are excluded from the five-page limit.
Submission Guidelines

Use the submission link on the Moodle site to submit your report and other documents. Only one submission per group. Before hitting the submit button, make sure you have attached the following files:
1.  The written report, submitted as a PDF file. Don’t forget to include the team agreement within it. (Named: BFF3121/BFB3121-Portfolio Assignment Report-Group Number)
2.  The Excel Assignment Template file containing all calculations and analyses. You must use the template provided in Moodle. (Named: BFF3121/BFB3121-Portfolio Assignment Excel-Group Number)
3.  Your own Excel file for the Portfolio Values Record. (Named: BFF3121/BFB3121-Portfolio Values Record-Group Number)


[SOLVED] ACCT 222 Management Accounting Semester One 2025

ACCT 222 - Management Accounting, Semester One 2025

Whakamahuki | Course Description

“Managers use cost and management accounting information to help them make different types of decisions. These include developing organisational strategies, creating operating plans, and monitoring and motivating organisational performance. Higher-quality decisions are achieved by using higher-quality relevant information and decision-making practices.” Eldenburg et al. (2025, p. 2)

“Management accounting is the process of gathering, summarising and reporting financial and non-financial information used internally by managers to make decisions.” Eldenburg et al. (2025, p. 4)

“Ultimately, the challenge for management accountants and the management accounting function is to ensure that the organisational decision-making needs are appropriately matched with the available management techniques, tools and practices.” Eldenburg et al. (2025, p. 7)

These quotes highlight the vital role of the management accountant in providing information for managers. ACCT 222 examines techniques and practices that management accountants can use to provide relevant information that helps managers make decisions.

ACCT 222: Management Accounting is both content and skills driven. The objectives of the course are:
•    To provide students with an understanding of management accounting theory and practice, including: costing of products and services; using management accounting information for decision-making; planning and budgeting; management control; revenue management and pricing; and performance measurement and evaluation.
•    To build students’ generic skills, including: oral and written communication; critical and conceptual thinking; problem-solving; and analytical skills.

ACCT 222 is an essential course for students wishing to complete the academic requirements of professional accounting bodies such as Chartered Accountants Australia and New Zealand.
ACCT 222 is also a prerequisite for ACCT 332: Advanced Management Accounting. Upon successful completion of ACCT 222 and ACCT 332, students will be able to demonstrate an understanding of contemporary management accounting theory and practice.

Me whakaoti i mua | Prerequisite

ACCT 102

Hua Ako | Course Learning Outcomes

After successfully completing this course, students will be able to:
1.    discuss the nature and role of management accounting in organisations
2.    calculate the cost of products and services using various costing techniques
3.    discuss the nature and purpose of alternative costing techniques for products and services
4.    develop budgets and evaluate their purpose and uses, including behavioural implications
5.    calculate and explain budget and standard cost variances
6.    make non-routine decisions supported by both financial calculations and evaluation of non-financial issues
7.    discuss how an organisation’s strategy influences its plans, controls and actions
8.    calculate and analyse the performance of organisations using financial, non-financial and qualitative measures
9.    convey their views, analysis and recommendations to others using oral and written forms of communication

Āhuatanga Tāura | Graduate Attributes

By successfully completing ACCT 222, students will be achieving the learning objectives of the Bachelor of Commerce, leading to the graduate attributes of being:
•    critically competent in a core academic discipline of your degree
•    employable, innovative and enterprising

Whāinga Ako | BCom Learning Objectives

•    Students have an in-depth understanding of their majoring subject and are able to critically evaluate and, where applicable, apply this knowledge to topics/issues within the discipline.
•    Students have a broad understanding of the key domains of commerce.
•    Students will develop key skills and attributes sought by employers which can be used in a range of applications.
•    Students will be aware of and understand the nature of biculturalism in Aotearoa New Zealand, and its relevance to their area of study and/or their degree.
•    Students will comprehend the influence of global conditions on their discipline and will be competent in engaging with global and multi-cultural contexts.

Mahi ā-Ākonga | Workload

ACCT 222 is a 15-point, 12-week course, and the total workload for an average student, covering background reading, lectures, tutorials, assignments and revision, is expected to be 150 hours (10 hours per point). To encourage active learning towards the achievement of the learning outcomes, an average student’s workload is detailed below:

Activity                  Preparation         Contact per week    Total
Lectures                  3 hours per week    2 hours             60 hours
Homework and tutorials    2½ hours per week   1 hour              42 hours
Unstructured inquiry      1 hour per week     —                   12 hours
Term test                 16 hours            2 hours             18 hours
Final exam                16 hours            2 hours             18 hours
                                                                  150 hours

The activities listed above are discussed in depth in subsequent sections of this course outline, except for unstructured inquiry. To successfully complete this course, students are expected to carry out a substantial amount of unstructured inquiry into issues relevant to the course. There are numerous sources of information which may be of interest, for example:
•    newspapers such as the National Business Review and The Press
•    professional journals such as Acuity, Harvard Business Review and Strategic Finance
•    websites such as www.stuff.co.nz, www.CFO.com, and wikipedia.org
•    ACIS Department seminars

Kauhau | Lectures

ACCT 222 will have two lectures per teaching week.
Please refer to the Course Information System for the lecture times and venues:
http://www.canterbury.ac.nz/courseinfo/GetCourseDetails.aspx?course=ACCT222&occurrence=25S1(C)&year=2025

The lectures are recorded, so you can listen to a lecture and watch the visual presentation if you miss one, or if there is something you would like to go over again.

Akoako | Tutorials

Tutorials are held each week, beginning in the second week of the course. Information on tutorial times and venues is presented on the Course Information System. To enrol in a tutorial, go to My Timetable: https://mytimetable.canterbury.ac.nz/aplus/apstudent. You must attend the tutorial group you enrolled in. Tutorials will be used to develop and discuss assigned homework and tutorial questions. The success of the tutorials depends on adequate preparation by students and active participation during class. Assessment related to tutorials is discussed in the next section.

Ako | Learn

Ako | Learn (http://learn.canterbury.ac.nz/) will be used to deliver lecture, tutorial and homework material to students, and for submission of homework. Material such as lecture slides and handouts should be printed or downloaded prior to attending classes. Important information, such as test hints, will also be detailed on Ako | Learn. Students will have the opportunity to discuss course-related issues with the lecturer and other students using the Course Information Forum. Students are advised to become familiar with Ako | Learn as soon as possible and to log on to it regularly.

Aromatawai | Assessment

The course has four forms of assessment: homework, tutorial participation, the term test, and the final examination.

Homework

Most homework must be completed on the computer in Microsoft Excel and submitted through Ako | Learn by 3 pm on Mondays. Formulas must be used for questions requiring calculations. The submitted file must be your own work.
Occasionally there is a quiz for homework, which must also be submitted before 3 pm on Mondays. The best 10 of the 11 homework assignments, worth 2% each, will contribute to the final course grade (i.e. a maximum of 20% can be earned through homework submissions).

Generative AI Tools Cannot Be Used for This Assessment

In this assessment, you are strictly prohibited from using generative artificial intelligence (AI) to generate any materials or content related to the assessment. This is because we want you to practise using formulas to solve problems in Excel, and to be able to explain concepts and reasons for choices in your own words. The use of AI-generated content is not permitted and may be considered a breach of academic integrity. Please ensure that all work submitted is the result of your own human knowledge, skills, and efforts.

Tutorial Participation

Attendance at all ten tutorials is expected. Students should notify their tutor if they are unable to attend because of illness. Up to 10% (i.e. 1% per tutorial) will be awarded for participation in tutorials, as assessed by tutors. Mere attendance at tutorials is not considered participation; discussion and working in groups are expected every week.

Term Test

The term test is worth 35% of the course grade and will be 2 hours long. It will be held on Tuesday 29 April, 7:00–9:00 pm on campus. The test will cover material from lectures in weeks 1–6, up to and including the 28 March lecture.

Final Examination

The final exam is worth 35% of the course grade and will be 2 hours long. The date will be set by the university after the completion of enrolment; please consult the Course Information System to confirm the date, time and venue. The exam will have questions on material covered in weeks 7–12 (from 1 April onwards). Please check that there are no clashes of test or exam dates and times for the set of subjects in which you are enrolled.
If you have a clash, please notify the Course Coordinator as early as possible.

Course Grade

The overall grade for the course is made up of a maximum contribution for each type of assessment as follows:

Assessment              Date                        Weight    Learning outcomes assessed
Homework                Weekly by Monday 3 pm       20%       2, 3, 4, 5, 6, 7, 8, 9
Tutorial participation  Weekly                      10%       1, 2, 3, 4, 5, 6, 7, 8, 9
Term test               Tuesday 29 April, 7–9 pm    35%       1, 2, 3, 4, 5, 9
Final exam              To be announced             35%       5, 6, 7, 8, 9

You must gain an overall grade of at least 50% in order to pass the course. You must also satisfy the ‘45% rule’: you must obtain a weighted average of not less than 45% in the invigilated component of the assessments in order to pass the course as a whole. (‘Invigilated’ means ‘formally supervised under exam conditions’.) In ACCT 222 the invigilated components comprise the Term Test and the Final Exam.

Assessment in Te Reo Māori

In recognition that Te Reo Māori is an official language of New Zealand, the University provides for students who may wish to use Te Reo Māori in their assessments. If you intend to submit your work in Te Reo Māori, you are required to read the Assessment in Te Reo Māori Policy and ensure that you meet the conditions set out in the policy. These include, but are not limited to, informing the Course Coordinator (1) no later than 10 working days after the commencement of the course that you wish to use Te Reo Māori, and (2) at least 15 working days before each assessment due date.

Quality Assurance

For quality assurance purposes the School is required to hold on record a number of assessment pieces as examples of differing standards of work. If you have any objection to the School holding your assessment for this purpose, email the course coordinator to ensure your assignment is not used.
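The pass requirements above (at least 50% overall, plus the '45% rule' on the invigilated Term Test and Final Exam) can be expressed as a quick check. The weights come from the course-grade table; the marks below are hypothetical, out of 100:

```python
# Assessment weights from the course-grade table.
WEIGHTS = {"homework": 0.20, "tutorial": 0.10, "test": 0.35, "exam": 0.35}
INVIGILATED = ("test", "exam")  # components covered by the 45% rule

def passes(marks: dict) -> bool:
    """Apply both pass rules: 50% overall and the 45% invigilated rule."""
    overall = sum(WEIGHTS[k] * marks[k] for k in WEIGHTS)
    invig_weight = sum(WEIGHTS[k] for k in INVIGILATED)
    invig_avg = sum(WEIGHTS[k] * marks[k] for k in INVIGILATED) / invig_weight
    return overall >= 50 and invig_avg >= 45

# Strong coursework cannot compensate for weak invigilated marks:
# overall = 57.7 but invigilated average = 41, so this student fails.
print(passes({"homework": 95, "tutorial": 100, "test": 40, "exam": 42}))
print(passes({"homework": 70, "tutorial": 70, "test": 50, "exam": 55}))
```

The first example shows why the rule exists: a 57.7% overall grade still fails because the invigilated weighted average falls below 45%.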
Disruption Disclaimer

While the above assessment weightings will apply in normal circumstances, in extraordinary situations (such as, but not limited to, earthquakes, snowstorms, or lockdowns) we may make alterations to ensure appropriate assessment. Any changes will be communicated via Ako | Learn and email messages.

Tuhinga | Texts and Readings

Eldenburg, L. G., Brooks, A., Vesty, G. and Pawsey, N. (2025). Management Accounting, 5th edition, John Wiley & Sons Australia, Ltd.

This required textbook, referred to as ‘Eldenburg’ throughout the remainder of this course outline, is available as an interactive online textbook. You can purchase it online from the publisher by clicking on the “Buy-to-own” tab at https://www.wileydirect.com.au/blog/buy/management-accounting-5th-edition/ (note that the quoted price on the website is in Australian dollars). Alternatively, you can pay for a subscription for access to the textbook during the semester; subscription information is on the same site. Note that the subscription is for one semester: if you plan to do ACCT 332, you will have to pay the subscription again for the semester in which you take that course. If you prefer a printed copy of the textbook, you can purchase one from the University Bookshop. The printed copy comes with an access code so you can also download and access the eBook.

Uiuinga | Consultations

Students should discuss academic problems or queries with their tutor or lecturer in the first instance, during or after tutorials and lectures. Administrative matters should be discussed with the ACCT 222 Course Coordinator, who is responsible for the general conduct of the course. Other concerns (and accolades) may be communicated through the class representatives, who are selected at the beginning of the course.


[SOLVED] FIT9132 Introduction to Databases Assignment 1 - Database Design (SQL)

FIT9132 Introduction to Databases — Assignment 1: Database Design

Ocean Odyssey

Purpose: Given the provided case study, students will transform the information provided into a sound database design and implement it in Oracle. This task covers learning outcomes:
1.  Apply the theories of the relational database model.
2.  Develop a sound relational database design.
3.  Implement a relational database based on a sound database design.

Your task: This is an open-book, individual task. The output for this task will be an initial conceptual model as a PDF document and a logical model implemented in the Oracle RDBMS.

Value: 40% of your total marks for the unit.

Due Date: Wed, 30 April 2025 at 4:30 pm.

Submission:
●    Via Moodle Assignment Submission.
●    FIT GitLab check-ins will be used to assess the history of development.

Assessment Criteria:
●    Using the supplied case study description, prepare a conceptual model identifying the required entities, attributes and relationships.
●    Normalise the supplied case study form/s and integrate the resultant relations into a logical model derived from the identified conceptual model.
●    Depict the data requirements expressed in the case study via a relational database logical model.
●    Generate a schema that meets the case study data requirements from the logical model produced.
●    Consistent use of industry-standard notation and conventions.

Late Penalties:
●    5% of the marks available for the task (-5 marks) deducted per calendar day, or part thereof, for up to one week.
●    Submissions over 7 calendar days after the due date will receive a mark of zero (0), and no assessment feedback will be provided.

Support Resources: See the Moodle Assessment page.

Feedback: Feedback will be provided on student work via:
●    general cohort performance
●    specific student feedback fifteen working days post-submission (approved by ADE)
●    a sample solution

Case Scenario

Ocean Odyssey (OO) is a worldwide travel company.
The company books passengers on ships that operate cruises departing from various ports worldwide.

Each ship is operated by a particular company known as the operator. Each operator is assigned an operator ID as an identifier, and the company's name and the Chief Executive Officer's name are recorded. A given operator operates one or more ships.

For each ship, Ocean Odyssey records a ship code to identify the ship, the ship's name, the date the ship was commissioned, its tonnage, its maximum guest capacity and the name of the country where the ship is registered. The cabins on a given ship are identified by a cabin number (such numbers may be reused across ships, e.g. many ships may have a cabin D1). Ocean Odyssey records a particular cabin's sleeping capacity and the cabin's class for a given ship (this class classifies the quality of the experience and services available).

A cruise uses a particular ship (a cruise only uses one ship) and departs on a particular date and at a particular time. A cruise ID identifies each such cruise. Ocean Odyssey records the name of the cruise and a brief description of the cruise.

Passengers register with Ocean Odyssey when they make their first cruise booking. Each passenger is assigned a unique ID, and the passenger's first and last name, gender and date of birth are recorded. If the passenger is a minor (i.e. under 18 years of age), another registered passenger must be designated as a guardian; the guardian must be able to be identified by the system. This data is used during booking to ensure minors are accompanied by their guardian. Each passenger's address is recorded as a street (including street number), town, postcode, and country. When the members of a particular family book on a cruise, they often all have the same address.

Ocean Odyssey maintains a manifest (a list of booked passengers) for all cruises they manage.
This manifest records the cabin that has been allocated to each passenger for each cruise (this allocation is carried out when the passenger is booked on the cruise). For each passenger taking part in a cruise, OO also records the date and time when they first boarded the ship.

REMEMBER to keep up to date with the Moodle Ed Assignment 1 forum, where further clarifications may be posted (this forum is to be treated as your client). To view Assignment 1 posts only, select the Assignment category and then the Assignment 1 forum from the Categories list in the left panel. Once selected, you can filter the posts via the Filter option at the top of the list of posts.

Please be careful to ensure you do not publicly post anything that includes your reasoning, logic, or any part of your work to this forum. Doing so violates Monash plagiarism/collusion rules and carries significant academic penalties. Use private posts to raise questions that may reveal part of your reasoning or solution. You are free to make assumptions if needed; however, they must align with the details here and in the assignment forums and must be clearly documented (see the required submission files). Normally, such assumptions would only relate to minimum cardinality, which is not expressed in the case study.

GIT STORAGE

Your work for these tasks MUST be saved in the provided Assignment/Ass1 folder of your local repository and regularly pushed to the FIT GitLab server to build a clear history of the development of your model.

TASKS to be Completed

TASK 1 Ocean Odyssey Conceptual Model [15 Marks]

Based on the case scenario on page 2 of this document, prepare a CONCEPTUAL model for Ocean Odyssey. In preparing this model, you must only use the description provided on page 2 of this document.
Your model must be saved in a file named oo_conceptual.pdf. Your development history, as pushed to GitLab, must clearly show the steps you have been taught:
●    Step 1: entities and keys
●    Step 2: relationships, and
●    Step 3: non-key attributes

The PDF file of your model must have at least three pushes (remember, all pushes must be of a file with the same name: oo_conceptual.pdf). Three pushes are a minimum; you are free to make more (and we would expect more, in which case you will have more than one commit/push for each step). You must regularly check that your pushes have been successful by logging in to the FIT GitLab server's web interface; do not simply assume they are working. Do not forget to check that your GitLab author details are correct for every push. Before submission via Moodle, you must log in to the GitLab server's web interface and ensure your final submission files are present. Git automatically maintains a history of all files pushed to the server, so you do not need to, and MUST not, add a version name to your various versions. Please ensure you use the same name (oo_conceptual.pdf) for all saved versions of your solution.

The steps to complete this task: Using LucidChart, prepare a FULL conceptual model (Entity Relationship Diagram) using crow’s foot notation for Ocean Odyssey (OO) as described above.
●    For this FULL conceptual model (ERD), include:
○    identifiers (keys) for each entity,
○    all required attributes, and
○    all relationships. Cardinality (min and max) and connectivity for all relationships must be shown on the diagram.
●    Surrogate keys must not be added to this model.

Your model must conform to the unit ERD standards listed in the “Conceptual Modelling” Applied lesson "Unit Entity Relationship Diagram Standards" on Ed. Your name must be shown on your diagram, and it must be exported as an A4 portrait page.
TASK 2 Ocean Odyssey Normalisation [15 Marks]

[The original brief includes an image here showing two sample cruise itineraries.] Note that a cruise may "loop" around its origination port, i.e. depart from the origination port, return to the originating port and then depart again, all as part of the same cruise.

Perform normalisation to 3NF for the data depicted in the supplied sample documents (note there are two samples; you only need to normalise one document/itinerary). This normalisation must be based only on the depicted form content; you must not introduce attributes not shown on the document. The approach you must use is shown in the “Normalisation” Applied class solutions. You must begin by representing the document you are working on as a single UNF relation and then move through 1NF, 2NF, and 3NF. No marks will be awarded if you use a different approach.

During normalisation, you must:
○    not add surrogate keys;
○    include all attributes shown on the form (you must not remove any attribute as derivable);
○    clearly show UNF, 1NF, 2NF and 3NF;
○    clearly show all candidate keys for each relation in 1NF;
○    identify the primary key in all relations by underlining the PK attribute/s;
○    identify all dependencies at the various normalisation stages (partial at 1NF, transitive at 2NF and full at 3NF), using the same notation as the normalisation sample solutions, for example: attr1 -> attr2, attr3. If none exist, you must note this by stating "No partial dependencies present" and/or "No transitive dependencies present";
○    carry out attribute synthesis if required.

The relation and attribute names used throughout your normalisation and those on your subsequent logical model must be the same. Your normalisation must be completed in an MS Word, Apple Pages, or Google document with a filename of oo_normalisation. If using MS Word or Pages, place the source document inside your local Assignment 1 GitLab repo (Assignments/Ass1).
The source document must be regularly saved and pushed to GitLab as you develop your normalisation. If you are using a Google document, you must regularly download the normalisation as a file called oo_normalisation.pdf and push it to GitLab. You must maintain the source Google document and make it available to your marker on request. Your normalisation must have at least three pushes to GitLab (remember all pushes must be of a file with the same name - oo_normalisation). The file extension for oo_normalisation will depend on which software you choose to use. Ensure that your name is shown on every page of the normalisation.
TASK 3 Ocean Odyssey Logical Model [55 marks]
Ocean Odyssey has supplied some further information to guide your modelling:
● The company records each passenger's contact phone number; for minors, no contact number will be recorded (the contact for their guardian will be used). The phone number should be recorded as a simple attribute. A new entity should not be created to hold the phone number.
● Cabins across the various ships are assigned a cabin class as one of the following:
○    Interior
○    Ocean view
○    Balcony, or
○    Suite
These classes are fixed and will not be modified.
1.    Prepare a logical level design for the Ocean Odyssey database based on your Task 1 conceptual model, the normalisations you carried out in Task 2 above, and the further details supplied here in Task 3.
●    The logical model must be drawn using Oracle Data Modeler. Information engineering or crow's foot notation must be used to draw the model. Your logical model must not show data types. You must create a new empty folder in your local repo, in the Ass1 folder, called oo_model, and then place your model inside this folder, naming the saved model oo_logical.
●    All relations depicted must be in 3NF. Candidate keys are possible natural keys; you must ensure your model protects all candidate keys to maintain the business rules.
●    You must add at least one surrogate key to your design (you are free to select the most appropriate relation to make this change in). You must explain why you added the surrogate key to your chosen relation as part of your assumptions. We have a unit rule requiring a surrogate key if the relation has a composite key with more than two attributes, but this is not the only reason you might add a surrogate. You may add surrogate keys to multiple relations if you wish.
●    All attributes must be commented in the database (i.e., the comments must be part of the table structure, not simply comments in the schema file).
●    Check clauses/look-up tables must be applied to attributes where appropriate.
●    You MUST include the legend in your model. Please edit the legend panel to show your name and ID number.
●    Please carefully check the slide "Overall Design Process - checklist" from the "Logical Modelling" Workshop and ensure you follow the steps listed.
●    Your Git repository must indicate your development history with multiple commits/pushes as you work on your model. A minimum of six pushes is required for your logical model as it is developed, to show this history. You are free to make more pushes/commits and are encouraged to do so.
2. Generate the database schema in Oracle Data Modeler and use the schema to create the database in your Oracle account. The only edit you are permitted to carry out on the generated schema file is to add header comment/s containing your details and the commands to spool/echo your run of the script (as illustrated in "Logical Modelling" Applied Stage 3 on Ed). In generating your schema file, ensure you:
●    capture the output of the run of your schema statements using the spool command;
●    include drop table statements at the start of the script;
●    name the schema file oo_schema.sql.
Please note when working with your model, ensure that you NEVER select any export options from the Data Modeler menu: such actions can fill your Oracle account space and render it unusable.
Tasks 1, 2 and 3 - Use of Modelling Standards/Meeting Submission Requirements and Git usage [15 marks]
See the Marking Guide section of this document for further details.
Use of Generative AI tools
In this assessment, you can only use generative artificial intelligence (AI) to assist with design decisions. Any use of generative AI must be appropriately acknowledged (see Learn HQ).
Requirements
The following seven files are to be submitted and must also exist in your FIT GitLab server repo:
● A single-page PDF file containing your full final conceptual model. Name the file oo_conceptual.pdf. This file must be created via File - Export (or Download As) - PDF from LucidChart (do not use screen capture) and must be able to be accessed with a development history via Git. You can create this development history by downloading your PDFs (don't forget to use the same name, oo_conceptual.pdf - DO NOT use version 1, etc.) and committing/pushing to Git as you work on your model. In exporting from LucidChart, please select a page size of A4 with portrait mode.
● A PDF document showing your full normalisation of the sample cruise itineraries, showing all normal forms (UNF, 1NF, 2NF and 3NF). Name the file oo_normalisation.pdf.
● A single-page PDF file containing the final logical model you created in Oracle Data Modeler. Name the file oo_logical.pdf. This PDF must be created via File - Data Modeler - Print Diagram - To PDF File from within Data Modeler; do not use screen capture.
● A zip file containing your Oracle Data Modeler project (when zipping these files, be sure to include the .dmd file and the folder of the same name). Name the zip file oo_model.zip.
Part of the assessment of your submission will involve your marker extracting your model from this zip, opening it in Data Modeler, and engineering it to a new relational model. From this, your marker will generate a schema, which will then be compared with your submitted schema (they must be the same for your schema to be accepted). For this reason, your model must be able to be opened by your marker and contain your complete model (i.e. both your logical and relational models); otherwise, your submission will not be able to be fully marked, resulting in a significant loss of marks. You MUST carefully check that your model is complete - take your submission archive, copy it to a new temporary folder, extract your submission parts, extract your model and ensure it opens correctly before submission. Please view the video on Ed under the lesson "A6 Oracle Data Modeler Support Videos", which demonstrates this process.
●    A schema file (CREATE TABLE statements) generated by Oracle Data Modeler. Name the file oo_schema.sql.
●    The output from the Oracle spool command showing the tables have been created. Name the file oo_schema_output.txt.
●    A PDF document containing any assumptions you wish to make your marker aware of. Name the file oo_assumptions.pdf. If you have made no assumptions, submit the document with a single statement saying, "No assumptions made".
Your assignment MUST show a status of "Submitted for grading" before it will be marked. If your submission shows a status of "Draft (not submitted)", it will not be assessed and will incur late penalties after the due date/time.
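The dependency notation used in Task 2 (e.g. attr1 -> attr2, attr3) can be sanity-checked against sample data before you commit to a decomposition. Below is a minimal Python sketch of such a check; it is not part of the submission requirements, and the row data and attribute names (CruiseID, ShipName, PortID) are purely hypothetical:

```python
def fd_holds(rows, lhs, rhs):
    """Return True if the functional dependency lhs -> rhs holds in the
    sample rows (each row is a dict mapping attribute name -> value)."""
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in lhs)
        val = tuple(row[a] for a in rhs)
        if seen.setdefault(key, val) != val:
            return False  # same determinant, different dependent values
    return True

# Hypothetical itinerary rows, just to exercise the check
rows = [
    {"CruiseID": "C1", "ShipName": "Aurora", "PortID": "P1"},
    {"CruiseID": "C1", "ShipName": "Aurora", "PortID": "P2"},
    {"CruiseID": "C2", "ShipName": "Borealis", "PortID": "P1"},
]
print(fd_holds(rows, ["CruiseID"], ["ShipName"]))  # True
print(fd_holds(rows, ["PortID"], ["CruiseID"]))    # False
```

Remember that sample data can only refute a dependency, never prove it; the dependencies you record must come from the business rules on the form.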


[SOLVED] GEOG0178 Machine Learning for Social Sciences with Python

GEOG0178 Machine Learning for Social Sciences with Python
COURSEWORK INSTRUCTIONS
COURSEWORK: Machine learning-data science project
Deadline – noon 28 April, 2025
The objective: The objective of this coursework is to analyse a dataset of your choice in order to provide a data-based answer to a research question. Thus, naturally, your first tasks include figuring out a research question that interests you and finding a dataset that can help you answer it by applying machine learning modelling to it. Big data producers such as the UN, the World Bank, the OECD and Eurostat may be of interest in terms of data search, but you are welcome to explore further. Any topic from the social sciences can be analysed. The final output of this assessment is a coherent report (supported by a Jupyter notebook that contains the Python code for the analysis).
Report structure: Students should submit the report (in .pdf format) through Turnitin on the course Moodle page, under the "Assessment" tab. The report should be coherent and structured as follows:
1. Introduction
·  Background, context and research question
2. Brief literature-based overview of the topic and the research question
·  Cite a minimum of 10 academic and/or policy papers
3. Data and method
·  Variables
·  Why is the selected machine learning model(s) appropriate for the selected dataset/question? Main model premises.
·  Discuss data cleaning/wrangling (if applicable)
4. Interpretation and discussion of machine learning modelling results
·  Exploratory data analysis (EDA)
·  Model results and performance
·  Comparison of models***
·  Limitations and implications
5. Conclusion
·  Summary of the main findings
·  Implications of the findings
·  How could the analysis/model be improved?
·  Suggestions for further research within the topic
***Important note: students are required to run at least two machine learning models in their analysis and compare their performance.
Note that the two (or more) models do not necessarily need to be different machine learning techniques. They can be two or more model variations conducted with the same machine learning technique (e.g., a random forest with 8 variables as the 1st model and a random forest with 5 variables as the 2nd model).
Submission format: The report should start with the UCL Geography cover page, which you can download from the link inserted in the box above. Submit a PDF document with text of font size 11 or 12, written fully in complete sentences, i.e. not using bullet points and not including any Python code in the report. The report's maximum length is 2,000 words (the +/- 15% rule DOES NOT apply), which you are free to divide in any way between the sections and subsections. Please respect the maximum word count of 2,000 words. The word count includes headings, subheadings and main text, but excludes the coursework cover page, title, captions of figures or tables, bibliography (list of references) and appendices (if there are any) at the end of the document. The maximum number of figures is 10 in total (multiple sub-figures used to make the same point are allowed), and the relevance of these figures should be explained in your write-up.
The code developed by the student should be submitted using a separate submission link available on the course Moodle page in a single ZIP (compressed) file. The code can be submitted as Jupyter notebook(s), i.e. .ipynb files, but it must be contained within one ZIP file. Please add your data file(s) to the ZIP file.
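The "compare their performance" step usually reduces to computing the same held-out metrics for both model variations. A minimal sketch in plain Python of two common regression metrics; the target and prediction vectors here are invented purely for illustration, not real results:

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error of predictions against held-out targets."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Invented held-out targets and predictions from two model variations
y_true  = [3.0, 5.0, 7.0, 9.0]
model_a = [2.8, 5.1, 7.3, 8.9]   # e.g. variation 1: more predictors
model_b = [2.0, 6.0, 6.0, 10.0]  # e.g. variation 2: fewer predictors
for name, pred in [("model A", model_a), ("model B", model_b)]:
    print(name, "RMSE:", round(rmse(y_true, pred), 4), "R^2:", round(r_squared(y_true, pred), 4))
```

In practice you would compute these (or classification analogues such as accuracy and F1) with a library on a proper train/test split; the point is that both models must be scored on the same held-out data for the comparison to be meaningful.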


[SOLVED] Stat 4194 Applied Sports Statistics Spring 2025

Stat 4194: Applied Sports Statistics
Spring 2025 (29679)
Course Overview
Course Description
Applied statistics topics in a sports context, including regression, categorical analysis, fixed and random effect models, machine learning and predictive models, time series, and supervised and unsupervised clustering.
Course Learning Outcomes
By the end of this course, students should successfully be able to:
• Propose and test measurable, relevant research questions and statistical hypotheses in sports contexts
• Apply statistics and data science methods in sports contexts
• Read and critique published academic work on research related to statistics and data science in sports contexts
• Create professional, rigorous visualizations and reports on statistical analyses
Materials
Text
Required text: Stat 4194 Course Notes (electronic, on Carmen)
Recommended text resources:
• James, Witten, Hastie, Tibshirani: An Introduction to Statistical Learning with Applications in R, 2nd edition (https://statlearning.com/)
• Applied Linear Regression Models, 4th edition, by Kutner, Nachtsheim, and Neter, 2004
Software
• Required software: we will extensively use the statistical software package R (The R Project for Statistical Computing; http://www.r-project.org/). This software is available for free; you can download R for Windows, Mac, and Linux from the CRAN archive at https://cran.r-project.org. An in-depth introduction to R is available at http://cran.r-project.org/doc/manuals/R-intro.pdf. Tutorials are available in the Swirl system, which you can learn about at http://swirlstats.com/. "R Programming: The basics of programming in R" is an appropriate first tutorial for students who have never used R.
• Required software: we will also use the R interface RStudio. This package is available for Windows, Mac, and Linux and can be downloaded for free from http://rstudio.org. Note that RStudio requires R to be installed.
• Required software: Microsoft Office 365 ProPlus. All Ohio State students are now eligible for free Microsoft Office 365 ProPlus through Microsoft's Student Advantage program. Each student can install Office on five PCs or Macs, five tablets (Windows, iPad® and Android™) and five phones. Students are able to access Word, Excel, PowerPoint, Outlook and other programs, depending on platform. Users will also receive 1 TB of OneDrive for Business storage. Office 365 is installed within your BuckeyeMail account. Full instructions for downloading and installation can be found at https://ocio.osu.edu/kb04733.
Grades and Assignments
Homework
Homework assignments will comprise 20% of your grade for the course. Assignments will only be accepted through Carmen as a .pdf file submission, with clear and organized work and relevant code and output provided. Every assigned problem should be completed, but only a subset of problems may be graded. You are encouraged to collaborate on homework assignments, but ultimately the work you submit must be your own. All homework assignments will be included in your final grade.
Quizzes
Quizzes will comprise 10% of your grade for the course. These short assignments will frequently be given during lectures and may be unannounced. The lowest quiz grade will be dropped. Quizzes missed due to an excused absence must be made up during office hours within one business day of the missed class.
Projects
The course will include three projects, each comprising 20% of your grade. They will roughly be due in Weeks 5 and 11, and during final exam week.
Attendance
Attendance will be taken daily and will comprise 10% of your course grade. All students get one free absence, no questions asked. Additional absences will be considered on a case-by-case basis. You can strengthen your case by reaching out farther in advance, attending office hours to discuss the missed material, and providing documentation such as a doctor's note.
Two tardies will be counted as one absence.
Late Assignments
Late homeworks and projects will be accepted for 48 hours after the original due date with a 2% deduction per hour. After this, no late assignments will be accepted. Do not wait until the last moment to begin working on assignments. Unexpected obstacles will occur in life - it is your responsibility to be prepared for them. If something unexpected comes up 2 hours before an assignment is due that impedes your ability to submit on time, then you should have started the assignment earlier. Submitting the wrong document - such as the blank assignment template, an incomplete version, or a corrupted version of the file - is not a valid excuse. It is your responsibility to ensure you have submitted the proper document in the proper format.
For emergencies, each student has one late waiver throughout the semester, no questions asked: if something unexpected comes up, I'll waive the late penalty for one assignment of your choosing, provided you still turn in that assignment within 48 hours of the due date.
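The late policy above implies a concrete deduction schedule: 2% per hour late, up to the 48-hour cutoff (so a maximum deduction of 96%). A small illustrative calculator, assuming each started hour counts as a full hour; this is just a reading of the stated policy, not an official grading tool:

```python
import math

def late_deduction(hours_late):
    """Fraction deducted under the stated policy: 2% per (started) hour,
    accepted up to 48 hours late, rejected afterwards (returns None)."""
    if hours_late <= 0:
        return 0.0                      # on time
    if hours_late > 48:
        return None                     # no longer accepted
    return 0.02 * math.ceil(hours_late)

print(late_deduction(1.5))             # 0.04 -> a 4% deduction
print(round(late_deduction(48), 2))    # 0.96
print(late_deduction(50))              # None
```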


[SOLVED] Math 220 Final exam

Math 220 Final Exam
Recall that the set of irrational numbers is R \ Q.
1. Are the statements (a) to (j) below True or False? Write T or F in the box.
(a) ∅ ∈ {N, {∅}} and N ∈ P(N).
(b) P(N) ∪ P(Z) = P(Z).
(c) (R × Q) ∩ (Q × R) = Q × Q.
(d)
(e) {x ∈ R s.t. (∃n ∈ Z s.t. x = (n+1)(n+2)(n+3))} ⊆ {x ∈ Z s.t. (∃k ∈ Z s.t. x = 3k)}.
(f) {x ∈ Z s.t. (∃k ∈ Z s.t. x = 3k)} ⊆ {x ∈ R s.t. (∃n ∈ Z s.t. x = (n+1)(n+2)(n+3))}.
(g) There exists an injective function R → P(R).
(h) The product of two irrational numbers is irrational.
(i) For a function f : A → B, recall the definition of Range(f).
(j) Given a function f : A → B and Y ⊂ B, recall the definition of f⁻¹(Y).
(k) Give the Euclidean division of −142 by 9.
2. Prove or disprove the following statements:
(a)
(b) There exists x ∈ Z such that x² ≡ 142 mod 9.
(c) There exists x ∈ Z such that x² ≡ −142 mod 9.
(d) For any function f : Z → Z and any X ⊆ Z we have f⁻¹(f(X)) ⊆ X.
3. This question is on two pages. Parts (a), (b) and (c) are related. You may admit the result in (b) to prove (c). We admit that for any real number x ∈ R, there exists a unique integer c(x) ∈ Z such that x ∈ (c(x) − 1, c(x)].
(a) Write the required values in the boxes: c(0.2) =            c(−5.3) =
(b) Prove, for x ∈ R: x > 0 =⇒ c(x) ≥ 1.
(c) Prove
4. Find (with proof) a ∈ {0, . . . , 6} such that 6354235a ≡ 1 mod 7.
5. In this question, Parts (a) and (b) are connected. You may admit the result of Part (a) to prove Part (b).
(a) Prove that
(b) Prove that
6. The Fibonacci numbers are defined by the recurrence F₁ = 1, F₂ = 1, and Fₙ = Fₙ₋₁ + Fₙ₋₂ for n > 2. Show that for every k ∈ N: F₄ₖ is a multiple of 3.
7. This question has several parts from page 9 to page 12. We consider the function
(a) Compute f⁻¹({1}) and f⁻¹({6}).
(b) Is f injective? Justify your answer.
(c) Is f surjective? Justify your answer.
(d) Let
(d)-1) Show that Range(g) = R \ {1}.
(d)-2) Show that g([0, 4)) = [4/9, +∞).
(d)-3) Compute f((−2, 0]) where f is the function defined at the beginning of Question 7. Hint: there is a link between functions f, g and h.
8. This question has 3 parts on two pages. Consider a function f : A → B between sets A and B. Define the function
(a) When f is the function             , describe the function F explicitly by giving all its values.
(b) With the same example of function f as in part (a), is F injective? Justify your answer.
(c) Now f is general again, as at the beginning of Question 8, before Part (a). Prove: if f is injective, then F is injective.
9. For each of the following relations:
• If it is not an equivalence relation, explain why.
• If it is an equivalence relation, no need to prove it, but give (without proof) a description or list of the equivalence classes.
a) On the set {1, . . . , 20}, the relation R given by: xRy when (2 | (x − y) or 5 | (x − y)).
b) On the set F of functions R → R, the relation S given by: fSg when there is x ∈ R such that f(x) = g(x).
c) On the set R × R, the relation T given by: (x, y) T (x′, y′) when y − 3x = y′ − 3x′.
d) On the set F of functions R → R, the relation U given by: fUg when there is x ∈ R such that |f(x) − 1| = |g(x) + 1|.
e) Let X be a non-empty set. On P(X), the relation V given by: A V B when A ⊆ B.
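For Question 6, the claim is easy to check numerically before attempting the induction: the residues of Fₙ mod 3 cycle with period 8 (1, 1, 2, 0, 2, 2, 1, 0, ...), hitting 0 exactly at multiples of 4. A quick Python sanity check, a numerical observation only, not a proof:

```python
def fib_mod(n, m):
    """F_n mod m, computed iteratively, with F_1 = F_2 = 1."""
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, (a + b) % m
    return a

# F_{4k} mod 3 for k = 1..5 -- all zero, consistent with the claim
print([fib_mod(4 * k, 3) for k in range(1, 6)])  # [0, 0, 0, 0, 0]
```

The induction itself would reduce F₄₍ₖ₊₁₎ to F₄ₖ and F₄ₖ₊₁ via the recurrence and work modulo 3.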


[SOLVED] CS218 Spring 2025 Assignment 1

CS218, Spring 2025
Assignment #1
Due: 11:59pm, Friday, 4/11
Deadline. The homework is due 11:59pm, Friday, 4/11. You must submit your solutions (in pdf format generated by LaTeX) via Gradescope. The training programming assignment is due earlier; see more details below.
Late Policy. You have up to four grace days (calendar days) for the entire quarter. You don't lose any points by using grace days. If you decide to use grace days, please specify how many grace days you would like to use and the reason at the beginning of your submission.
Collaboration Policy. You can discuss the homework solutions with your classmates. You can get help from the instructor, but only after you have thought about the problems on your own. It is OK to get inspiration (but not solutions) from books or online resources, again after you have carefully thought about the problems on your own. However, you cannot copy anything from another source. You cannot share your solution/code with anyone else. You cannot read others' solutions/code. If you use any reference or webpage, or discussed with anyone, you must cite it fully and completely (e.g., "I used case 2 in the examples in the Wikipedia page https://en.wikipedia.org/wiki/Master_theorem_(analysis_of_algorithms) about the Master theorem for Problem 3", or "I discussed Problem 2 with Alice and Bob"). If you use ChatGPT or similar AI tools, you have to attach the full conversation to make it clear what help you obtained from them. Otherwise it will be considered cheating. We reserve the right to deduct points for using such material beyond reason. You must write up your solution independently, and type your answers word by word on your own: close the book/notes/online resources when you are writing your final answers.
Write-up. Please use LaTeX to prepare your solutions. For all problems, please explain how you get the answer instead of directly giving the final answer, except for some special cases (which will be specified in the problem).
For all algorithm design problems, please describe your algorithm using natural language. You could present pseudocode if you think that helps to illustrate your idea. Please do not only give code without explanation. In grading, we will reward not only correctness but also clarity and simplicity. To avoid losing points for hard-to-understand solutions, you should make sure your solutions are either self-explanatory or contain good explanations. See some more details here: https://www.cs.ucr.edu/~ygu/teaching/218/S25//assignments/index.html
Programming Problems. You will need to submit your code on Codeforces (a readme file about submitting code is available on the course webpage). You also need to submit a short report through Gradescope along with your solutions of the written assignments. In the report, you need to specify your submission id, describe the algorithm you designed, and show cost analysis if necessary. Note that your code will be automatically saved by Codeforces, so you do not need to submit the code again. For each problem, there will be 10-20 test cases. The training programming problems need to be finished before 11:59pm, Friday, 4/04. With a reasonable implementation, using C++ or Java is guaranteed to be able to pass all tests. You can use other languages, but it's not guaranteed that the implementations will run within the time limit.
1 A Complex Complexity Problem (1.6pts)
Yihan recently learned asymptotic analysis. The key idea is to evaluate the growth of a function. For example, she now knows that n² grows faster than n, but for more complicated functions, she feels confused. Can you help Yihan compare the functions below? We use log n to denote log₂ n, and n! = 1 × 2 × · · · × n. Explain each of your answers briefly. For the following questions, use o, Θ or ω to fill in the blanks.
Questions:
2 Solve Recurrences (0.9pts)
For all of them, you can assume that in the base case, when n is a constant, T(n) is also a constant. Use Θ(·) to present your answer.
Questions:
3 Test the Candies (2.5pts + 1 bonus pt)
You got a job at a candy factory. It's not always the case that all the candies produced are perfect; there will be some bad ones. Your task is to identify these bad candies among n candies and discard them. However, you cannot tell which ones are bad directly: they look exactly the same. The only thing you know is that a bad candy has a lighter weight than standard (good) candies. More precisely, all the good (standard) candies have the same weight w_g, and all the bad candies have the same weight w_b < w_g. The only device you have is a balance scale, and you don't have any weights (so you cannot really know the weight of each candy). As a result, the only thing you can do is put some candies on the left and some on the right, and the balance will tell you if the left side is heavier, the right side is heavier, or they balance. Every time you use the balance, you have to pay 1 dollar. All other costs are free. Your task is to find all bad candies at the lowest cost.
Questions: For all questions below, please describe your algorithm and briefly explain the cost.
1. (0.2pts) Your boss told you that there is only one bad candy. In that case, can you show an algorithm that uses ⌈log₂ n⌉ (note: this is not in big-O!) dollars to find this bad candy?
2. (0.2pts) Your boss told you that there is only one bad candy. Now let's improve the previous cost by a little bit. Can you show an algorithm that uses ⌈log₃ n⌉ (again, not in big-O!) dollars to find this bad candy?
3. (0.7pts) Prove that ⌈log₃ n⌉ dollars is the lower bound of the candy-testing problem in 3.2. In other words, you cannot use fewer than ⌈log₃ n⌉ dollars to guarantee finding the bad candy. (Note: you are asked to show that any possible solution needs ⌈log₃ n⌉ dollars, instead of giving a specific strategy.)
4. (0.3pts) Your boss told you that there are only two bad candies. Can you show an algorithm that uses O(log n) dollars to find the two bad candies?
5.
(0.3pts) If you already know that there are only k bad candies, where k is a known constant, can you show an algorithm that uses O(log n) dollars to find the k bad candies? You can assume that k is a known value and it can appear in your algorithm.
6. (0.8pts) In all the questions above, we assume you know the bad candy is lighter than standard. A more difficult case is where you only know that the bad candy is of a different weight, but you do not know if it is lighter or heavier. Again assume there is only one bad candy. Prove that, in this case, you need at least ⌈log₃ 2n⌉ dollars to find the bad candy and also tell whether it is lighter or heavier. (Hint: again, a common incorrect answer is to design an algorithm that uses ⌈log₃ 2n⌉ dollars and say this is optimal. This is not what the question asks. It asks you to show that any possible solution needs ⌈log₃ 2n⌉ dollars.)
7. (bonus, 1pt) Now let's consider the challenging setting where there is only one bad candy, but you don't know if the bad candy is lighter or heavier. Luckily, your boss also gave you one good candy as a reference (you'll find this useful). Now you have n = 13 candies, and one of them is bad (either lighter or heavier). Plugging this into the lower bound above gives ⌈log₃(2 × 13)⌉ = 3 dollars. Now, show a solution for n = 13 for this case using 3 dollars. Hint: maybe divide-and-conquer is a good idea.
Bonus Problems
4 Finding the Minimum Value (1pt)
This programming problem can be found on Codeforces.
5 Multiple Medians in Linear Time (1pt)
In order to get the 1 point from this problem, you need to design and implement an algorithm that can pass all test cases on Codeforces. Then, you must explain your algorithm and prove the O(n) cost (deterministic worst-case or randomized expected).
6 Share Candies (1pt)
This programming problem can be found on Codeforces.
7 Being Unique (2pts)
You are given an array A of n numbers from the set {1, 2, . . . , n}.
The array has the "unique-in-range" property if for every range [i, j] there exists an element A[k] (where i ≤ k ≤ j) such that the number A[k] occurs just once in that range. For example, the sequence
1 2 1 3 1 2 1 4 1 2 1 3 1
has the unique-in-range property, as does 1 2 3 4 5 6. But the sequence 1 2 3 1 2 3 does not (it fails on the whole range: every number appears twice). Give an algorithm to determine if a given array has the property. It should have runtime O(n log n) or better. You must prove your answer.
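When developing a fast algorithm for Problem 7, it can help to have a brute-force reference implementation to test against. The sketch below checks the unique-in-range property directly from its definition; it is deliberately naive (roughly O(n³)), a testing aid only, not the required O(n log n) solution:

```python
def unique_in_range(a):
    """Brute-force check: every range [i, j] of the list must contain a
    value that occurs exactly once within that range."""
    n = len(a)
    for i in range(n):
        for j in range(i, n):
            window = a[i:j + 1]
            if not any(window.count(x) == 1 for x in window):
                return False  # this range has no uniquely occurring value
    return True

print(unique_in_range([1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1]))  # True
print(unique_in_range([1, 2, 3, 4, 5, 6]))                       # True
print(unique_in_range([1, 2, 3, 1, 2, 3]))                       # False
```

Comparing this checker's output with your fast algorithm on random small arrays is a simple way to catch bugs before attempting the correctness proof.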


[SOLVED] CS 1100 Computer Science and Its Applications Topic 10 Pivot Tables

CS 1100 – Computer Science and Its Applications
Topic 10: Pivot Tables
How to Get Started
To get started, download the starter file (.xlsx).
What to Turn In
You must submit your solution to Canvas by the due date. When you finish the assignment, save the file and upload it to Canvas. The file will be named LastName-FirstName.pivotTables, where LastName is your last name and FirstName is your full first name.
Knowledge Needed
This assignment involves the following Excel functions and techniques:
• Pivot tables, inserting fields, filters, and slicers.
• Functional programming with dynamic arrays: LAMBDA, LET, SORT, UNIQUE, FILTER, SUMIFS, AVERAGEIFS, all LAMBDA-helper functions we covered in earlier assignments/labs, text processing functions such as TEXTBEFORE and TEXTAFTER, stacking functions (HSTACK, VSTACK), and array accessing functions such as TAKE, DROP, etc.
Keep in mind that how you express the formulas matters and is part of the grading. It's not just getting the right answer that counts; how you write the dynamic formula matters too.
Helper Array Debugging (HAD)
We use the Helper Array Debugging technique for LAMBDA Excel formulas with LET in assignments and exams. Checking your LAMBDAs with LET carefully with different inputs helps you understand the ChatGPT-generated formulas. It improves your debugging skills because it allows you to localize errors in the relevant subformulas.
HAD: Brief instructions for debugging formulas, specifically LAMBDAs with LET, using helper arrays.
• Show the LAMBDA using FORMULATEXT. You need it to reproduce the formulas in the helper arrays.
• The LET introduces names: each name is assigned a helper array. Put each name as the header of the helper array. If needed, introduce a final helper array for the final result.
• Put the formulas from the LET into the appropriate helper array using inputs and hashtag references to earlier arrays. You can use additional helper arrays if you need to see the output of subformulas.
• Check each step: does it produce the correct output for changing inputs? Note that using helper arrays also produces a dynamic array solution and lets you inspect the output of each step independently of future steps.
• Use ChatGPT to create the debugging formulas directly in the LAMBDA. It is recommended to use a LAMBDA to show the value or the error code as text (using the VALUETOTEXT function). We call the function echeck.
LAMBDA(value, IFERROR(value, "ERROR " & VALUETOTEXT(value,1)))
This function will help shorten the debugging code because we encapsulate the error handling for one error in a function with a short name (echeck). The purpose of echeck (an abbreviation for error check) is to return a value or to show the Excel 365 error string if the value cannot be computed. The show part of the debugging code does not need to know the details of the echeck implementation. Indeed, other implementations of echeck are possible, and the show section of the debugging should be independent of those details. Otherwise, we would have a violation of the Principle of Least Knowledge, aka the Law of Demeter.
The function below is a bit synthetic and abstract, but it illustrates the importance of catching all possible errors. For example, if you delete the line "d",echeck(d), you will get NO debugging information! Therefore, it is essential that each variable (called value in the LET) has an echeck line in the debugging formula area. Remember that LAMBDAs defined in the LET don't need such an echeck line, because the LAMBDA is guaranteed to cause an error (after all, its arguments are missing).
=LAMBDA(x,y, LET(
  a, SEQUENCE(x),
  b, 1/0 + a + y,
  c, SEQUENCE(x) = 0,
  d, FILTER(a, SEQUENCE(x) = 0),
  e, 1/0,
  COMMENT1, "Debugging formulas follow",
  echeck, LAMBDA(value, IFERROR(value, "ERROR " & VALUETOTEXT(value,1))),
  show, IFERROR(HSTACK(
    "a", echeck(a),
    "b", echeck(b),
    "c", echeck(c),
    "d", echeck(d),
    "e", echeck(e)), ""),
  show))(2,2)
LAMBDAs in this assignment and ChatGPT
Use the Design Recipe to guide the development of your LAMBDAs. In the testing part (section 6), use HAD to debug your formulas. Use the function echeck. Remember that we shared a document containing ChatGPT instructions in English for generating the debugging code. It would help if you used it to avoid the repetitive work of producing the debugging boilerplate. Make sure to fact-check the output.
Required Setup
Transform the data in the sheet Main Data Source to an Excel table. Name the table Problems1to5 and use it to create pivot tables in Problems 1 to 4.
Problem 1: Create a Pivot Table (22 points)
1. Create a pivot table using the data in the worksheet Main Data Source. Choose the option to put the pivot table in a new worksheet.
2. Label the new worksheet Problem 1.
3. Show the average REVENUE per MARKET split by LINE OF BUSINESS.
4. Separate the values by REGION.
5. Format REVENUE as Currency.
6. Show a subtotal for each MARKET.
7. Create a report using Tabular Form. You find this in the Design tab (under PivotTable Tools) > Report Layout > Tabular Form.
Your pivot table should look like Figure 1.
Problem 2: Create a Pivot Table and Pivot Bar Chart and Simulate It with Dynamic Array Formulas (30 points)
1. Create a pivot table and pivot bar chart to show the total Revenue by Region and then by Market using the data in the worksheet Main Data Source. Use the Tabular Form.
2. Label the new worksheet Problem 2.
3. Filter the chart so it shows only the South and sort the result. Don't show any subtotals or grand totals.
Note: Excel for Mac may produce Pivot Charts that look different from what is shown here.
4. Your pivot chart should look like Figure 3. You have latitude in your chart's look for color, fonts, and the like.
5. Write a function called SimplePivotTable to simulate the Pivot table below. A big advantage of using a function is that you don't have to press the Refresh button when the data changes. Follow the Design Recipe to construct the LAMBDA, including using HAD in the debugging phase. To define SimplePivotTable we use generic argument names, or terminology that works for other Pivot tables you may want to generate. The problem formulation is "Show total X by Y (filtered for specific Y_1) by Z using the data in the Main Data Source". Here are three examples (in the context of Main Data Source) of how the function SimplePivotTable could be used.
Show total X       by Y (filtered for specific value Y_1)     by Z
Show total Revenue by Region (filtered for specific Region_1) by Line of Business
Show total Revenue by Region (filtered for specific Region_1) by Market
Show total Revenue by Market (filtered for specific Market_1) by Line of Business
Figure 1: The Pivot Table in Tabular Form for Problem 1
Note that we need four formal arguments CNumber, CName1, CName2, SpecificCName1 corresponding to this line:
Show total CNumber by CName1 (filtered for SpecificCName1) by CName2
The signature for function SimplePivotTable is
SimplePivotTable(CNumber, CName1, CName2, SpecificCName1)
It returns a 3-column table, where CNumber is a column of numbers (e.g., REVENUE numbers) that will be summed, CName1 is a column of names (e.g., Region names), SpecificCName1 is a specific value in CName1 (e.g., South), and CName2 is a column of names (e.g., Market names). The columns CNumber, CName1 and CName2 are columns from the main data source. They all have the same number of rows.
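Before writing the Excel LAMBDA, it can help to see the intended behaviour in another notation. The following Python sketch is our own illustration, not part of the assignment; the function and variable names are ours. It mirrors the FILTER / UNIQUE / SORT / SUMIFS pipeline the Excel version must implement:

```python
def simple_pivot_table(cnumber, cname1, cname2, specific_cname1):
    """Return rows (specific_cname1, z, total) for each unique cname2 value z,
    restricted to rows where cname1 equals specific_cname1, sorted by z.
    FILTER ~ the if-test, SUMIFS ~ the running totals, SORT(UNIQUE(...)) ~ sorted(dict)."""
    totals = {}
    for n, y, z in zip(cnumber, cname1, cname2):
        if y == specific_cname1:              # FILTER on CName1
            totals[z] = totals.get(z, 0) + n  # SUMIFS aggregation per CName2 value
    return [(specific_cname1, z, totals[z]) for z in sorted(totals)]

revenue = [10, 20, 30, 40]
region  = ["South", "South", "North", "South"]
market  = ["B", "A", "C", "A"]
print(simple_pivot_table(revenue, region, market, "South"))
# → [('South', 'A', 60), ('South', 'B', 10)]
```

Note how the two "A" rows in the South are combined into one total, which is exactly the "combine the corresponding rows for the unique CName2 value" requirement.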
The CName1 and the CName2 are descriptor values or categories. The function produces a table with 3 columns, where the first column contains the value for SpecificCName1. The second column is a sorted and unique filtered list of the CName2 values that have the corresponding SpecificCName1 (e.g., South). Make sure you combine the corresponding rows for each unique CName2 value. The third column contains the corresponding aggregated CNumber values for the SpecificCName1 and CName2 value. It is OK to add the headers of the three columns manually instead of producing them programmatically.
You will need the following functions to implement the LAMBDA for SimplePivotTable: LET, SORT, UNIQUE, FILTER, HSTACK, =, and SUMIFS.
Figure 2: The Pivot Table for Problem 2
Figure 3: The Pivot Chart for Problem 2
Hints:
• Send the following request to ChatGPT to get started: This question is about simulating a simple Excel pivot table with Excel dynamic array formulas. Given is a table MDS (main data source) of data with columns Revenue, Region (with entries "North", "South" and "West"), and Market. Create a pivot table to show the total revenue split by Region and then by Market using the MDS table. Use the tabular form. Filter the table to show only a specific region called "South". Don't show any subtotals or grand totals. Therefore the table shows the total revenue for each market for the "South" region. Give dynamic array formulas to produce the same output as the pivot table. Package the formulas into a LAMBDA(Revenue, Region, Market, SpecificRegion,...). Test your LAMBDA formula by using columns of the MDS table as actual arguments and "South" as the SpecificRegion. Use a SUMIFS formula with five arguments as part of the solution. Sort the markets in the South region alphabetically.
• When I used ChatGPT on 3/21/2023, it created the following formula for SimplePivotTable:
LAMBDA(Revenue, Region, Market, SpecificRegion,
  LET(
    FilteredMarket, FILTER(Market, Region=SpecificRegion),
    UniqueFilteredMarket, SORT(UNIQUE(FilteredMarket)),
    TotalRevenue, SUMIFS(Revenue, Region, SpecificRegion, Market, UniqueFilteredMarket),
    CHOOSE({1,2}, UniqueFilteredMarket, TotalRevenue)
  )
)
Test the LAMBDA carefully and change it to the terminology described in the assignment. Instead of the CHOOSE function, use HSTACK.
• Turn in the LAMBDA for the function SimplePivotTable you generated.
Figure 4: The Pivot Chart for Problem 3
Problem 3 Creating Calculated Fields (20 points)
1. Create a pivot table to show the total Revenue by Region and then by Market using the data in the worksheet Main Data Source. Show the results only for the North region.
2. Label the new worksheet Problem 3.
3. Create a Calculated Field Bonus that calculates a bonus for each Market. Bonus is 1% of Revenue above $40,000,000, and 0.5% for Revenue at or below $40,000,000.
4. Create a second Calculated Field Total that sums Revenue and Bonus.
5. Do not show Subtotals.
6. Format the Revenue, Bonus and Total as Currency.
7. Using the Pivot Table Styles, pick the one that matches what is shown in Figure 4.
8. Call function SimplePivotTable to simulate the Pivot table. Write one dynamic array formula to calculate the bonus and a second one to compute the total. Use function FORMULATEXT to show your call to SimplePivotTable and for the formulas that you use for the calculated fields.
Problem 4 Using Slicers (23 points)
1. Create a pivot table to show the total Revenue by Region and then by Line of Business using the data in the worksheet Main Data Source. Use function SimplePivotTable to produce the same data only for the North region and all lines of business in the North region. Use function FORMULATEXT to show your call to SimplePivotTable.
2.
Label the new worksheet Problem 4.
3. Add a Count of Revenue to show how many times each product was sold.
4. Insert two Slicers: one for Region and one for Line of Business.
5. Filter Line of Business so only values for Copiers and Printers are visible.
Your sheet should look like Figure 5.
Figure 5: The Pivot Table for Problem 4
Problem 5 BONUS: Improving Text Processing with REDUCE (10 points)
We go back to assignment T6, problem 1, on text processing. When solving T6, problem 1, we did a lot of manual work to repeatedly call TEXTBEFORE and TEXTAFTER to translate text and delimiters into fields for name, downloads (iOS or Android), company, and the current user base. We did careful manual bookkeeping to ensure TEXTBEFORE and TEXTAFTER produced the desired output. Now we have learned about automating tedious repetition using REDUCE, which we apply to solving T6, problem 1 automatically.
You write a function PARSE that takes two inputs, a column of text and a list of delimiters, and produces as output a table of fields. We give PARSE in incomplete form, with a lot of holes that you need to fill in, based on the knowledge you learned in the course.
An essential principle of Excel programming, and of any programming activity, is DRY (Don't Repeat Yourself). We violated this principle significantly by writing numerous pairs of calls TEXTBEFORE(text,delim,...), TEXTAFTER(text,delim,...). This is a tedious and error-prone task. Instead, we should have written ONE pair of calls to TEXTBEFORE and TEXTAFTER inside a REDUCE function call, and then the REDUCE function would repeat the pair of calls until the text is processed.
Write a function PARSE which takes as input a text and a range of k delimiters, and outputs a table of k+1 columns, each one representing one component of the text. For example, if the text is "a;b::c,d", k=3, and the delimiters are ";", "::", ",", then as output we get the 4-column table "a","b","c","d".
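The fold that PARSE performs can be sketched in Python with functools.reduce (a hypothetical illustration of our own, not the Excel formula you must turn in): the accumulator holds the completed fields plus the still-unparsed rest of the text, just as acc does in the REDUCE call.

```python
from functools import reduce

def parse(text, delims):
    """Split text by the delimiters in order, like REDUCE over TEXTBEFORE/TEXTAFTER."""
    def step(acc, delim):
        *done, rest = acc                            # acc = completed fields + rest
        first, _, remainder = rest.partition(delim)  # TEXTBEFORE / TEXTAFTER
        return done + [first.strip(), remainder]     # TRIM the completed field
    return reduce(step, delims, [text])

print(parse("a;b::c,d", [";", "::", ","]))
# → ['a', 'b', 'c', 'd']
```

One call to partition per delimiter, with reduce handling all the bookkeeping: that is the DRY version of the repeated TEXTBEFORE/TEXTAFTER pairs.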
Instead of writing
=TRIM(TEXTBEFORE(M9#,N8))
=TEXTAFTER(M9#,N8)
many times, we write ONCE
first, TRIM(TEXTBEFORE(new_acc,val)),
rest, TEXTAFTER(new_acc,val),
inside a REDUCE call. REDUCE will then automatically execute TEXTBEFORE and TEXTAFTER the correct number of times and handle all the bookkeeping.
Write a function PARSE which has two arguments. The first is the text to be split into its components, and the second defines the rules for how the text is to be split. Those rules are called a simple grammar in computer science. Our grammars are very simple: text separated by delimiters, and we specify them by giving the list of delimiters.
PARSE has the signature
PARSE(text, delims) -> Table with COLUMNS(delims)+1 columns
Each column corresponds to a component extracted from the text based on the delims. The formula has "holes" that you need to fill in. The holes are numbered UNKNOWN1, UNKNOWN2, ..., UNKNOWN10. Turn in a table with two columns and assign to each UNKNOWN a function name so that the formula works correctly according to the specification. Test your chosen UNKNOWNs by using them in your formula. Each UNKNOWN is worth 1 point. Your answer will look like this:
UNKNOWN1  IF
UNKNOWN2  SUMIFS
UNKNOWN3  AND
etc.
=LAMBDA(text,delims,
  UNKNOWN1(decorated_first_cells, REDUCE(text, delims,
    UNKNOWN2(acc, val,
      UNKNOWN3(
        COMMENT1, "initialize acc with input string and define delimiters as a range",
        COMMENT12, "acc is a list of text cells",
        COMMENT2, "acc is a row and we need the last column which is the rest of the string to parse",
        new_acc, TAKE(acc,1,-1),
        first, TRIM(UNKNOWN4(new_acc,val)),
        rest, UNKNOWN5(new_acc,val),
        first_rest_pair, UNKNOWN6(acc, val, first, rest),
        first_rest_pair))),
  COMMENT3, "now comes some cleanup work",
  number_of_delims, UNKNOWN7(delims),
  COMMENT, "every 3rd entry contains a component",
  seq, UNKNOWN8(1, number_of_delims*3),
  logical, UNKNOWN9(seq,3)=0,
  drop_last, DROP(decorated_first_cells,,-1),
  cleaned_first, UNKNOWN10(drop_last, logical),
  completed_first, HSTACK(cleaned_first, TAKE(decorated_first_cells,,-1)),
  inter, IFERROR(VSTACK(decorated_first_cells, number_of_delims, seq, logical, drop_last, cleaned_first, completed_first), ""),
  inter))("a;b::c", {";","::"})
The output of this formula is:
a;b::c  ;  a  b::c  ::  b  c
2
1  2  3  4  5  6
FALSE  FALSE  TRUE  FALSE  FALSE  TRUE
a;b::c  ;  a  b::c  ::  b
a  b
a  b  c
Topic 10: Pivot Tables
Problem 6 Grade Computation (5 points)
As you know, the grading rules of CS1100/CS1101 cannot be expressed in Canvas. Therefore, we provide you with an Excel spreadsheet to compute your grade. It should be a straightforward exercise for you to write the spreadsheet yourself. We focus only on one student and use two formulas: one for computing the exam grade and one for the non-exam grade. You have already calculated the exam grade in an earlier assignment. However, the formula given here is improved: the information about how to provide weights for the different components of the course is now localized into one array constant and not spread across a formula. The dynamic array function SUMPRODUCT is used to compute the grade.
Should the weights change, we only need to update the array constant, not a formula in several places. This is a good separation of concerns and follows the Principle of Least Knowledge.
Use the file gradeCS1100_one_student.xlsx next to the assignment instructions file to compute your grade for three scenarios: best, realistic, and minimal. In the best scenario, you give yourself 100 for each grade you have not received, e.g., for make-up 1 and 2. Apply the formulas to three rows for the three scenarios using any technique you like, e.g., dragging. Implement the lookup using XLOOKUP to map percentages (exam percentage + non-exam percentage) to letter grades.
Download your grades from Canvas and feed them to the grading calculator. For M_1 and M_2, give 0 as input if you plan not to take the makeup exams. The formulas spill, so you can feed multiple rows to investigate different scenarios or to also compute the grade for a classmate. The names used for the formal arguments should be self-explanatory: TH = Take-Home Exam, E_1 = Exam 1, M_1 = Makeup Exam 1, etc.
The above grade calculator also answers this question:
• Q1 - My take-home exam grade was not replaced by the exam 1 grade in Canvas.
• A1 - The change will not be reflected in Canvas. It is considered in the formula when calculating the final grade.
Here are more detailed instructions:
• Canvas
Step 1: Open CS1101 from the course tab.
Step 2: Click through to the Grades tab.
Step 3: Manually extract the scores (Quizzes, Project, Attendance, Assignments, Take Home Exam, Exams 1 & 2). Put your scores into one row of an Excel spreadsheet where you have your grade calculation functions.
• Grade Calculator
Step 1: After obtaining the scores, enter the score information into the provided Grade Calculator by carefully giving the correct actual arguments to the functions.
Step 2: Check the percentage and letter grade from the Grade Calculator.
Step 3: Based on your results from step 2, choose whether or not to take make-up exam 1 or 2 to improve your final results. You have the option of taking both make-up exams. Remember that you must be well prepared, because the make-up exam grade will override the corresponding exam grade (potentially lowering it; see the detailed formula below).
Here follow the two LAMBDAs for grade calculation for one student. They are also available as an Excel file with the T10 files.
=LAMBDA(TH,E_1,E_2,M_1,M_2, LET(
  COMMENT0, "Calculate exam grade for one student (weighted percentage)",
  COMMENT1, "Function for make-up exam rule",
  override, LAMBDA(exam, mu_exam, IF(mu_exam=0, exam, mu_exam)),
  exam_weights, {0.06,0.25,0.25},
  TH_real, MAX(TH, E_1),
  E_1_real, override(E_1, M_1),
  E_2_real, override(E_2, M_2),
  unweighted_exam_percentages, HSTACK(TH_real, E_1_real, E_2_real),
  result, SUMPRODUCT(unweighted_exam_percentages, exam_weights),
  result))(100,0,0,100,100)
=LAMBDA(Quizzes,Assignments,Project,Attendance, LET(
  COMMENT0, "Calculate non-exam grade for one student (weighted percentage)",
  COMMENT1, "Function to drop the lowest grade from a list of grades",
  average_drop_lowest, LAMBDA(r, LET(min_a, MIN(r), result, (SUM(r)-min_a)/(COLUMNS(r)-1), result)),
  weights, {0.1,0.2,0.1,0.04},
  AvgQuizzes, AVERAGE(Quizzes),
  AvgAssignments, average_drop_lowest(Assignments),
  AvgAttendance, AVERAGE(Attendance)*100, COMMENT2, "Attendance is 1 or 0",
  unweighted_non_exam_percentages, HSTACK(AvgQuizzes, AvgAssignments, Project, AvgAttendance),
  result, SUMPRODUCT(unweighted_non_exam_percentages, weights),
  debugging, TEXTJOIN(" ", TRUE, AvgQuizzes, "=AvgQuizzes", AvgAssignments, "=AvgAssignments", Project, "=Project", AvgAttendance, "=AvgAttendance", result, "=result"),
  result))(E15:F15,G15:I15,J15,K15:N15)
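As a cross-check, the exam-grade rules encoded in the first LAMBDA above can be restated in Python (our own sketch; the argument names follow the LAMBDA, but the function itself is not part of the assignment):

```python
def exam_grade(th, e1, e2, m1, m2):
    """Weighted exam percentage: the take-home grade is replaced by Exam 1 if
    that is higher, and a non-zero make-up grade overrides the matching exam."""
    override = lambda exam, mu: exam if mu == 0 else mu  # make-up exam rule
    weights = [0.06, 0.25, 0.25]                         # TH, Exam 1, Exam 2
    parts = [max(th, e1), override(e1, m1), override(e2, m2)]
    return sum(w * p for w, p in zip(weights, parts))    # SUMPRODUCT

print(exam_grade(100, 0, 0, 100, 100))
```

With the arguments (100,0,0,100,100) used in the LAMBDA, both make-up grades override the zero exam grades, and the take-home grade of 100 contributes at the 6% weight.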


[SOLVED] BRMII APPLICATIONS AND ANALYSIS

BRMII: APPLICATIONS AND ANALYSIS
Individual Reflection Coursework
Introduction & Aims: This individual coursework builds on your Group Research Portfolio and continues your 'hands on' business research experience, where you learn about, and work through, the research process. In this individual task, you are asked to 1) analyse the data you collected as a group and discuss your findings, and 2) propose how future quantitative research on the topic could be undertaken.
Learning Outcomes: This assignment assesses all Course Learning Outcomes. Specifically, the learning outcomes of this individual coursework are:
• Analysing qualitative data
• Reporting on, and presenting, the research process followed
• Reporting on and discussing the findings
• Discussing the contributions and limitations of the study
• Designing a quantitative study and creating a plan for implementation of a questionnaire instrument
Key Information:
• Due date: 25 April, 2pm
• The individual report accounts for 60% of your final grade for the course.
• The report has a strict word limit of 2,100 words (there is NO +10% allowance; the new word count policy will be strictly followed and penalties for going over the word count will be applied). Students MUST indicate the accurate word count of the report on the cover sheet and note their word count against each section.
• Please note that the Research Questions, Cover Sheet, Section 8 (Reference List), and Appendices 1 (Transcript) and 2 (Questionnaire) are excluded from the word count. No additional appendices are permitted. All tables/figures used count towards the word count.
Assignment Structure: Your final assignment must be submitted as an individual report made up of 8 essential sections and 2 essential appendices (see below).
Section 1: Research Question – The Qualitative Research Question that guided your focus group(s) should be stated simply and clearly, without additional introductions or commentary.
Section 2: Analytical Process – Discuss how you analysed the data: the coding process and steps you undertook, referring to relevant literature on qualitative analysis. Briefly outline the data obtained for analysis. (approx. 150 words)
Section 3: Qualitative Analysis & Discussion – Analyse and interpret your findings, organised by themes. Focusing on 2-3 key themes is recommended to allow you to discuss these in depth. Illustrate your points with quotes from your data, appropriately attributing each quote to the participant (using pseudonyms or participant numbers). Discuss your findings, offering a critical analysis and connecting to existing academic literature. You should include a mind map / table / coding tree (included in the word count) showcasing how you organised your codes / categories / themes in your analysis. (approx. 750 words)
Section 4: Contributions / Practical Implications: Clearly answer the qualitative research question given the results of the analysis. Reflect on the findings given extant literature and outline how the study adds new knowledge to scholarship and informs practice. (approx. 300 words)
Section 5: Limitations and Future Research: Discuss limitations related to the sampling, data collection, analysis, interpretation of findings, and so forth. Consider different ways to advance the research and provide some avenues for future research. (approx. 150 words)
Section 6: Proposed Follow-Up Quantitative Study should include the following 5 criteria. Ensure that you justify the choices you make.
- Quantitative research question: Propose ONE quantitative research question based on the analysis of the qualitative data.
- Rationale for quantitative study: Discuss a rationale and aims for a quantitative study. (approx. 150 words)
- Proposed research design: Outline how you plan to conduct the quantitative research (i.e., experimental research or survey research), including the data collection methods.
(approx. 150 words)
- Sampling & target population: Reflect on the population versus the proposed sample. Evaluate the potential generalisability of the findings, considering the sampling method proposed and the sample that will be obtained. (approx. 150 words)
- Proposed statistical analyses: Indicate the appropriate statistical test(s) that could be used to analyse the data if the data were collected using the questionnaire proposed in Appendix 2. Please provide your rationale for the chosen statistical test(s). (approx. 200 words)
Section 7: Short Reflection should provide a critical reflection on the research process and individual learning. Discuss personal growth, challenges, and takeaways from the coursework. (approx. 100 words)
Section 8: Reference List provides references in Harvard style.
Appendix 1: Transcript(s) from focus group(s) should be anonymised and included in this appendix.
Appendix 2: Questionnaire should include items that will be used to answer the quantitative research question proposed in Section 6. It should be professionally presented, adhering to the best practices outlined in the course. This includes 1) proper organisation, 2) using relevant and appropriate measurement for variables, 3) clearly citing the source of each measure as an in-text citation, and 4) ensuring the research question can be effectively answered. (strictly 2 pages)
Submission information
The submission will be made via Turnitin. Please use your exam number as the file name.
Referencing
Your assignment must be based on your group's own work and conducted in accordance with University regulations on plagiarism; please remember that these apply to online as well as traditional sources of material. Remember too that you may not submit the same work for different assessments. For example, the assignment submitted for this course must not resemble work submitted for dissertations or projects in other courses.
Inadequate acknowledgement of sources in-text as well as in your reference list will be penalised; not only might this constitute plagiarism, but it also indicates an inability to draw on evidence to support your arguments, which is a basic requirement of academic work. You should provide full references for all sources consulted, including the references for the core articles and all other sources (e.g. wider reading for the literature review or articles supporting your methodological arguments), using the recognised Harvard referencing system in your work. The Harvard system lists all sources in alphabetical order, with the information for each item organised as detailed in the following guide: https://www.citethisforme.com/harvard-referencing. Please do not use endnotes or footnotes for referencing, and do not bullet point your reference list.
There is currently a lot of interest in generative AI systems. Please remember that your assignment must be your original work. Be aware that if you use AI tools (such as ChatGPT or others) to generate an assignment (or part of an assignment) and submit this as if it were your own work, this will be regarded as academic misconduct and treated as such. Generative AI tools are language machines and do not produce credible content. More details can be found in the University policy available via: Guidance for working with Generative AI ("GenAI") in your studies.
Note that in-text citations count towards your word count. Only the reference list is excluded.
Assignment Feedback Form
Business Research Methods II: Individual Coursework Feedback Form
Specific Feedback:
Rubric Feedback: SA (strongly agree), A (agree), N (neutral), D (disagree), SD (strongly disagree)
1. Clearly explains the analytical process followed -
2. Reports findings in a clear and appropriate manner, reflecting qualitative narrative (rich and detailed):
a.
Analysis reflects a variety of appropriate coding procedures, categorising, and developing themes -
b. Data have been analysed - interpreted, made sense of - rather than just summarised, described or paraphrased -
c. Analysis and data match each other - the extracts evidence the analytic claims -
d. Participants, and their voices, are adequately represented -
e. Data is analysed and interpreted with criticality -
3. Articulates the findings through discussion:
a. Findings are discussed in relation to academic literature -
b. Analysis and discussion address research question(s) -
4. Discusses how the findings could be used by practice and scholars (contributions) -
5. Offers a consideration of research limitations and suggestions for future research:
a. Thoughtfully and critically evaluates research limitations -
b. Outlines future research potential -
6. Proposed Quantitative Study:
a. Has clearly stated aims/purpose -
b. Is properly motivated -
7. Quantitative study has a strong plan to enact the study:
a. The chosen method is clearly justified -
b. Clearly articulates how to execute the chosen method -
c. Relevant and justified sampling strategy -
d. Proposes and justifies the statistical tests -
8. Questionnaire is focused & employs best practices:
a. addresses the research question -
b. uses relevant and appropriate measurement for variables -
c. is properly organised -
9. Acknowledges sources correctly in text -
10. Acknowledges sources correctly in reference list -
11. Professional appearance (consistent formatting, use of headings, neat, clean tables, avoids screen grabs, etc.) -
Note: These rating scales are intended to provide you with some structured feedback on particular aspects of your project. They have not been used in a mechanistic way to calculate your mark. Please read alongside the comments provided on your project to obtain more detailed feedback.
Individual Assignment - Indicative Marking Guide
This guide provides a mark breakdown with qualitative descriptions of what is expected at different bands of the Common Marking Scheme.
Below 30
Fail. Little to no evidence of learning or understanding of the task. Material is irrelevant, brief, and of little value. In addition, there may be some sections missing.
30s
Marginal Fail. Individual reflection coursework in the 30s will exhibit significant deficiencies in some way, falling short of a pass. Such assignments may lack some of the required sections outlined in the brief or demonstrate serious misinterpretations of the task. They may also include substantial irrelevant material, indicating limited awareness and understanding of the analytical processes associated with qualitative data, as well as inappropriate approaches to analysing, interpreting, and presenting findings. The discussion of the implications and/or limitations of the research, as well as the avenues for future research, is very limited. Reports that show a fundamental misunderstanding of the purpose of a quantitative study, leading to an unfocused or irrelevant follow-up proposal and questionnaire, will also fall into this category.
40s
Satisfactory work. Individual reflection coursework in the 40s will demonstrate a weak or poor understanding and justification across the required sections. While there may be some evidence of learning, these reports are likely to include misunderstandings, irrelevant material, or significant omissions. The discussion of the analytical process adopted, the interpretation of the data, and/or the discussion of the findings may lack coherence or consistency in quality. The discussion of the implications and/or limitations of the research, as well as the avenues for future research, is not clear.
Additionally, the follow-up quantitative study may not build well on the findings of the qualitative research, reflecting a limited understanding of the research process. The research questions and questionnaire may also be poorly suited to a quantitative study. Furthermore, there may be limited evidence of reading and research, which contributes to its overall weakness.
50s
Good work. Individual reflection coursework in the 50s will demonstrate an adequate understanding of what constitutes good qualitative research. The discussion of the analytical process may include some articulation and detail, though it might not be fully developed. The qualitative analysis demonstrates some insight but may remain descriptive or superficial rather than offering deep analysis. The discussion of the qualitative findings references relevant literature, though it is likely to be brief. The work will give some consideration to its relevance to existing academic literature, along with how the findings can inform practitioners. The proposed quantitative study will generally be appropriate, showing a reasonable understanding of the research process. However, it may lack robust justification. The research questions and proposed questionnaire will mostly align with the requirements of a quantitative study, though there may be issues with fully adhering to best practices. Overall, the report may feel somewhat disorganised or slightly incoherent, detracting from its overall quality.
60s
Very good work. Individual reflection coursework in the 60s will demonstrate a competent discussion of the analytical process adopted, with an in-depth articulation and systematic discussion of the findings from qualitative research. The qualitative findings will be detailed and well-substantiated with relevant insights from academic literature, showing clear evidence of close engagement with the data collected and wider reading and research.
The academic and practical implications of the findings will reflect very good understanding and engagement with academic literature, alongside deep and thoughtful consideration of how the findings can inform practitioners. Furthermore, the discussion of limitations and future research will show a strong grasp of the research process and an understanding of existing literature in the field. A solid proposal for a quantitative study will be presented, underpinned by strong justification and accompanied by detailed discussions of the research design, sampling plan, and proposed statistical analyses. The questionnaire will follow best practices and is clearly structured and presented. However, it may be the case that some aspects of the discussions or justifications could have been more detailed, precise, or astute.
70s
Excellent work. Individual reflection coursework in the 70s will exhibit a highly competent, comprehensive, and insightful discussion of the analytical process adopted. The qualitative findings will be rich and demonstrate critical analysis through thorough and nuanced interpretation of qualitative data, followed by a clear and critical discussion of the findings in relation to the literature. The academic and practical implications of the findings will reflect deep and critical engagement with academic literature, coupled with original and well-articulated insights on how the findings can meaningfully inform practitioners. An excellent understanding of the research process and strong engagement with academic literature will also be evident in the discussion of limitations and avenues for future research. The proposal for a quantitative study will be highly appropriate and strongly justified, incorporating a detailed and well-reasoned discussion of the research design, sampling plan, and proposed statistical analyses, with no errors in their identification.
The questions included in the questionnaire are appropriate, follow best practices, and are professionally presented. Clear evidence of independent reading and research will underpin the work. It may be the case that some of the discussions or justifications could have been slightly stronger or more compelling.
80s and above
Highly excellent work. Individual reflection coursework in the 80s will demonstrate exceptional depth, sophistication, and incisive critical analysis of qualitative data, accompanied by a highly sophisticated and well-justified proposal for the chosen quantitative study and research plan. The work will show abundant evidence of independent reading and research, with compelling and nuanced discussions of the qualitative data analytical process, the qualitative findings, and the follow-up quantitative study. These discussions will be exceptionally rich in theoretical and practical insights, as well as critical appraisal. An authoritative and excellent understanding of the research process and relevant academic literature, applied to discuss the qualitative research process and findings and to justify the proposed study, will be evident throughout. At the upper end of this range, the work will also demonstrate creativity and originality.


[SOLVED] GASB33H3 Guidelines for Research Project 2025 Winter

GASB33H3 Guidelines for Research Project, 2025 Winter
In this course, students are required to conduct an independent research project. While they have the freedom to choose a topic, it must be related to a specific aspect of Global Buddhism or a topic covered in this course (please refer to the weekly topics). This research project is divided into three parts:
Part 1: Select a topic and draft an abstract
Part 2: Write a proposal
Part 3: Complete the final research paper
Timeline of Completion
Students will be asked to decide on a research topic and write a 250-300-word paragraph during the class, providing a general overview of their intended research proposal and research paper. You are allowed to change or modify your topic later. This submission accounts for 10% of the final grade.
Students will submit a formal research proposal, including the selected topic, an abstract (based on the previously written paragraph), an outline, and an annotated bibliography. The topic of the proposal MUST closely align with the final research paper. This submission accounts for 15% of the final grade.
Students are required to submit a final research paper that describes and analyses their selected topic, approximately six to eight pages long (font size 12, double-spaced, or 2000-2500 words), following academic writing standards. The final paper constitutes 20% of the final grade.
Part 1: Topic and Abstract (10 points in total)
1. TOPIC (2 points): Clearly and succinctly state your research topic in one or two sentences at most, ensuring it relates to the course content by addressing at least one aspect or theme. This does not need to be the final topic/title for your proposal and research paper if your idea is still developing. You are allowed to adjust your research scope or other elements before submitting your proposal and paper, though frequent changes are generally discouraged.
Aspects your topic may address include, for example: Buddhist School (e.g., Mahayana, Esoteric School, Tiantai/Tendai School, etc.); Geographic Area (e.g., China, Japan, etc.); People (e.g., monastics such as monks or nuns, laymen, non-religious people); Dynasty/Period (e.g., a particular dynasty, Heian/Kamakura period, Mauryan dynasty); Material Culture (e.g., Buddhist medicine, food, accessories, etc.); Research Perspective (e.g., political, social, cultural or economic studies, religions, transnational and global studies).

2. ABSTRACT (8 points): Your abstract must be between 250 and 300 words (falling short of or exceeding this range by up to 20 words will not result in any deductions) and include the following parts:

1) Ask Research Question(s): Pose your central research question. Your first question should be a "why," "how," or "what" statement seeking an explanation of a phenomenon. Define your central question(s) in one or two sentences focusing on a debate, puzzle or dilemma. For example, why did Buddhist women in the Tang dynasty choose to be buried separately from their husbands?

2) State the Research Objective: What outcomes do you aim to achieve? What argument do you intend to make? What debate or confusion are you seeking to resolve by starting this research? (You do not need to cover all these aspects; stating only one objective is sufficient.)

3) List Research Sources and Approach(es): You must refer to course readings to write your final paper, which will have specific instructions outlined in Part 2. However, at this stage, you only need to indicate the types of sources you intend to consult for your research project. These may include primary sources such as Buddhist texts, historical records, archaeological reports, or personal field investigations (i.e., epigraphic sources as shown in my sample abstract), as well as secondary scholarship, such as monographs and peer-reviewed articles relevant to your topic.
Additionally, you should specify a clear approach or methodology you want to incorporate to write this paper, detailing how you would collect and analyze the data/materials to demonstrate your critical thinking, creativity or innovative perspective.

4) Provide a Research Plan: It is recommended that you include a brief research plan outlining how you intend to carry out your research project during the remaining time of this course, which can also be specified and developed as your proposal outline in Part 2.

5) Writing: Since this piece must be completed quickly during class, the main emphasis will be on the quality, content, and coherence of your writing. Minor grammatical errors may be excused.

[Sample Topic and Abstract (Do not distribute outside the course!)]

Topic: This study examines how Chinese Buddhist laywomen navigate their lives amid suffering, including bereavement, death, and rebirth, as reflected in the epigraphic materials from the Luoyang area during the Tang Dynasty (618–907 CE) from cultural and social perspectives. [Explanation: This topic is related to the themes of gender and death in this course; Chinese Buddhist tradition; Laywomen; Central China (Luoyang); Tang Dynasty]

Abstract: [Asking Research Questions] How did women in the Tang dynasty (618–907 CE) cope with the loss of loved ones and manage their grief by seeking refuge in Buddhism? Why did some Buddhist laywomen, when facing their death, choose Buddhist funerary rites that deviated from traditional customs? Furthermore, what religious and social aims were they striving to achieve through these practices?

[Research Objective, Sources and Approaches] This study aims to explore the religious beliefs and practices of Buddhist women and to reveal their strategies for navigating drastic life changes, particularly those related to death, whether that involves coping with the loss of loved ones or confronting their own end.
By drawing on personal experiences and reflections found in epitaphs and dedicatory inscriptions related to these women, we can trace their life trajectories and uncover details about their religious commitments, including the specific Buddhist teachings they followed or the distinct Buddhist communities they engaged with. Additionally, examining archaeological evidence from their burials provides a material dimension to the "actual" thoughts of these women, offering a comprehensive understanding of how their aspirations were realized and their roles in promoting local Buddhism.

[Research Plan/Progress] Focusing on four Buddhist women from the capital, Luoyang, this paper explores their religious practices and social networks in three stages: first, identifying the significant life changes that motivated them to embrace Buddhism; second, investigating the specific Buddhist practices and teachings they upheld to cope with these changes, both for their mental support and social standing; and finally, reconstructing the original contexts they established to achieve their ultimate religious aspiration, striving for transcendence of worldly concerns and the attainment of an ideal afterlife or rebirth.

Part 2: Research Proposal (15 points)

Your proposal must include the following sections:
1. Topic and Abstract
2. Outline
3. Annotated Bibliography

1) Title and Abstract (2 points):
• Create a title based on the topic you developed in Part 1.
• Revise, refine, or enhance your abstract based on your draft in Part 1.
• Your abstract should be written clearly and free of obvious spelling and grammar errors.

2) Outline (3 points): Consider how to organize responses to your research questions and outline the steps required for your research project in a clear and logical manner. Please provide a comprehensive and detailed structure (bullet points may be used) that conveys an overview of your research project. It should have headings and at least some sub-headings.
3) Annotated Bibliography (10 points): Please cite at least TEN sources (no more than 15) in your bibliography. At least FIVE of your sources MUST come from the course readings (no more than three sources from the same reading). Choose a citation format (APA, MLA, or Chicago) and maintain consistency. Please refer to the following links: https://www.utm.utoronto.ca/rgasc/student-resource-hub/writing-resources or https://owl.purdue.edu/owl/avoiding_plagiarism/guide_overview%20.html. Provide a TWO to THREE-sentence summary explaining why you have chosen each source and how it relates to your research project. Pay attention to the relevance of the selected sources.

Part 3: Final Research Paper (20 points)

Please elaborate on your idea, perform a thorough analysis, and complete the final paper based on your research proposal. Your final paper will be evaluated on these FOUR criteria:
1. Organization and Content
2. Descriptions and Examples
3. Critical Analysis and Creativity
4. Citations, References, and Grammar

The rubrics for grading the final paper will be made available prior to submission.


[SOLVED] COMP1860 Building Our Digital World Computer Systems and Architecture

Building Our Digital World: Computer Systems and Architecture
COMP1860 Activity Sheet 2.5

This worksheet contains a combination of formative activities (which contribute towards your learning) and summative activities (which you will complete and submit to be assessed as part of your portfolio). Every exercise marked with a red border is a summative exercise and must be submitted as part of your portfolio. You should use PebblePad to submit portfolio activities. In addition, you may be required to submit other activities; the module teaching staff will provide instructions. Activities marked by (*) are advanced and may take some time to complete.

Expectations:
1. Timeliness: You should complete all of the activities in the order provided and submit your portfolio evidence on PebblePad before the completion date (Friday, 07/03/2025, at 17:00).
2. Presentation: You should present all of your work clearly and concisely, following any additional guidance provided by the module staff in the module handbook.
3. Integrity: You are responsible for ensuring that the evidence you submit as part of your portfolio is entirely your own work. You can find out more about academic integrity on the Skill@library website. All work you submit for assessment is subject to the academic integrity policy.

Feedback: Feedback on formative activities will be provided via Lab classes and tutorials. Feedback on evidence submitted as part of the portfolio will be available on PebblePad.

Support opportunities: Support with the activity sheet is available in the Lab classes and tutorials. Individual support is available via the online booking system.

Expected time for completion: 2-3 hours.
Expected completion date: Friday, 07/03/2025, at 17:00.

Coursework summary

In this activity sheet, you will be implementing a small library to compute the parity of a bit string. Using a single parity bit generates a code that can detect a single error but cannot correct it.
Useful references for this activity sheet are the same as for Activity Sheets 2.3 and 2.4.

Learning outcomes

On completion of this activity sheet, you will have:
1. developed and tested a simple error-detecting code;
2. implemented a library of functions to implement encoding and decoding of a parity-based code; and
3. utilised a simulator of the Hack Virtual Machine to test and debug the library.

Instructions

Please submit your Sys.vm file to the Activity Sheet 2.5 assessment on Gradescope. To complete this activity sheet, your solution to the portfolio question will need to pass at least 75% of the tests. When this happens, Gradescope will return an 8-character string for you to add as evidence in the PebblePad workbook for this activity sheet.

Outline. This activity sheet will help you develop functions for the Hack Virtual Machine to encode a 15-bit string into 16 bits with a single parity bit. The Hack computer uses 16-bit words, and for this activity we will assume that the 15 right-most bits contain the 15 bits of information, whereas the left-most bit contains the parity of the string. We will consider that the index of the bits in a string increases from right to left, starting from 0. For example, the 16-bit string a has the bits:

a15 a14 a13 a12 a11 a10 a9 a8 a7 a6 a5 a4 a3 a2 a1 a0

For our purposes, a15 is the parity bit, which can be computed from the 15 information bits a14, a13, . . . , a1, a0. The parity bit can be defined in many ways. One simple definition is

a15 = (a14 + a13 + · · · + a1 + a0) mod 2.

In other words, the parity bit is 1 if an odd number of information bits are set to one, and it is 0 if the number of information bits set to 1 is even. Using logic operations, the parity bit can be defined as

a15 = a14 ⊕ a13 ⊕ a12 ⊕ a11 ⊕ a10 ⊕ a9 ⊕ a8 ⊕ a7 ⊕ a6 ⊕ a5 ⊕ a4 ⊕ a3 ⊕ a2 ⊕ a1 ⊕ a0,     (1)

where ⊕ denotes the exclusive or (xor) operation. The steps below will guide you in the development of this project using (1).
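As a sanity check while developing your VM code, definition (1) can be sketched in ordinary Python (this sketch is illustrative only; the assignment itself must be written in Hack VM code, and the function names `parity_bit` and `encode` are our own, not part of the required `Sys.vm` interface):

```python
from functools import reduce

def parity_bit(word: int) -> int:
    """XOR-fold the 15 information bits (indices 0..14) of a word, per (1)."""
    info_bits = [(word >> i) & 1 for i in range(15)]
    return reduce(lambda x, y: x ^ y, info_bits)

def encode(word: int) -> int:
    """Place the parity of bits 0..14 into bit 15, leaving bits 0..14 unchanged."""
    return (parity_bit(word) << 15) | (word & 0x7FFF)

# 0b101 has two 1-bits (even), so its parity bit is 0;
# 0b111 has three 1-bits (odd), so its parity bit is 1.
```

A 16-bit word produced by `encode` always has an even total number of 1-bits, which is what lets the decoder detect any single flipped bit.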
The description is very detailed, but the implementations themselves are not long. The longest and most difficult functions to implement are Sys.shiftLeft (no more than 20 lines) and Sys.computeParity (around 30 lines). You can use the supplied file Sys.vm as a skeleton for your final submission; you can do things differently, but you should ensure that your Sys.vm runs as expected with the supplied Sys.tst. Otherwise, your submission might not work as expected on Gradescope.

Bit operations. In this activity sheet, you will deal with bit strings, and you will therefore rely heavily on bit-level operations. The tool for this kind of low-level manipulation is the bitmask, which can be used to set, clear, and read individual bits of a bit string.

Setting a bit means that the result of the operation will have that bit equal to 1, and all other bits will remain unchanged. If that bit is already 1, the operation does not modify it. To set the ith bit, you should create a bitmask with all bits but the ith set to 0, and you should take the bitwise or between this bitmask and the value whose bit you want to set. This bitmask is equivalent to the decimal number 2^i. For example, to set the third bit from the right, the bitmask to use is

1111111111111000 wait, no: 0000000000001000

which is equivalent to the decimal number 2^3 = 8. In the Hack virtual machine, we can set the third bit of the string representing 4023 with the following three instructions:

push constant 8     // 0000000000001000
push constant 4023  // 0000111110110111
or                  // 0000111110111111 is at the top of the stack.

The same bitmask can also be used to read the value of the ith bit of a string, taking the bitwise and between the bitmask and the value whose ith bit you want to read.
For example, we can read the third bit of the string representing 1323:

push constant 8     // 0000000000001000
push constant 1323  // 0000010100101011
and                 // 0000000000001000 is at the top of the stack.

To check whether the bit is set, you can compare the result with 0 using either eq or neq.

Clearing a bit means that the result of the operation will have that bit equal to 0, and all other bits will remain unchanged. To clear the ith bit, you should create a bitmask with all bits but the ith set to 1, and you should take the and between this bitmask and the value whose bit you want to clear. In 2's complement, this bitmask is equivalent to the decimal number -(2^i + 1). For example, to clear the third bit from the right, the bitmask to use is

1111111111110111

which in 2's complement is equivalent to the decimal number -(2^3 + 1) = -9. In the Hack virtual machine, we can clear the third bit of the bit string representing 348 as follows:

push constant 0
push constant 9
sub                 // 1111111111110111 is at the top of the stack.
push constant 348   // 0000000101011100
and                 // 0000000101010100 is at the top of the stack.

1. Implement the function Sys.xor, which computes the bitwise exclusive or of the two values at the top of the stack. You can test your implementation in the emulator by changing the function definition of Sys.init in Sys.vm to:

function Sys.init 0
push constant 12   // 00000000 00001100
push constant 6    // 00000000 00000110
call Sys.xor 2     // 00000000 00001010 is at the top of the stack.

This code pushes 12 and 6 onto the stack, and the call to Sys.xor should leave 10 at the top of the stack.

2. Implement the function Sys.shiftLeft, which shifts the first argument left by as many positions as specified by the second argument. shiftLeft(x, y) is equivalent to the C expression x << y.
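For reference, the three bitmask idioms above (set, read, clear) can be mirrored in plain Python; this is a sketch for checking your understanding, not part of the submission, and the helper names are our own:

```python
MASK16 = 0xFFFF  # constrain results to a 16-bit word, like the Hack machine

def set_bit(value: int, i: int) -> int:
    """Bitwise OR with the mask 2^i forces bit i to 1."""
    return (value | (1 << i)) & MASK16

def read_bit(value: int, i: int) -> int:
    """Bitwise AND with the mask 2^i isolates bit i; returns 0 or 1."""
    return (value >> i) & 1

def clear_bit(value: int, i: int) -> int:
    """Bitwise AND with the complement of 2^i forces bit i to 0."""
    return (value & ~(1 << i)) & MASK16

# The three worked examples from the text:
# set_bit(4023, 3)  -> 4031  (0000111110110111 -> 0000111110111111)
# read_bit(1323, 3) -> 1     (bit 3 of 0000010100101011 is set)
# clear_bit(348, 3) -> 340   (0000000101011100 -> 0000000101010100)
```

Note that Python integers are unbounded, which is why the sketch masks results with `MASK16`; on the 16-bit Hack machine the truncation happens automatically.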


[SOLVED] Time series analysis for forecasting

Problem statement: Power utilities are responsible for purchasing and selling energy in the market to meet the demands of their customers. Accurate load forecasting is crucial, as it enables utility companies to provide electricity at the lowest possible prices for their customers. Overestimating energy needs can result in financial losses and potential penalties from state regulators due to the unused energy. Conversely, underestimating energy needs can force companies to purchase additional energy in "real time" at higher prices. Most of the forecasted load is purchased 16 hours before the start of the "flow date" in the "day-ahead" energy market. The flow date refers to the day on which the energy is consumed by customers. Any remaining load is purchased in the "real-time" energy market to accommodate fluctuations in demand. For instance, on Monday morning, the day-ahead market opens at 6 AM and closes at 8 AM, during which a utility company buys the energy required to meet customer demand on Tuesday (starting at midnight). This means the company purchases energy approximately 16 hours before the flow date begins. A 16-hour load forecast is generated every morning, and typically, utility companies aim for a Mean Absolute Percentage Error (MAPE) of under 2% for these forecasts. Accurate load forecasting helps utility companies optimize their energy purchases and minimize costs, ultimately benefiting their customers and ensuring efficient operation within the energy market.

Task: While the accuracy of the 16-hour forecast is of paramount importance, power utilities also generate forecasts for various time horizons, such as 7-day, 90-day, and 1-year ahead forecasts. Long-term one-year electricity load forecasts are essential for several reasons, including efficient resource allocation, infrastructure planning, and integration of renewable energy sources.
The objectives of this assignment are to create long-term, one-year forecasts for the following goals:

Goal 0: Fitted values for the models used to achieve Goals 1-3 described below.
Goal 1: Forecast hourly load for 2006. Your team's ranking in the leaderboard for hourly load forecasting will be determined based on the Mean Absolute Percentage Error (MAPE) of your forecasts for the year 2006. A lower MAPE corresponds to a higher ranking.
Goal 2: Forecast daily peak loads for each day of the year 2006. Your team's ranking in the leaderboard for daily peak load forecasting will be determined based on the MAPE of your forecasts for the year 2006. A lower MAPE corresponds to a higher ranking.
Goal 3: Forecast the timing (hour) of the daily peak loads for the year 2006. Your team's ranking in the leaderboard for daily peak load timing forecasting will be determined based on the Mean Absolute Error (MAE) of your forecasts for the year 2006. A lower MAE corresponds to a higher ranking.

By participating in these goals, you will contribute to the development of more accurate long-term energy forecasts, which can ultimately benefit utility companies, customers, and the overall stability of power grids. Accurate one-year forecasts enable power utilities to make informed decisions on resource allocation, plan for necessary infrastructure upgrades, and better integrate renewable energy sources to meet the growing demand for clean energy.

Data: A US-based utility company has provided the dataset for this project. Load forecasting is heavily influenced by weather conditions, as they directly impact electricity consumption patterns (e.g., increased or decreased use of air conditioning or heating systems). The file CompetitionData.xlsx (https://github.com/robertasgabrys/Forecasting) contains not only hourly electricity load data but also hourly average, median, maximum, and minimum temperature (in Fahrenheit) for an undisclosed city in the US.
The dataset includes electricity load and temperature data from 2002 to 2005 (a total of four years), while only temperature data is provided for 2006. Here is a snapshot of the first day of the 2002 data:

· During the first hour on January 1, 2002, the average, median, maximum, and minimum temperatures were recorded as 43, 43, 60, and 31 degrees Fahrenheit, respectively. In this same hour, a total of 1,384,494 MWh (megawatt-hours) of electricity load was consumed.

Deliverables: Please submit the following items for the competition:

· Competition Template File: Submit the completed SubmissionTemplate.xlsx file, which contains three sheets with your predictions:
o Goal 0: provide the hourly load fitted values for 2002-2005.
o Goal 1: In the sheet named "Goal 1", provide the hourly load forecast for 2006.
o Goal 2: In the sheet named "Goal 2", provide the daily peak load forecast (maximum hourly load for each day).
o Goal 3: In the sheet named "Goal 3", provide the forecast for the timing of the daily peak load (hour in which the peak load occurs).

· Report: The report should cover the models you built and their performance. In your report, ensure that you clearly explain the models you have developed, their underlying methodologies, and the overall performance of these models. Highlight any challenges you faced during the process and the solutions you employed to overcome them. Additionally, discuss any insights or patterns you identified in the data, and how they influenced your model selection and forecasting strategy.
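The two leaderboard metrics are standard and can be computed in a few lines; the sketch below is a generic illustration (the brief does not specify the exact leaderboard implementation, and the function names `mape` and `mae` are our own):

```python
def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent (used for Goals 1 and 2)."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

def mae(actual, forecast):
    """Mean Absolute Error (used for Goal 3: hour of the daily peak)."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

# Example: forecasts of 98 and 202 against actual loads of 100 and 200
# give a MAPE of (2% + 1%) / 2 = 1.5%, within the typical 2% target.
```

Note that MAPE divides by the actual load, so it is well defined here (hourly loads are large and never zero), whereas MAE is the natural choice for Goal 3 because the peak hour is an integer between 0 and 23 and percentage errors would be misleading.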
