Assignment Chef

Assignment catalog

33,401 assignments available

[SOLVED] Project 2 The GAME

Project 2 – The GAME!!!!!

Your mission is to forget everything you know about school and your education and to make the wildest project of your life. Take all of the knowledge you have ever learned FROM games, and explore the open world of WHY games matter, using the 11 weeks of maps from your training guide: Remaking Reality!!!!!!!!

The Enemies and Bosses you MUST defeat!
1. Boring ideas and school-like projects.
2. DOT POINT DEMONS – eliminate them from the entire Project Universe!
3. Thick, dense block text – employ the weapons of design – typography, creative layout, and crazy creative ideas – to break up the text, and don't let it force us to read its small ugly black print. Destroy the Left-to-Right Conventional Demons by going next level, using colour, open space, and new ways of letting a reader experience the text, not just having it explained.
4. Videos with long, FoRmAl voiceover explanations.
5. The evillest Boss of all – the ESSAY BEAST. Avoid at all costs, and destroy ESSAYSAURUS, who locks your brain into conformity if you fail!

The Objective – Find new endings through your choices and surprise the Game by modding your own version that goes viral, makes you a young billionaire, and gets you the highest marks in the history of Your Training Academy! Go forth and CREATE!

$25.00

[SOLVED] MSCI 231 Seminar Questions

MSCI 231 Seminar Questions

Please prepare discussion points in response to the following two tasks:
1. Find a modern slavery statement for a company of your choice – perhaps a brand that you buy from regularly. Come prepared to discuss this statement and to compare it with the Marks & Spencer 2024 statement provided on the Moodle site.
2. Reflect on your purchases (food, clothing, etc.) and think about the packaging choices you are making, and those of the businesses you buy from. What could you change? What could the businesses change?

$25.00

[SOLVED] ACCT6003 Fundamental Analysis for Equity Investment

ACCT6003 Fundamental Analysis for Equity Investment
Final Take-Home Project

The Final Take-Home Project is worth 40 marks. The final take-home project instructions will be released during Week 7, on the 10th of April 2025, giving you about 8.5 weeks to work on the final project. You should start working right away, as there is much to do.

Deadline

The deadline for submissions is 11:59pm, Tuesday 10 June 2025. Submissions made past this hard deadline attract a penalty as per standard University policy, as follows:
•	Deduction of 5% of the maximum mark for each day that passes after the 11:59pm due date. This is equivalent to a penalty of 2 marks per day.
•	After ten calendar days late, a mark of zero will be awarded for the final project if no submission is made.

Project overview

You have been randomly assigned a company that is currently listed on the ASX, and you are asked to act as a sell-side analyst who applies fundamental analysis for equity investment to estimate the intrinsic value of this company. You are required to make an investment recommendation on whether to buy or not to buy the stock of your assigned company. Your recommendation must include a discussion of the investment horizon: how long one should buy and hold before the price increases, or how long one could short-sell the stock before the price starts falling. You are required to go through everything that you have learned in ACCT6003 to identify the most appropriate types of analyses and approaches that would help you appraise the value of the company that you have been assigned. Different companies may require different types of analyses and approaches, and would certainly require different inputs depending on their business fundamentals.
Assigned company

The list of assigned companies is provided on Canvas on the page ‘Final take-home project’, in the file named acct6003_final_assigned_companies.csv. Each company is assigned using an SID number. Search by your own SID to find the assigned company. All companies have been randomly assigned using a random number generator. The companies have been selected based on the following criteria:
•	It is a publicly traded company that is currently listed on the Australian Stock Exchange.
•	It is not on the ASX100 index (see www.asx100.com), as determined on the 5th of April 2025.
•	It is not a company that is discussed in case studies or lecture slides (i.e. not Vista Group, G8 Education, Qantas, Woolworths, Humm Group, Air New Zealand).
•	It is not a financial company (e.g. bank, credit institution, insurance).
•	It is not a stapled security, a trust, a managed fund or an investment group.
•	It is not a holdings company (i.e. holding stocks of many other companies).
•	It is not a real estate management company.
•	It is not a regulated utility company (e.g. energy supplier, water company, public transport).
•	It is not an energy, oil or gas exploration and extraction company.
•	It is not a mining company.
•	It did not have suspended trading as at the 5th of April 2025.
•	The company was listed on the ASX before 1 July 2019.
•	Its market capitalisation on the 5th of April 2025 was greater than AU$100 million.
•	Its stock has been actively traded for the last year.

This leaves only medium-sized companies that are not too complex, have stock liquidity and have straightforward operations. It is possible that we have missed something, so you are required to check that your company satisfies all the above criteria and let us know if you suspect a violation.
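Looking up your assigned company in the CSV can be scripted rather than scanned by eye. A minimal sketch follows; the column names 'SID' and 'Company' are assumptions, so check the actual header row of acct6003_final_assigned_companies.csv, and in practice you would open the downloaded file rather than the inline stand-in used here.

```python
import csv
import io

# Stand-in for acct6003_final_assigned_companies.csv; the SIDs and company
# names below are invented for illustration only.
sample = io.StringIO(
    "SID,Company\n"
    "12345678,Example Ltd\n"
    "87654321,Another Ltd\n"
)

def find_company(rows, sid):
    """Return the company assigned to the given SID, or None if absent."""
    for row in csv.DictReader(rows):
        if row["SID"] == sid:
            return row["Company"]
    return None

print(find_company(sample, "12345678"))  # -> Example Ltd
```

With the real file you would call `find_company(open("acct6003_final_assigned_companies.csv"), "your SID")`.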
Mergers

An additional condition that is difficult to identify without close examination of the ASX announcements is that the assigned company has not merged with another company during the last three years, because that would suggest a material structural change in its operating profile. You are required to closely examine the company that you have been assigned to learn whether there have been any mergers in the last three years. If there has been a merger, then please get in touch with the Unit coordinator to discuss the issue and the potential allocation of another company. The only way to find out this information conclusively is to search the ASX announcements of the company for the last three years. Go to https://www2.asx.com.au/, search for your company’s ticker, and then click on “See all announcements”.

Note that it is not a problem if your company has acquired other smaller companies. This is a normal part of business; many companies grow through acquisitions. In fact, you should treat acquisitions as an important source of information for understanding prospective growth in earnings. There would be a problem only if two companies have decided to merge into a much larger and more complex entity (i.e. a joint merger, not an acquisition).

Takeover target

In addition, you should also find out whether your company is a current takeover target of another, larger company. This information would also be posted on the ASX noticeboard and should be recent. If you find that your company is going through a process of selling its shares to another larger company, or is considering doing so, then again get in touch with the Unit coordinator to discuss the issue and the potential allocation of another company.

Financial year-end and annual reports

Most ASX companies have a 30 June financial year-end, but some choose to have a year-end on the 31st of December or some other date.
The financial year-end does not matter for this project, because every company will have already published its 2024 annual report, and none will have published the 2025 report by the 10th of April 2025, when the final project is assigned. You are required to obtain the annual reports and other publicly released information from the companies for at least the last 5 years, from 2020 to 2024. To find the annual reports and other company publications relevant for investment appraisal, you can search the investor section of the company’s website, the ASX notice board, or the Morningstar DatAnalysis database.

Excel template

On Canvas, you will be provided with an Excel valuation spreadsheet guideline to help you develop the valuation analysis using the Residual Operating Income (ReOI) model. Do not attempt any cash-based valuation or the generic RIV valuation; apply only the ReOI model. This valuation template will be made available after Week 8, when we discuss the ReOI model. Note that the Excel template is just a guideline on how to approach the valuation exercise. It lists the necessary types of analysis that must be completed as you prepare for the ReOI valuation. Feel free to change the Excel template to suit your analysis; the analysis of different companies requires different information.

Valuation date

Your valuation date will be the date of the submission of your report. For example, if you submit the report on the 6th of June 2025, then this is your valuation date. To do that, the ReOI valuation model requires a baseline estimate of Common Shareholders’ Equity at time 0, CSE0, and a beginning value of Net Operating Assets for the first year, NOAt−1, as shown in the ReOI formula of the Week 8 lecture handout. You can take the most recent values of CSE0 and NOAt−1 from the 2024 year-end report, to which everyone has access. For example, let’s say that your company has its year-end report as at 30 June 2024.
This means that the ReOI model will estimate the intrinsic value P0 as at 30 June 2024. To project the intrinsic value of equity to a future date, say the 6th of June 2025 (i.e. 341 days later), we need an estimate of the required rate of return and the intrinsic value at the beginning of the period, and we apply the continuous compounding formula:

FV = PV × e^(r×t)

where FV is future value, PV is present value, r is a rate, t is time in years, and e ≈ 2.71828.

As an example, consider an estimated intrinsic value of P(30 Jun 2024) = $12.50 per share using the ReOI model. Let’s say also that the cost of equity is re = 0.1, which we assume stays the same from 30 June 2024 to 6 June 2025. Then the expected intrinsic value as at 6 June 2025 will be equal to:

P(6 Jun 2025) = P(30 Jun 2024) × e^(re×t) = 12.50 × e^(0.1×341/365) = $13.7241 per share

where 341 is the number of days that have elapsed from 30 June 2024 to 6 June 2025.

Discovery of information

A large part of the final project relies on the discovery of value-relevant information. You can form expectations using information from any relevant source. The information may be drawn from past annual reports, the annual reports of competitors, company announcements, related financial news posted on the ASX or other financial news outlets, historical analysis of past financial statements using data from databases, macroeconomic information related to the company’s business environment and industry, socioeconomic information, demographics, natural events and more. See the relevant Canvas announcement on what constitutes trustworthy and reliable information. Make sure to clearly reference all sources used. You can choose any finite forecast horizon and any terminal year as you see appropriate, but you are required to justify your choice using evidence.

Cost of capital

You are required to use the Weighted Average Cost of Capital (WACC) as the discount rate for the ReOI valuation.
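The worked example above can be checked in a couple of lines, using the figures from the example ($12.50 per share, re = 0.1, 341 days):

```python
import math

pv = 12.50          # intrinsic value per share as at 30 June 2024
r = 0.10            # cost of equity, assumed constant over the period
t = 341 / 365       # years elapsed from 30 June 2024 to 6 June 2025

fv = pv * math.exp(r * t)   # continuous compounding: FV = PV * e^(r*t)
print(round(fv, 4))         # -> 13.7241
```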
The cost of equity component should be calculated using the Capital Asset Pricing Model (CAPM). If the CAPM does not yield a reasonable estimate, then you must explain why this is so and apply an alternative qualitative estimate as you judge to be right. As for the cost of debt, you can use any reasonable approach, as discussed in the lectures and the workshops.

Assessment criteria

There is no one correct answer, and you are not assessed on the accuracy of the valuation. No one knows what the correct valuation of a company is; all valuations are intrinsic opinions. You will be assessed on your ability to:
(i)	Apply residual operating income (ReOI) valuation.
(ii)	Identify and justify key factors that drive equity value within the context of the company that you have been assigned.
(iii)	Analyse informative quantitative and qualitative data, including financials, news, industry and economy data, historical financial data, annual report information, macroeconomic data and any other useful information.
(iv)	Provide evidence-based, convincing argumentation justifying the forecasts, the WACC calculations, and the valuation model.
(v)	Explain the limitations of your analysis and the sensitivity of key assumptions.
(vi)	Produce an investment recommendation with a recommended investment horizon.
(vii)	Present a quality professional report with supporting documentation, including tables, graphs, and a list of references.

General advice

Do not fiddle around with the data. To run a valuation analysis, first perform fundamental analysis and business analysis, then forecast, and then decide on model parameters. Adjust the valuation only if you have made a computational error or have discovered new information, not because it did not give you the answer that you thought it would. Use the word limit wisely. Report only what is absolutely useful and important for documenting and explaining the analysis.
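As a hedged illustration of how the CAPM estimate feeds into the WACC: all inputs below (risk-free rate, beta, market risk premium, capital weights, cost of debt, tax rate) are made-up assumptions, not data for any assigned company.

```python
# CAPM cost of equity: re = rf + beta * (E[rm] - rf)
rf = 0.04        # risk-free rate (assumed)
beta = 1.2       # equity beta, estimated from raw market data (assumed)
mrp = 0.06       # market risk premium (assumed)
re = rf + beta * mrp

# WACC using market values of equity E and debt D (assumed, $m)
E, D = 400.0, 100.0
rd = 0.06        # cost of debt (assumed)
tax = 0.30       # corporate tax rate (assumed)
wacc = E / (E + D) * re + D / (E + D) * rd * (1 - tax)
print(f"re = {re:.3f}, WACC = {wacc:.3f}")   # -> re = 0.112, WACC = 0.098
```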
Any general introductions, generic definitions, and superfluous explanations of theoretical concepts will attract negative marking. Go straight to the analysis. Assume that the reader is an expert financial manager who wants to read a concise, evidence-backed valuation analysis so that s/he can decide whether to buy or sell.

Submission

Your submission should take the form of a concisely presented professional report of at most 3,000 words. The word count excludes the reference list, tables and graphs. Do not include any appendices in the report; the completed Excel template will serve as your list of appendices and supporting information. You are required to submit both the Word file and the Excel valuation file through Turnitin on Canvas. The Excel file should contain all calculations used in the valuation report. Do not put your name on the Word report or the Excel file; marking will be anonymous, and you should identify yourself only through your SID. Name the files acct6003_final_project_SID.docx and acct6003_final_project_SID.xlsx, substituting SID with your personal student identifier.

Queries

Any queries about the final take-home project must be posted on Canvas, so that all students benefit equally from the answers.

Report structure and marking criteria

The report should start with a very brief executive summary of only a couple of sentences. The executive summary should identify the company, the valuation date, the traded price on that date, the intrinsic price that was calculated, the investment recommendation and the investment horizon. The report must then contain seven sections enumerated using the exact headings described below. Here is an example of a report structure. Under each section, we also describe what is expected to be developed as part of the valuation report and how marks will be allocated.
Example

Valuation report for ABC Ltd

Executive summary

As at the 6th of June 2025 [here enter the date that you submit the final project], ABC Ltd traded at $4.47 per share on the ASX. This valuation report estimates an intrinsic equity value of $7.23 per share and recommends BUYing the shares of ABC. The price is estimated to continue rising over the next 36 months.

Section 1. Business Fundamentals

In this section, you are required to demonstrate a solid understanding of business fundamentals, the industry, and the relevant operational environment of the company. You are required to identify the business model, business strategy, competitors, and key competitive threats. You must identify and discuss how the company captures economic value and the key drivers of the company’s current position and performance, using judgment, evidence, and data analysis. The lessons learned from the lectures of Weeks 1 and 2 and the Week 2 and Week 3 assignments can help you complete this task. [9 marks]

Section 2. Accounting analysis and earnings quality

You are required to discuss the analysis of earnings quality, the identification of key reporting targets for this company, and any suspicious events or red flags in the financials that may be alarming. You are encouraged to apply accounting adjustments where necessary. You must demonstrate an in-depth understanding of the key accounting policies for this company and its accounting flexibility, and compare accounting judgment across direct competitors. The lessons learned from the lecture of Week 3 and the Week 4 assignment can help you complete this task. [6 marks]

Section 3. Reformulation of statements

You are required to reformulate the income statements and balance sheet statements informing financial ratio analysis, for at least the latest 5 financial years.
You must provide an appropriate and clear presentation in the Excel file to enable examination of the reformulation, including notes documenting the sources of information and any assumptions made. In the Word file, you are required to discuss the most important signals that are a direct result of the reformulation of the statements. The lessons learned from the Week 7 lecture, the Week 8 lecture, and the Week 9 assignment can help you complete this task. [3 marks]

Section 4. Financial analysis

You are required to present convincing, evidence-based arguments using data and theory, involving detailed application of financial ratio analysis, in order to inform prospective analysis of performance, liquidity, solvency, efficiency, and all inputs relevant to equity valuation. Your focus must be on providing convincing arguments that justify your prospective analysis. The lessons learned from the Week 9 lecture, the Week 10 lecture, the Week 10 assignment and the Week 11 assignment can help you complete this task. [9 marks]

Section 5. Cost of capital

You are required to estimate the cost of equity using the CAPM, the cost of debt, and the cost of operations using the WACC. You must estimate the CAPM using raw market data and the cost of debt using available information. All calculations must be reported in the Excel file. In the Word report, you are required to briefly discuss the underlying uncertainty and the appropriateness of your calculated cost of capital. The lessons learned from the Week 5 assignment and the Week 6 assignment can help you complete this task. [4 marks]

Section 6. Credit analysis

You are required to analyse the credit risk, creditworthiness, liquidity and insolvency risk of your company. You must provide a focused appraisal from a credit analyst’s perspective, discussing capital structure, leverage strategy, cost of debt, cash management, and anything else that would interest a credit analyst.
The lessons learned from the Week 11 lecture and the Week 12 assignment can help you complete this task. [4 marks]

Section 7. ReOI valuation and sensitivity analysis

You are required to apply the Residual Operating Income (ReOI) valuation model to estimate the intrinsic value of your company. You are required to provide a critical discussion of the key ReOI inputs and any important limitations in conducting the ReOI valuation. The discussion of limitations must be directly related to available information, not a general discussion of theory or the valuation model. You are also required to carry out reasonable sensitivity analysis and stress-testing of the key assumptions that drive the valuation. The sensitivity analysis should stress-test those assumptions that are most uncertain. The lessons learned from the Week 4 lecture, the Week 5 assignment and the Week 6 assignment can help you complete this task. [3 marks]

Reference list

Here, type in the reference list of all sources of information used in the report.

The remaining [2 marks] are allocated to the presentation of a quality professional report, including the effective use of tables and graphs, quality references and properly cited sources, an easy-to-read report, and professional use of language without superfluous or speculative arguments lacking evidence. You are encouraged to use small tables and graphs to support your analysis in the report. Longer tables and data should be left in the Excel file, which will serve as your appendix.
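To make the mechanics of the ReOI valuation and its sensitivity analysis concrete, here is a minimal sketch. All numbers are invented for illustration, the horizon is only three years, and the continuing (terminal) value is omitted for brevity. It uses the textbook shortcut that equity value equals CSE0 plus the present value of forecast ReOI, where ReOI_t = OI_t − WACC × NOA_{t−1}; follow the Week 8 handout's exact formula in your own work.

```python
# Illustrative inputs (all assumptions, $m)
CSE0 = 500.0                     # common shareholders' equity at time 0
NOA  = [600.0, 630.0, 660.0]     # beginning net operating assets, years 1..3
OI   = [70.0, 74.0, 78.0]        # forecast operating income, years 1..3

def reoi_value(wacc):
    """Equity value = CSE0 + sum of discounted ReOI (no terminal value)."""
    pv = sum((oi - wacc * noa) / (1 + wacc) ** t
             for t, (oi, noa) in enumerate(zip(OI, NOA), start=1))
    return CSE0 + pv

# Sensitivity: stress the WACC, often the most uncertain input
for w in (0.08, 0.09, 0.10):
    print(f"WACC = {w:.0%}: equity value = {reoi_value(w):.1f}")
```

A fuller stress-test would vary the forecast drivers (sales growth, margins, asset turnover) as well as the discount rate.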

$25.00

[SOLVED] INFS5720 Business Analytics Methods Term 1 2025

INFS5720 Business Analytics Methods
Individual Assignment, Term 1 2025

This assignment covers Lectures 1 to 3. It accounts for 15% of the final grade for Business Analytics Methods. The deadline is 21 March 2025, 15:00:00. Do not wait until the last minute: late submissions (even by a few seconds) will still be flagged as late by Moodle, and the teaching team strictly follows Moodle's flagging mechanism. UNSW has a standard late submission penalty of 5% of the full marks per day, capped at five days (120 hours) from the assessment deadline, after which a student cannot submit the assessment; no variation is permitted.

You are to submit a WORD document (not PDF) to Moodle: Left menu > Assessments Hub > Individual Assignment > Individual Assignment Submission. Turnitin is turned on to check the similarity score among all submissions. To avoid a high Turnitin score, do NOT copy the assignment questions into the report. The similarity score is not generated upon submission; this is to avoid students relying on the Turnitin score and tuning it by repeated resubmission. If the work is done independently, the similarity score should not be an issue.

Every page's header should contain your zID, similar to this Individual Assignment guideline file. Do NOT write your name. A cover page is optional. Please use your zID as the Submission Title when you upload. The file name should also be "your zID.docx". Submissions that do not adhere to this will be penalized.

Details of report format:
Length: should not exceed 4 pages, including the relevant graphs, tables, references, screenshots, and appendices (if any), but excluding the cover page (a cover page is optional). This limit is deliberately set at 4 pages, to ensure that AI's lengthy answers are summarized succinctly and to the point.
Font style: Times New Roman for writing; Courier New for code (if any)
Font size: 12 for writing; 10 for code (if any)
Line spacing: 1
Margins: 1 inch (2.5 cm) for the top, bottom, left and right
Include the page number on each page

Up to 25% of the full marks may be deducted as penalties for inappropriate or poor paraphrasing; serious cases will be investigated. More information on effective paraphrasing strategies can be found at https://www.student.unsw.edu.au/paraphrasing-summarising-and-quoting. Your writing should be succinct, but not at the expense of excluding relevant details. Use plain and simple language. Some questions may not have absolutely right or wrong answers, and you have the liberty to express your views about the problem; however, your points must be supported by evidence and sound reasoning. It is the quality, not the length, that counts. Make sure you follow the report guidelines and style specified in this assignment.

Please follow the APA style of referencing. More details can be found at https://www.student.unsw.edu.au/apa. Where students use ChatGPT or any Generative AI tool in their work, this must be appropriately cited according to discipline norms, e.g., right below the written paragraph that used Generative AI, or included in an appendix. How to reference Generative AI within APA can be found at https://apastyle.apa.org/blog/how-to-cite-chatgpt

Any student may be called upon to provide a viva voce (from the Latin, meaning 'living voice') for any assignment. A viva voce is an interview-style meeting where you will be asked to explain, discuss, or use information related to any assignment or work produced for this course. These can be used to ascertain knowledge and ability, including the extent to which the student has undertaken the required reading, done preparatory work, and can demonstrate understanding of what they have written or presented. Viva voces are used in conjunction with submitted assessment work, not instead of submitted work.
(Used with permission; created by Assoc. Prof. Lynn Gribble, UNSW Sydney.)

The answers should be presented in order according to the sequence of the questions listed in the assignment; that is, in the order of Q1 a), Q1 b), Q2 a), etc. You can have several sub-sections within a section if you deem it appropriate. The report must be self-contained. It is essential to include all relevant tables and figures as evidence to support your answers.

Summary:
• Write in plain English, clearly and succinctly
• Write appropriately to the context (AI's answers are usually too generic)
• Provide references at the end of the report
• Good overall presentation of the report

Overview

"Individual Assignment.ipynb" is provided to guide students through standard operations on the data set and, in some cases, provides model implementations that are almost complete, so that students can focus on interpreting the results. Do NOT submit the .ipynb file. The total marks for this assignment are 60.

As an Analyst in the Analytics team of a women's hospital, your role is to analyse patient data from the diabetes diagnosis process. Your goal is to uncover patterns, assess risk factors, and provide insights that can help improve early detection, patient care, and treatment strategies for diabetes within the female patient population. The dataset is in 'Diabetes_Diagnosis.csv'; the description of the table is in 'Diabetes_Diagnosis_Description.xlsx'. Before you run any code for a sub-question, please read the description and the instructions for that sub-question in the code file very carefully, to understand the purpose of the code and how to run it correctly.

Question 1

We will use K-means to study the hidden patterns in this dataset. The pre-processing step uses normalisation with MinMaxScaler, with predetermined min and max, to reduce the range of all columns to [0, 1]. This is important so that all variables have equal impact on the clustering results.
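The normalisation step can be sketched as follows. The three columns and their values are made-up stand-ins for the diabetes data, and the scikit-learn calls in the comment mirror what the provided notebook presumably does:

```python
import numpy as np

# Toy stand-in for three columns of the diabetes data (Glucose, BMI, Age);
# the values are invented for illustration.
X = np.array([[85.0, 26.6, 31.0],
              [183.0, 23.3, 32.0],
              [89.0, 28.1, 21.0]])

# Min-max normalisation: rescale every column to [0, 1] so that no single
# variable dominates the Euclidean distances that K-means relies on.
X_scaled = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# The scikit-learn equivalent (presumably what the notebook does):
#   from sklearn.preprocessing import MinMaxScaler
#   from sklearn.cluster import KMeans
#   X_scaled = MinMaxScaler().fit_transform(X)
#   inertias = [KMeans(n_clusters=k, n_init=10).fit(X_scaled).inertia_
#               for k in range(2, 16)]   # data for the elbow plot
```

Without this step, a wide-range column such as Glucose would swamp narrow-range columns such as BMI in the distance calculation.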
(a) There are two options for running the K-means clustering algorithm. Option 1 is to use all columns; Option 2 is to exclude the 'Outcome' column. The given code produces each variable's distribution in each cluster and specifically compares each scaled and original column's mean and median values across all clusters. Discuss which option produces more useful clustering results and why. (10 marks)

(b) In the given code, we run K-means with k ranging from 2 to 15 and plot the elbow line with respect to the Sum of Squared Distances. A plot of the Average Silhouette Score is also provided for your reference. Pick the best k in your opinion and state why this k value is the best. (10 marks)

(c) Rerun K-means with the best k value in your opinion. Run the given code to see the data distribution of all columns in each cluster. Based on the variables that are significantly different across clusters, study the unique characteristics of each cluster and give an intuitive name to each cluster, so that you can quickly convey the cluster results to the medical team. For each cluster, make suggestions to the various medical teams on how they should handle each cluster differently in the next steps, e.g. follow-up consultations, health check-up reminders, etc. (10 marks)

Question 2

Your next task is to predict whether a patient has diabetes by building a Logistic Regression model. You are predicting the 'Outcome' column, using all other columns as input variables.

(a) Run the given code for Logistic Regression. Discuss the p-values and coefficients generated for two variables: 'SkinThickness' and 'Pregnancies'. Explain in plain English the impact of these two input variables on the target variable, Outcome. (10 marks)

(b) We define the target variable Outcome=1 as the positive class, i.e., the patient has diabetes. Explain in plain English what a False Negative (FN) case and a False Positive (FP) case are. Discuss which one, FN or FP, is worse, and whether the predictive model of your hospital should be optimized for Precision or Recall. (10 marks)

(c) The model above uses a default threshold of 0.5 for diagnosing diabetes. Run the given code to try thresholds from 0.1 to 0.9. As the threshold goes up from 0.1 to 0.9, what do you observe about Precision and Recall? Based on the hospital's goal of minimising misdiagnosed cases while ensuring timely intervention, suggest the best threshold and justify your choice. (10 marks)
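The behaviour probed in Question 2(c) can be seen with a toy sweep. The labels and predicted probabilities below are invented (not the hospital's data), and the `precision_recall` helper is ours, but the pattern to look for is general: as the threshold rises, recall falls and precision generally rises.

```python
# Made-up ground-truth labels and model probabilities for 10 patients.
y_true = [1, 1, 1, 0, 0, 1, 0, 0, 1, 0]
p      = [0.92, 0.80, 0.55, 0.50, 0.35, 0.30, 0.25, 0.15, 0.70, 0.60]

def precision_recall(threshold):
    """Precision and recall when predicting 1 iff probability >= threshold."""
    pred = [1 if pi >= threshold else 0 for pi in p]
    tp = sum(1 for t, q in zip(y_true, pred) if t == 1 and q == 1)
    fp = sum(1 for t, q in zip(y_true, pred) if t == 0 and q == 1)
    fn = sum(1 for t, q in zip(y_true, pred) if t == 1 and q == 0)
    prec = tp / (tp + fp) if tp + fp else 1.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return prec, rec

for th in (0.1, 0.3, 0.5, 0.7, 0.9):
    prec, rec = precision_recall(th)
    print(f"threshold={th:.1f}  precision={prec:.2f}  recall={rec:.2f}")
```

At a low threshold nearly everyone is flagged, so recall is high (few missed diabetics) at the cost of precision; at a high threshold the reverse holds, which is exactly the FN-versus-FP trade-off of part (b).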

$25.00

[SOLVED] 42904 CLOUD COMPUTING AND SOFTWARE AS A SERVICE ASSIGNMENT 3

CLOUD COMPUTING AND SOFTWARE AS A SERVICE 42904
ASSIGNMENT 3

Structure of this assignment: This assessment item focuses on Amazon Web Services (AWS) application development.

Weight of this assignment towards the overall subject grade: 35% of the overall subject grade.

Submission deadline and requirements:

The deliverables of this assignment are two-fold, as follows:
(a) Overview of the system architecture (Deliverable 1)
(b) The developed system in AWS (Deliverable 2)

Please note that you will be provided with a dedicated AWS account for this assignment. All development and deployment tasks must be carried out within this specific account. Both deliverables are due on 2 June 2025 by 6 PM.

Submission requirements:
(a) Upload a soft copy of Deliverable 1 on Canvas by 2 June 2025, 6 PM.
(b) Deliverable 2 must be completed in the AWS account provided to you by 2 June 2025, 6 PM.

Objectives: This assignment is linked to the following Subject Level Objectives: (1), (2), and (4).

Academic standards: Please refer to the statement on academic conduct and the use of plagiarism detection software in the subject outline.

Late submission policy: You must hand in and email the assignment on time. An extension may be granted for illness, misadventure, or other extenuating circumstances beyond your control. The issue of an extension should be raised with the Subject Coordinator as soon as possible after the circumstances occur. An extension will generally not be granted on or after the assignment's due date. Written consent in the form of an email should be obtained from the Subject Coordinator, allowing for late assignment submission. Please note that such permission for late assignment submission will only be considered due to prior unforeseen, extraordinary and genuine circumstances beyond your control.
Late assignments submitted outside of these parameters will have one mark per day deducted, and assignments more than seven days late (without any special consideration) will receive zero marks.

Team/Group registration: This is an individual assessment item. You are required to complete this assignment on your own. This assignment also requires a bit of independent research.

Assignment description:

Consider a small startup currently in its early stages of operation. Its setup comprises a LAMP stack (Linux, Apache, MySQL, and PHP) running on a single desktop PC in a small office. Like many early-stage startups, it expects significant, rapid, and unpredictable growth in the coming months, and wants to move its offering to Amazon Web Services (AWS). As part of moving their current infrastructure to the cloud, they have requested a system architecture and implementation on AWS that addresses the following concerns:

1. Scalability: The application must be able to scale on demand. Given the uncertainty around the timing and extent of future growth, the startup wants to avoid both over-provisioning and under-provisioning.
2. Disaster recovery: The system must incorporate disaster recovery measures to maintain high performance and throughput and ensure continuous availability even under adverse conditions.

Your task in this assignment: Design and deploy a scalable, elastic, highly available, and fault-tolerant architecture that supports the startup's organic growth. This design should explicitly address the concerns outlined in the above project brief, ensuring it meets all specified requirements.

Assignment deliverables:

(a) Deliverable 1 (AWS system architecture). Prepare a PDF document (limited to four or five pages) that clearly and concisely presents your proposed architecture diagram.
Provide a justification for each Amazon Web Services (AWS) component you include, explaining how it supports the solution's requirements. Additionally, explicitly outline any assumptions made during the design process and list all AWS services used to implement the solution. You may use any diagramming tool of your choice to illustrate the system architecture.
(b) Deliverable 2 (Develop the Application in AWS). Use the AWS account that has been provided to you to build and deploy the application. You may either:
(i) Leverage AWS Elastic Beanstalk to configure and deploy the application, OR
(ii) Configure and deploy each component individually using AWS services.
In either case, you are required to utilize the following AWS services:
(a) AWS Elastic Beanstalk
(b) Amazon EC2
(c) Custom AMI (Amazon Machine Image). (Please note that you are required to create your own custom AMI.)
(d) Custom security groups allowing HTTP and SSH requests (all instances must use the same custom security group)
(e) Load Balancer
(f) Auto Scaling (with a minimum of two instances and a maximum of eight instances). Set scaling triggers on network output traffic with an upper threshold of 60% and a lower threshold of 30%.
(g) RDS (deployed across multiple Availability Zones)
(h) Custom Virtual Private Cloud (VPC) (with at least two subnets in different Availability Zones). All subnets must be public.
(i) All instances must use the same custom key pairs.
(j) Set email notifications for important events in your environment (if using Elastic Beanstalk).
Resources:
1. Lab and lecture contents regarding AWS
2. https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/GettingStarted.html
3. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Tutorials.WebServerDB.CreateWebServer.html
4. https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-register-lbs-with-asg.html#as-register-lbs-console
5.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/install-LAMP.html
Assessment Process for Assignment 3: The following assessment criteria will be used in the grading process for this assignment.
System Architecture (10 marks)
• Does the developed system architecture meet the requirements outlined in the Assignment Description?
• Have relevant AWS services been used to address the requirements?
AWS System Development (25 marks)
• Does the developed AWS system meet the requirements outlined in the Assignment Description?
Total: 35 marks
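As a sketch of requirement (f) above, the parameters below show the shape of the Auto Scaling configuration one might eventually pass to boto3. This is an assumption-laden illustration, not the assignment solution: the group name, launch template ID, and subnet IDs are placeholders, the boto3 calls are left commented out because they need a live AWS account, and note that CloudWatch measures NetworkOut in bytes, so the 60/30 figures from the brief would in practice map to byte thresholds (or to Elastic Beanstalk trigger settings).

```python
# Sketch of the Auto Scaling setup in requirement (f): min 2 / max 8 instances,
# scaling on network output traffic. All resource names/IDs are hypothetical.
asg_params = {
    "AutoScalingGroupName": "startup-asg",          # placeholder name
    "MinSize": 2,
    "MaxSize": 8,
    "LaunchTemplate": {"LaunchTemplateId": "lt-PLACEHOLDER"},
    "VPCZoneIdentifier": "subnet-a,subnet-b",       # two subnets, two AZs
}

scale_out_alarm = {
    "AlarmName": "networkout-high",
    "MetricName": "NetworkOut",
    "Namespace": "AWS/EC2",
    "Statistic": "Average",
    "Threshold": 0.60,                              # the brief's 60% upper trigger
    "ComparisonOperator": "GreaterThanThreshold",
}
# The scale-in alarm mirrors the scale-out one with the 30% lower trigger
scale_in_alarm = {**scale_out_alarm,
                  "AlarmName": "networkout-low",
                  "Threshold": 0.30,
                  "ComparisonOperator": "LessThanThreshold"}

# With credentials configured, these dicts would be passed to boto3, e.g.:
# import boto3
# boto3.client("autoscaling").create_auto_scaling_group(**asg_params)
# boto3.client("cloudwatch").put_metric_alarm(**scale_out_alarm, ...)
print(asg_params["MinSize"], asg_params["MaxSize"])
```

Deliverable 1 should justify each of these choices (why min 2, why two AZs) rather than just list them.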


[SOLVED] BUSAN 300 Data Wrangling Semester 1 2025

BUSAN 300: Data Wrangling Semester 1, 2025 Wrangling Project Specifications
1. Purpose
The purpose of the project is for you to achieve all learning outcomes of this course. The process of working through it is more valuable than the mark received. In attempting and completing this project correctly, you will:
• experience sourcing real, raw data
• refine your data wrangling skills
• encounter real data problems and employ the tools/techniques learned in this class and beyond to solve them
• improve your data-based problem-solving skills
• practise patience when completing your work
• explore data in the context of important global issues such as sustainability and equity in society
1.1 Project Theme
The overarching theme for the project is Supporting a Sustainable and Equitable Future through Data-Driven Citizenship, with a focus on sustainability and just, ethical and equitable societies. This involves understanding how data can inform practices and policies for a better future, and you are encouraged to explore datasets that reveal societal dynamics, biases, and inequalities. For example, projects under this theme might investigate how biases in data can lead to unintended consequences for certain minority groups and/or trends in sustainability. However, you may also propose any theme and topic that you find compelling to work on. We will review this during the project proposal phase described below.
2. Guidelines
2.1 Submission
Project Proposal: Submit a single document in PDF format containing your project proposal to Canvas and have it approved by the stated deadline. Attach links to data files (e.g. Google Drive) when needed. Follow the Project Proposal Template.
Project Report: Submit a single document in PDF format containing all the contents of your project report to Canvas by the stated deadline (23:59 NZDT, Sunday, 15th of June 2025). Attach links to data files (e.g. Google Drive) when needed.
Follow the Project Report requirements stated in Section 3.
2.2 Weighting
The Project Report submission is worth 15% of your final grade.
2.3 Academic honesty and integrity
This project is an individual assessment. You must complete all the work yourself. Do not submit work that you did not produce. Do not work in a way which could result in parties producing the same or very similar work. In attempting this assignment you agree to adhere to all the principles and practices of academic honesty and integrity for the University of Auckland outlined here: https://www.auckland.ac.nz/en/students/forms-policies-and-guidelines/student-policies-and-guidelines/academic-integrity-copyright.html. Any form of cheating, plagiarism, assistance in cheating, unfair collaboration, or other behaviour deemed to be academic misconduct will not be tolerated. Academic misconduct will be dealt with according to the University's Student Academic Conduct Statute.
3. Tasks
This wrangling project is an exploration of public data with an intention to discover insights of interest to New Zealand or a global audience. The project is split into two parts: project proposal and project report. You should follow the steps described in the following sections in the order that they are listed.
3.1 Project Proposal
Have your project proposal approved by the stated deadline. Each student's project should be sufficiently interesting (i.e. worth doing), doable, and of similar complexity to other projects (to ensure fairness in marking and learning experience). To ensure your project meets this standard, please spend sufficient time reading the project requirements, exploring potential datasets, and planning your project. Summarise your plan in a project proposal document which briefly states:
1.
The two datasets you will use in the project:
o At least one dataset must be from the list provided under the Project Datasets page in Canvas, which have been specifically chosen to include the themes of ethics, equity, sustainability, and responsible citizenship. Use these datasets to form the basis for your project. We ask you to use at least one provided dataset to simulate a real working environment, where you may be tasked with analysing data on a particular initial topic and must then find additional resources to support your findings.
o Explain how the two datasets interrelate and support the exploration of your chosen theme. Your second dataset does not have to directly relate to the given themes, and exploration outside of these themes is fine, provided you can demonstrate how your chosen datasets align with the principles of data-driven citizenship and contribute to a sustainable, ethical and equitable future.
o Clearly state the sources: what are the data, their file types, and where do they come from?
o Provide links to the source of the files.
2. How you plan to combine the two datasets:
o Consider what attribute(s) they have in common.
o Consider what technique(s) you plan to use.
3. The intended final format of your combined dataset (e.g. Excel, MongoDB, etc.)
4. A backup plan for completing this project
o Consider what you could do to avoid forfeiting all 15% of your final grade due to poor forethought or underestimation of the requirements of this project.
o You should enact this plan if you are unable to successfully combine your two datasets by Week 11.
Copy and use the template provided in the Project Proposal Template page in Canvas within a text editor and write no more than 300 words. Submit your proposal as a PDF file to the "Project Proposal" Canvas assignment and ensure it is approved by the stated deadline. You will receive feedback on your proposal.
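For point 2 of the proposal (how the datasets will be combined), the most common plan is a key-based join on the shared attribute(s). A minimal pandas sketch with invented stand-in datasets that share a hypothetical region code:

```python
import pandas as pd

# Hypothetical stand-ins for the two proposed datasets, sharing a "region" key.
# Real projects would load these from their sourced files instead.
emissions = pd.DataFrame({
    "region": ["AKL", "WLG", "CHC"],
    "co2_kt": [4200, 1800, 2100],
})
population = pd.DataFrame({
    "region": ["AKL", "WLG", "CHC", "DUD"],
    "pop":    [1_700_000, 540_000, 400_000, 135_000],
})

# Inner join on the common attribute; regions without a match are dropped
combined = emissions.merge(population, on="region", how="inner")

# A derived column of the kind a combined dataset enables
combined["co2_per_capita_t"] = combined["co2_kt"] * 1000 / combined["pop"]
print(combined)
```

Stating the join key, join type (inner/outer), and what happens to unmatched rows in your proposal makes the combining plan concrete and assessable.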
If your proposal is approved (marked as "complete" on Canvas) you should proceed with your proposed datasets. If your proposal is rejected (marked as "incomplete" on Canvas) you should meet with the teaching staff to discuss the reason for rejection, then revise and resubmit. You are encouraged to repeat this process until your proposal is approved, and hence it is recommended that you submit your initial proposal as early as possible. There are no marks for completing this task, but your proposal must be approved for your project report to be accepted and marked.
3.2 Project Report: Source and Audit Two (2) Datasets
Source and audit two disparate datasets from separate sources. You will be required to combine the datasets into a single dataset, so ensure the chosen datasets are suitably related. Remember that the datasets will be used to attempt to discover insights of interest to New Zealand or the global public. For learning purposes, your datasets must be:
• sufficiently different – in format/file type from one another, and
• sufficiently complex – complexity can arise from
o large data size (e.g. tens of thousands of rows/instances, or tens of columns)
o non-uniform data structures (e.g. the data is an amalgamation from multiple sources, or different attributes exist for different instances)
o dirty data
Summarise your sourcing and auditing activities in the report. See the requirements in the "Wrangling Details" section below.
3.3 Project Report: Pose Questions
Pose three (3) meaningful questions that could only be answered if the two datasets were combined. These questions should be impossible to answer if the datasets were not combined. You can assume any problem or situation/scenario under which these questions are posed. You will be required to attempt to answer the questions in your report; whether you obtain correct answers is not important as long as your attempt is sensible. Document your questions in the report.
See the requirements in the "Project Summary" section and "Questions and Answers" section below.
3.4 Project Report: Combine Datasets to be Stored in a Single Format
Wrangle your two datasets to clean and combine them into a single dataset. Store the combined dataset in a single data store (i.e. its "final storage format"). Data stores could be a file (e.g. Excel workbook) or a database (e.g. Microsoft SQL Server, Microsoft Access, SQLite, MongoDB). You must not use any file converter or automation tool to transform data. If your datasets are too large for the tools used in this course to handle, you may use a sufficiently large subset of the data to build a proof-of-concept that the data can in fact be combined. Document your wrangling processes in the report. See the requirements in the "Wrangling Details" section below.
3.5 Project Report: Answer Questions
Using your cleaned and combined dataset in its "final storage format", attempt to answer the three questions you posed in any way you can. Answering is likely to involve one or more of the following: PivotTables, XPath, MongoDB queries, visualisations, SQL queries, etc. You are advised to keep your exploration simple. Bear in mind that the learning outcomes of this assessment relate to data wrangling process and technique, not statistical analysis or data mining. Document your answers in the report. See the requirements in the "Project Summary" section and "Questions and Answers" section below.
3.6 Project Report: Write Report
Document the previous tasks in one report. Lay out your report in three sections in this order:
1. Project Summary
2. Wrangling Details
3. Questions and Answers
The requirements of each section are as follows.
3.7 Project Summary
Write this section for a business audience (e.g. a senior business decision maker). Summarise your entire project. Include the three questions you posed, each with a short summary and discussion of your answer/conclusion for it.
Provide some insight about your findings.
3.8 Wrangling Details
Write this section for a "data expert" audience (e.g. a classmate). Detail the following wrangling processes in your report, specifically:
For each dataset
• Its origin. Provide a direct link to the data, or if that is not possible, explain how you obtained the data.
• Its general characteristics. What format is it in; what is the structure; how many columns/fields; how many records? etc.
• An initial audit. What observations did you make about the data? Are there any obvious or potential problems that may have to be dealt with before combining it with the other dataset?
Steps you performed to combine the datasets
• The level of detail should be such that any of your classmates can replicate the process.
o Stating what you did is more important than stating how you did it. For example, rather than explaining what menu items you clicked on, it is sufficient to state that you "removed duplicates on the user id and timestamp fields".
o Include relevant screenshots of intermediate steps etc.
• State all transformations performed on each dataset individually before they were combined.
• State all transformations performed to combine the datasets.
• State the steps performed to store the combined dataset in its "final storage format".
Provide a link to your final combined data store so that it can be inspected. To do this, please upload your combined data store to a platform such as Google Drive and set the viewing permissions to "Anyone with this link can view". Please provide the link to your data store clearly in your project report. Marks will be deducted if the final data store is not provided or is inaccessible.
3.9 Questions and Answers
Write this section for a "data expert" audience (e.g. a classmate). Detail how you used your combined dataset, in its "final storage format", to reach the answers/conclusions to your questions.
Specifically, for each question:
• State the question and your answer/conclusion.
• State the steps performed to produce the answer. Include what tools/software you used and what queries you executed etc.
o The goal here is not so that the process can be replicated, but to show your answer/conclusion is sound or otherwise reasonable.
o Even if no exact answer was found, the steps to reach that result still need to be documented. You should also state what would be needed to reach an answer.
o Include relevant screenshots of intermediate steps etc.
Submit the report as a single PDF file to the "Project Report" Canvas assignment by the stated deadline.
4. Marking Guide
The experience of completing the project is intended to be more important than the results you produce. No template for the report will be provided, so as not to limit anyone's style and creativity. The content and structure you are required to provide are explicitly given in Section 3. The table below summarises how your project will generally be marked. Where tasks are completed exceptionally well (demonstrated by their execution or documentation or both), marks for those tasks can compensate for tasks which are not completed as well.
Criterion and Marks (out of 15)
Project Summary: Appropriateness (2)
• The summary is written for a general business audience
• A business worker can read only this section and find out all they need to know about the project
Project Summary: Accuracy (2)
• The summary is concise, descriptive, accurate, and free from writing errors
• The summary provides all the context necessary to understand and appreciate the project
Wrangling Details: Data Sourcing and Auditing (2)
• The origin, characteristics, and initial audit are written for each dataset
• The characteristics and audit of the datasets are accurate and meaningful
Wrangling Details: Data Combining and Storing (6)
• The documented process to transform and combine the data is comprehensive, accurate, meaningful/replicable, and free from writing errors
Questions and Answers: Answering (2)
• The documented process to attempt to discover answers is logical and free from writing errors
• The answers and resulting conclusions are sound and reasonable
Questions and Answers: Overall Considerations (1)
• The project has sufficient complexity and appeal
• The work submitted is professional, including formatting, grammar etc.


[SOLVED] 161762 2025 Project

161.762 2025 Project Part I: Written Report Template (Max 7 pages, excluding references and appendix)
1. Introduction (4 Marks)
• What dataset are you using, and what is its real-world or business context?
• What are the main research questions you aim to answer?
• Why are these questions important from a business or analytical perspective?
2. Data Description (4 Marks)
• Number of observations and variables; types of variables (numeric, categorical, etc.)
• Any cleaning, transformations, or imputations performed
• Summary statistics and 1–2 basic visualizations
3. Multivariate Method 1 (e.g., PCA, Factor Analysis) (4 Marks)
• What is the method, and why did you choose it for your data and question?
• What assumptions are involved? Were they met?
• Show a key visualization (e.g., scree plot, biplot). Walk the reader through it:
o What does it show?
o Which variables are most important?
o What does it reveal about the structure of your data?
• What business insight did you gain?
4. Multivariate Method 2 (e.g., Cluster Analysis) (4 Marks)
• Why is this method suitable?
• What were the key plots (e.g., dendrogram, cluster map)?
• What defines each group/cluster in your data?
• Are the results interpretable and relevant?
5. Multivariate Method 3 (e.g., Discriminant Analysis, MDS, MANOVA, etc.) (4 Marks)
• Justify the method choice in your context
• Include at least one meaningful plot and walk the reader through its interpretation
• What new or different insight did this method give compared to the others?
6. Comparative Analysis (4 Marks)
• How did the methods differ in what they revealed?
• Were their conclusions complementary, overlapping, or contradictory?
• Which method added the most value, and why?
7. What Didn't Work (4 Marks)
• Briefly mention one thing you tried that didn't produce useful results or that you revised
• What did this teach you about method selection or assumptions?
8.
Conclusion (4 Marks)
• What are the main takeaways?
• What are the limitations of your analysis?
• How can a business or analyst act on these insights?
Appendix
• Additional plots (if you used them to make decisions), outputs, or reference tables not included above; chat logs with LLMs
Part 2: Video Presentation Template (Max 5 minutes) 18 Marks
Use the following structure to guide your video recording. You may present slides or talk through your report using screen recording software (e.g., Zoom, OBS).
1. Introduction (30 seconds)
• State your dataset, business context, and research questions
2. Method 1 Visualization & Interpretation (1 minute)
• Display one plot
• Explain what it shows, how you interpret it, and why it matters
3. Method 2 Visualization & Interpretation (1 minute)
• As above, focus on the second method
4. Method 3 Visualization & Interpretation (1 minute)
• As above, focus on the third method
5. Comparative Insights (1 minute)
• Synthesize how the methods gave different or complementary insights
6. Conclusion (30 seconds)
• What's the single most important insight a business user should remember?
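As an illustration of the kind of output Method 1 (PCA) reports on, here is a minimal scikit-learn sketch on synthetic data. The template itself is tool-agnostic; the data here is invented (three correlated variables plus one independent one), and the explained-variance ratios printed at the end are exactly what a scree plot visualizes:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Synthetic data: 200 observations, 4 numeric variables;
# the first three share a common latent factor, the fourth is noise
latent = rng.normal(size=(200, 1))
X = np.hstack(
    [latent + 0.3 * rng.normal(size=(200, 1)) for _ in range(3)]
    + [rng.normal(size=(200, 1))]
)

# Standardise first: PCA is sensitive to variable scale
Xs = StandardScaler().fit_transform(X)
pca = PCA().fit(Xs)

# Explained variance ratios are what a scree plot displays
print(pca.explained_variance_ratio_)
```

Here the first component should dominate, reflecting the shared latent structure; the report would interpret which variables load on it (via `pca.components_`) and what that means for the business question.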


[SOLVED] Tutorial 3 Logit Model

Tutorial 3 Logit Model
Preparation
§ Go through the lecture slides
Tutorial Objectives:
§ To conduct a binary logit model and interpret the outputs
§ To conduct a multinomial logit model and interpret the outputs
Assume that you are hired by LinkedIn to investigate issues related to the continued use of their platform. Work with the "LinkedIn Continued Use" data mentioned in the lecture. You need to conduct the following tests in Python to answer the relevant questions.
Part 1: Binary logit model
§ What might influence people's continued use of LinkedIn (i.e., stay or leave)?
§ Run a binary logit model following the instructions on the lecture slides.
§ Generate the coefficients and p-values for the IVs.
§ Generate the model performance measure.
§ You should be able to replicate the results on the lecture slides.
§ Interpret the outputs.
§ Write down your result explanations and recommendations to LinkedIn in the notebook and submit the IPYNB file via Canvas (one copy per student).
Part 2: Multinomial logit model
§ What might influence people's continued use of LinkedIn (7-point scale – intention to continue)?
§ Conduct a multinomial logit model following the instructions on the lecture slides.
§ Generate the coefficients for the IVs.
§ Generate the model performance measure.
§ You should be able to replicate the results on the lecture slides.
§ Interpret the outputs.
§ Upload the IPYNB output to Canvas (one copy per student).


[SOLVED] COMP 2131 Introduction To Computer Systems Fall 2020 Processing

COMP 2131: Introduction To Computer Systems Fall 2020
Introduction
Welcome to COMP 2131, Introduction to Computer Systems, a three-credit course that can be applied toward a Thompson Rivers University (TRU) credential. The Course Guide contains important information about the course structure, learning materials, and expectations for completing the course requirements. It also provides information about how and when to contact your Open Learning Faculty Member, an expert in the course content, who will guide you through the course. Take some time to read through the Course Guide to familiarize yourself with what you need to do to successfully complete your course. Before you begin your coursework, it is also a good idea to read the Student Handbook, available at http://www.tru.ca/assets/ol/ebooks/ol st handbook/. The Handbook provides information about key policies and procedures, such as course withdrawals and cancellations, and how to schedule final exams. It also includes information about the various student services available to you, and telephone, email, and website contact details. If you have any questions, please feel free to contact your Open Learning Faculty Member. We hope you enjoy the course.
Course Description
Students learn the basic concepts of computer systems. Students are introduced to the concepts of computer architecture, the C and assembly programming languages, as well as the use of the Linux operating system. Students learn about memory organization, data representation, and addressing. Students are introduced to the concepts of machine language, memory, caches, virtual memory, linkage and assembler construction, as well as exceptions and processes.
Recommended Requisites: COMP 1231 with a score of C or better; or COMP 1230 with a score of C or better; or COMP 2120 with a score of C or better
Exclusion: COMP 2130
Learning Outcomes
Upon completion of this course, students will be able to:
1. Describe the fundamentals of computer architecture
2.
Write programs with the powerful C programming language
3. Demonstrate programming through assembly language
4. Explain the critical relationship between programming and computer architecture
5. Demonstrate efficient programming through code optimization


[SOLVED] MARK205 Marketing Research and Consumer Insights Autumn 2025 Web

Faculty of Business and Law School of Business
MARK205: Marketing Research and Consumer Insights Subject Outline 6 credit points
Subject Information Autumn, 2025
Section A: General Information
Learning Outcomes
Student Learning Outcomes
On successful completion of this subject, students will be able to:
1. Critically explain how marketing research is conducted at academic level, and translated into practical knowledge
2. Define a research question and determine the sample required to investigate the research question
3. Identify and address ethical issues in marketing research
4. Demonstrate independent secondary research skills for collection and analysis of research data, and presentation of research findings
5. Design qualitative and quantitative research tasks, and understand the process of implementing research tasks
6. Analyse and interpret research data using introductory data-analysis techniques
7. Critically analyse marketing research reports
Subject Description
Marketing research is the function that connects consumers and other relevant stakeholders to marketers through information that supports decision-making. Marketing research assists in the systematic and objective identification of marketing problems and opportunities, designs and implements the method for collecting information, analyses the results, and disseminates the findings and their implications. Failure to engage in marketing research activity leads to disadvantages in the competitive marketplace. Introductory Marketing Research will focus on the practice of marketing research by integrating theory and application. The subject includes the research process from problem definition to communicating the results and exposes students to introductory qualitative and quantitative data analysis techniques.
Course Learning Outcomes
Course Learning Outcomes can be found in the Course Handbook.
Learning Platform
Learning Platform
(Moodle) Subject Site
The University's Learning Platform uses Moodle as its Learning Management System, providing access to course materials, activities, and other Learning Platform systems. The Learning Platform (Moodle) subject site can be accessed via your SOLS page.
Foundational Work Integrated Learning
This subject contains elements of 'Foundational WIL'. Students in this subject will observe, explore or reflect on possible career pathways or a work-related aspect of their discipline.
Major Text(s)
Burns, A. C. & Veeck, A. (2020). Marketing Research, 9th Edition. Pearson. ISBN: 9781292318042. Estimated Price at UOW UNISHOP: $112.95. Textbook details are available online from the University Bookshop at https://unishop.uow.edu.au/
Key References
The recommended readings below are not intended as an exhaustive list of references. Students should also use the library catalogue and relevant databases to locate additional resources. Textbooks that look at particular methods (e.g., interviewing, questionnaire design), analysis methods (e.g., t-tests, linear regression), and analysis software (e.g., Nvivo, SPSS) are available if you require more depth in a particular topic. However, these are too numerous to list. If you want more depth in general, then I would recommend consulting a more comprehensive Marketing Research textbook (Business Research Methods textbooks might also be consulted, but tend to be more general). In the UOW library, these include, but are not limited to:
• Bradley, N 2013, Marketing Research: Tools & Techniques, Oxford: Oxford University Press.
• Malhotra, NK 2015, Essentials of Marketing Research: A Hands-On Orientation, Global Edition, Pearson.
• Feinberg, F, Kinnear, T, & Taylor, J 2013, Modern Marketing Research: Concepts, Methods, And Cases, Mason, OH: Cengage Learning.
• Hair, J, Bush, R, & Ortinau, D 2009, Marketing Research: In A Digital Information Environment, Boston: McGraw-Hill Irwin.
• Malhotra, NK 2010, Marketing Research: An Applied Orientation, Upper Saddle River, N.J.; London: Pearson Education.
• McDaniel, C, & Gates, R 2007, Marketing Research, Hoboken, N.J.: Wiley.
• Zikmund, W, & Babin, B 2010, Exploring Marketing Research, Mason, Ohio; Australia: South-Western/Cengage Learning.
Peer-reviewed academic journals can be a useful source of methodological advances. The journals that are more likely to include research methods discussions include (but are not limited to):
• Journal of Marketing Research
• Journal of Consumer Research
• Journal of Marketing
• Journal of the Academy of Marketing Science
• International Journal of Research in Marketing
• Journal of Business Research
• Australasian Marketing Journal
• Academy of Management Journal
• Journal of Applied Psychology
• Organisational Research Methods


[SOLVED] STATS 779 Professional Skills for Statisticians 2019

Department of Statistics
STATS 779: Professional Skills for Statisticians
Test: May 29, 2019, 4:00 pm to 8:00 pm.
INSTRUCTIONS
* Total marks = 70.
* Attempt all questions.
* Note: Some questions are open-ended and it may not be clear how extensive your answer should be. Do not write long answers to these questions. You should be able to answer any question of this type in a few paragraphs at most, or within half a page.
1. Write complete LaTeX code to reproduce the slides given in Figure 1.
Figure 1: Beamer slides.
Tips:
• The slides use Warsaw and rectangles for the presentation and inner themes, respectively.
• Use the institute command to add institution details.
• Specify the word fragile as a frame option to display verbatim text in a frame.
• Create a Verbatim environment using the fancyvrb package to display the LaTeX code shown in the right-most block of Figure 1. [12 marks]
2. Write an R code chunk in a knitr document to reproduce Figure 2 (including the caption). Note: Both plots should be displayed next to each other in the compiled document. [5 marks]
Figure 2: Histograms of the speed and dist variables in the cars dataset.
3. Write the YAML header, R code chunk and inline code to reproduce the output file given in Figure 3. Note: course.df is a data set provided in the s20x package. The columns in the data frame are shown in the following output:
'data.frame': 146 obs. of 15 variables:
 $ Grade      : Factor w/ 4 levels "A","B","C","D": 3 2 1 1 4 1 4 4 3 3 ...
 $ Pass       : Factor w/ 2 levels "No","Yes": 2 2 2 2 1 2 1 1 2 2 ...
 $ Exam       : int 42 58 81 86 35 72 42 25 36 48 ...
 $ Degree     : Factor w/ 4 levels "BA","BCom","BSc",..: 3 2 4 4 4 2 3 1 2 2 ...
 $ Gender     : Factor w/ 2 levels "Female","Male": 2 1 1 1 2 1 1 2 1 1 ...
 $ Attend     : Factor w/ 2 levels "No","Yes": 2 2 2 2 1 2 2 1 2 2 ...
 $ Assign     : num 17.2 17.2 17.2 19.6 8 18.4 14.4 8.8 17.6 12 ...
 $ Test       : num 9.1 13.6 14.5 19.1 8.2 12.7 7.3 10.9 10.9 9.1 ...
 $ B          : int 5 12 14 15 4 15 4 3 10 8 ...
 $ C          : int 13 12 17 17 1 17 14 0 4 8 ...
 $ MC         : int 12 17 25 27 15 20 12 11 11 16 ...
 $ Colour     : Factor w/ 4 levels "Blue","Green",..: 1 4 1 4 1 1 2 4 2 3 ...
 $ Stage1     : Factor w/ 3 levels "A","B","C": 3 1 1 1 3 1 3 3 2 2 ...
 $ Years.Since: num 2.5 2 3 0 3 1.5 0.5 1.5 2.5 4 ...
 $ Repeat     : Factor w/ 2 levels "No","Yes": 2 1 1 1 1 1 1 1 1 1 ...
Figure 3: R markdown output. [8 marks]
4. Write BibTeX entries to be included in a .bib file to produce the following bibliography:
References
K. Aas and I. Hobæk Haff. The generalised hyperbolic skew Student's t-distribution. Journal of Financial Econometrics, 4(2):275–309, 2006.
A. Azzalini. R package sn: The skew-normal and skew-t distributions (version 0.4-2). Università di Padova, Italia, 2006. URL http://azzalini.stat.unipd.it/SN.
D. J. Bartholomew. Stochastic Models for Social Processes. Wiley, London, 2nd edition, 1973. [11 marks]
5. The ToothGrowth dataset comes with base R. It is described as follows: The response is the length of odontoblasts (cells responsible for tooth growth) in 60 guinea pigs. Each animal received one of three dose levels of vitamin C (0.5, 1, and 2 mg/day) by one of two delivery methods, orange juice or ascorbic acid (a form of vitamin C and coded as VC). The columns in the data frame are shown in the following output:
> str(ToothGrowth)
'data.frame': 60 obs.
of     3   variables :   $   len   :   num     4 . 2   11 . 5   7 . 3   5 . 8   6 .4   10   11 . 2    11 . 2   5 . 2   7   . . .   $   supp :   Factor  w /  2   levels   " OJ " ," VC " :  2  2  2  2  2  2  2  2  2  2   . . .    $   dose :  num     0 . 5   0 . 5   0 . 5   0 . 5   0 . 5   0 . 5   0 . 5   0 . 5   0 . 5   0 . 5    . . . Write R code to produce Figure 4 using ggplot2. Note: To centre a title in ggplot2 use theme(plot.title  =  element_text(hjust  =  0.5))                                [10 marks] 6  For each of the following SELECT statement pairs, explain why the results are either different or the same. Figure 4: Boxplots of Tooth Growth [8 marks] 7  Details of passengers and crew who sailed on the Titanic are contained in the  .csv file titanic .csv. The column names and format of the column entries is shown in Figure 5 Figure 5: Top of titanic .csv The passenger name can be very long, up to 90 characters because wives’ names include their husband’s name. The fare is in pounds, to 4 decimal places.  The orginal fare was in pounds, shillings and pence which explains the strange fractions in the fare values. Note that underscores are permitted in column names in MySQL although it is generally rec- ommended not to use underscores. You may use them in this example. a Write MySQL code to create a table called titanic for this data set.  Do not create an automatically incremented variable as the primary key for the data.  Instead specify the passenger name as the primary key. b Write MySQL code to read the data from titanic .csv into the table titanic. c  Write  MySQL  code  to  produce  a  table  showing  the  average  fare  by  passenger  class (Pclass), rounded to 1 decimal place. d Write MySQL code to add a column to the titanic table which is of type DATE, named DateOfBirth, which can be NULL. e Mr .    Owen  Harris  Braund was born on March 30, 1880. Write MySQL code to enter his date of birth in the column DateOfBirth.                            
[16 marks]
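As an illustration of the kind of answer Question 1 expects, here is a minimal Beamer skeleton using the themes and packages named in the tips. The title, author, institute text, and frame contents are placeholders, not the actual slides of Figure 1:

```latex
\documentclass{beamer}
\usetheme{Warsaw}            % presentation theme from the tips
\useinnertheme{rectangles}   % inner theme from the tips
\usepackage{fancyvrb}        % provides the Verbatim environment

\title{Sample Title}         % placeholder
\author{A. Student}          % placeholder
\institute{Department of Statistics\\The University of Auckland}

\begin{document}

\begin{frame}
  \titlepage
\end{frame}

% `fragile' is required for a frame containing verbatim text
\begin{frame}[fragile]{Verbatim example}
  \begin{Verbatim}
\begin{frame}
  Hello, world!
\end{frame}
  \end{Verbatim}
\end{frame}

\end{document}
```

Compiling this with pdflatex produces a two-slide deck; the actual exam answer would replace the placeholder frames with the content shown in Figure 1.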


[SOLVED] ACCT6003 Fundamental Analysis for Equity Investment Week 9 Assignment on Reformulation and ReOI valuation

ACCT6003 Fundamental Analysis for Equity Investment
Week 9 Assignment on Reformulation and ReOI valuation
For this assignment you are required to reformulate the 2024 and 2023 statements of Vista Group International Limited (ASX ticker VGL) and then perform a Residual Operating Income (ReOI) valuation exercise using the reformulated statements and the following model assumptions:
• An analyst claims that VGL's annual WACC is 6%.
• The analyst also claims that residual operating income will grow in perpetuity after 2024 by 3% per annum.
• The ASX price of VGL as at year end 31 December 2024 was $2.84 per share.
All the information you need to complete the reformulation can be found in VGL's published 2024 annual report. On Canvas, you are also provided with an Excel template to complete this assignment. Submit for assessment only the completed Excel spreadsheet. Do not submit a report.
Required:
(1) Using the provided Excel template, for 2023 and 2024, you are required to reformulate VGL's consolidated Income Statement together with the Statement of Other Comprehensive Income in the worksheet named 'Q1 IS reformulation', and also to reformulate the Consolidated Statement of Financial Position in the worksheet named 'Q1 BS reformulation'. Follow the reformulation steps presented in the Week 8 lecture. The reformulated balance sheet must satisfy the accounting identity CSE = NOA – NFL – MI or CSE = NOA + NFA – MI, and the reformulated income statement must satisfy the accounting identity CI = OI – NFE – MIC or CI = OI + NFI – MIC, where MI is minority interest on the balance sheet and MIC is the minority interest component on the income statement. Note that Minority Interest is also known as Non-Controlling Interest. You are also required to check that total Operating Assets plus total Financial Assets equals Total Assets as reported in the published Balance Sheet.
Likewise, total Operating Liabilities plus total Financial Liabilities must equal Total Liabilities as reported. You are required to perform these checks in the reformulated statements as indicated in the Excel template. You are also required to find information in the annual report to produce a detailed breakdown analysis of the components of the operating items, and to reference the source in the annual report where you found each breakdown. [4 marks]
(2) In the Excel worksheet named 'Q2 ReOI', you are required to apply the Residual Operating Income (ReOI) valuation model to estimate the intrinsic equity price per share for VGL as at 31 December 2024, as described in the Week 8 lecture notes. [1 mark]
(3) Assume that there is no available information for forming any finite-horizon forecasts and that all analysts in the market rely entirely on forecasting only a terminal value at the 2024 year end to estimate the intrinsic price of equity. If there is no available information, how can the analyst explain the difference between the estimated intrinsic price per share from Q2 and the actual price of VGL trading on the ASX at $2.84? What information can the analyst use to explain the difference and make sense of it? In fewer than 200 words, record your answer in the worksheet named 'Q3 Discussion'. [1 mark]
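With no finite-horizon forecasts, the valuation in Q2 collapses to book equity plus a growing-perpetuity terminal value of ReOI. A sketch of that calculation follows; every input figure here is hypothetical (it is not VGL's actual data, which must come from the annual report), and the formula shown is one common terminal-value form of the ReOI model:

```python
# Hypothetical inputs -- illustrative only, NOT VGL's actual figures
cse_2024 = 150.0    # common shareholders' equity, $m (assumed)
reoi_2024 = 4.0     # residual operating income for 2024, $m (assumed)
wacc = 0.06         # analyst's claimed WACC
g = 0.03            # claimed perpetual growth in ReOI after 2024
shares = 230.0      # shares outstanding, millions (assumed)

# Terminal value: ReOI grows at g forever, discounted at the WACC
terminal = reoi_2024 * (1 + g) / (wacc - g)

# Intrinsic equity value = book equity + present value of future ReOI
intrinsic_equity = cse_2024 + terminal
price_per_share = intrinsic_equity / shares

print(round(price_per_share, 2))   # -> 1.25; compare with the $2.84 ASX price
```

The gap between such an estimate and the market price is exactly what Q3 asks the analyst to interpret: with these assumed numbers the market would be pricing in more value than the terminal-value forecast captures.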


[SOLVED] PIP Control Session

PIP Control – MATLAB Session
Introduction: You will be using a simple model and testing the design of PIP control by pole placement and Linear Quadratic (LQ) control. Then you will apply the same process to design a controller for a "real" motor modelled in Simulink.
Exercise Aim: To practise the design of PIP control systems using MATLAB.
How to work: Work either individually or in pairs and keep rough notes for your own future reference in your notebook.
The Tasks to be undertaken
Task 1: First, create a directory in which to work and download the files for this exercise.
1. Open Ex3.m in the editor. Follow the design process in MATLAB and run the file. See if you can swap the pole locations and what difference this makes.
2. Modify Ex3 to do the design by Linear Quadratic control. Initially use default values for the weights, then try modifying them and see what the effect is.
Task 2: DC Motor Control
Now you will use your model of the motor obtained in Lab 2 to design a PIP controller and test it on the complete motor system. The design specification for the closed loop is given below.
Motor Control Specification
Time response:
- Type-1 servo performance (the error should be zero in the steady state)
- For a step input of 100 rad/s the performance must meet: Rise time
> load my_motor_model.mat
Form the nmss form, and design a controller for the motor with LQ and/or pole placement.
a. Test it on the TF model (Motor_discrete.slk)
b. Test it on the "real" motor model (motor_real.slk)
* Note: I have provided an emergency model roger_possible_motor_model.mat for use if your system identification exercise was a complete disaster or you have forgotten to bring your model. However, this model is not the "correct" one; it is just one of several possible models. So stick with your own if you have one.
Deliverables: The results will be discussed during the exercise (with support from tutors).
The results will be part of the assessed coursework report that you will write (see separate coursework report guidance). Once you have successfully completed this Lab, the coursework will be straightforward (i.e. the PIP control design part of it is done, apart from generating graphs, diagrams and reporting). Files:  
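The pole-placement idea used in Task 1 can be illustrated outside MATLAB with a few lines of Python. This is a minimal sketch assuming a hypothetical first-order discrete-time motor model x[k+1] = a·x[k] + b·u[k]; the parameters a and b are made up for illustration and are not your identified lab model:

```python
# Hypothetical first-order discrete-time model: x[k+1] = a*x[k] + b*u[k]
a, b = 0.9, 0.5          # assumed model parameters (NOT the lab motor)
p_desired = 0.6          # desired closed-loop pole, inside the unit circle

# With state feedback u[k] = -k_gain * x[k], the closed loop becomes
# x[k+1] = (a - b*k_gain) * x[k]; placing the pole at p_desired gives:
k_gain = (a - p_desired) / b

# Simulate the regulated system from a non-zero initial condition
x = 1.0
for _ in range(20):
    x = (a - b * k_gain) * x

print(abs(x) < 1e-3)     # True: the state has decayed close to zero
```

Moving p_desired closer to zero speeds up the response at the cost of a larger gain, which is exactly the trade-off to explore when swapping pole locations in Ex3.m.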


[SOLVED] EC371 midterm review

EC371 midterm review (ch. 3, 6, 8, 9.1–9.4, plus the study of the value of Lumpinee Park in Bangkok).
Note: a midterm HELP SHEET is encouraged: handwritten only; one sheet of paper of any size; can be two-sided. Check whether you feel comfortable explaining the concepts from the exam outline on Blackboard under the midterm materials folder. Take the online practice test on Blackboard (not for a grade).
Chapter 3
1. Pollution: internalizing negative externalities. Students should understand the potential and the limitations of internalizing externalities. An important point is that not all externalities can be internalized; be able to differentiate between instances where externalities can and cannot be internalized (examples?). Note: pollution externalities will be difficult to internalize fully when transaction costs are high or when estimates of social costs and benefits are unreliable. The idea of an "optimal" pollution level is based on the idea that reducing pollution to zero is unrealistic. Students should realize that eliminating pollution entirely carries significant opportunity costs. From there, a discussion of the costs of eliminating pollution plays the dominant role in defining the "optimal" pollution level.
2. Pigouvian tax as a primary policy to combat the market failure associated with pollution. Tax per unit versus a percentage tax: which one is a Pigouvian tax and why? What does it mean that the Pigouvian tax is imposed upstream in the production process?
3. Coase Theorem: critical assumptions
a) The Coase Theorem can be illustrated using the MC of pollution for a downstream farmer or community, while the MB of pollution could represent a factory owner. Set up the graphical scenario and go through the negotiation to reach a mutually beneficial agreement.
b) The discussion of the free-rider effect in the context of pollution externalities and negotiations.
Examples of free-rider issues are observed in relation to many environmental problems, such as community negotiations against factory emissions.
c) While the distribution of property rights does not affect efficiency when applying the Coase Theorem, rights are relevant in considering equity. The issue of equity plays a critically important role in applications of Coase's bargaining approach. For example, who should have the property rights in specific cases such as a factory polluting a neighborhood, a driver responsible for automobile emissions, or a developer seeking to build houses on a wildlife habitat?
4) Positive externalities: amenities, and why having an amenity can be associated with market failure (i.e., the market equilibrium is not efficient when a positive externality is present). Explain how a subsidy can increase economic efficiency in the case of a positive externality. Use a supply-and-demand graph to support your explanation.
Chapter 6:
1. Types of value: use, nonuse, option. Students should understand the differences among use, nonuse, and option values and be able to provide an example of each.
2. Types of non-market valuation methods. Example of each method's application to Lumpinee Park.
3. Externalities: the effect on different types of values. Example of a true/false question with a comment: "Externalities may involve changes to both use and nonuse values." Is it true or false? Answer: … Comment: …
Chapter 8:
1. Types of pollution: taxonomies of pollutants. Students should understand the potential and the limitations of each pollution abatement policy considered in chapter 8 (e.g., tax, cap-and-trade, standards, zoning).
2. Types of pollution abatement policies and their characteristics (cost-effectiveness, flexibility, accuracy in achieving a specific pollution reduction goal). Students are encouraged to put the table from chapter 8 in their help sheets.
Examples of multiple-choice questions (students will be asked to provide a brief comment):
1.
As emissions levels stabilize for a stock pollutant ...
a) health and other impacts also stabilize.
b) health and other impacts begin to decrease.
c) health and other impacts continue to increase.
d) health and other impacts may increase or decrease.
e) health and other impacts are reduced to zero.
2. As emissions levels increase at a steady rate for a stock pollutant, accumulations ...
a) decrease at a steady rate.
b) decrease at an exponential rate.
c) remain constant.
d) increase at a steady rate.
e) increase at an exponential rate.
3. Which of the following is an example of a global pollutant?
a) Chlorofluorocarbons (CFCs)
b) Ground ozone (O3)
c) Carbon monoxide (CO)
d) Nitrogen oxides (NOx)
e) Sulfur dioxide (SO2)
4. A pollution tax would be preferable to a system of transferable permits when ...
a) the marginal costs of damage are steep and the marginal costs of control are relatively stable.
b) the marginal costs of damage are steep and the marginal costs of control are steep.
c) the marginal costs of damage are relatively stable and the marginal costs of control are relatively stable.
d) the marginal costs of damage are relatively stable and the marginal costs of control are steep.
e) the marginal costs of damage are elastic and the marginal costs of control are elastic.
5. Which one of the following statements is false?
a) Technological progress will lower the price of tradable pollution permits.
b) Under a pollution tax, technological progress will reduce overall pollution levels.
c) Under a system of tradable permits, technological progress will reduce overall pollution levels.
d) A pollution tax creates an incentive for technological progress.
e) Under a system of tradable permits, technological progress can result in some firms increasing their pollution levels.
6. Tradable permits are likely to result in less inefficiency, relative to a pollution tax, when ...
a) the marginal costs of damages are steep and the marginal costs of pollution reduction are relatively stable.
b) the marginal costs of damages are steep and the marginal costs of pollution reduction are steep.
c) the marginal costs of damages are relatively stable and the marginal costs of pollution reduction are relatively stable.
d) the marginal costs of damages are relatively stable and the marginal costs of pollution reduction are steep.
e) the marginal costs of damage are elastic and the marginal costs of pollution reduction are also elastic.
For Questions 7-9 refer to the graph below.
7. The graph above shows the marginal pollution control costs per ton for a firm that would pollute at Qmax without any regulation. Suppose a pollution tax of T1 per ton were implemented, with the firm reducing pollution to Qtax. What area(s) would represent the tax paid by the firm?
a) A
b) A+B
c) B
d) A+B+C
e) B+C
8. The graph above shows the marginal pollution control costs per ton for a firm that would pollute at Qmax without any regulation. Suppose a pollution tax of T1 per ton were implemented, with the firm reducing pollution to Qtax. What area(s) would represent the firm's pollution reduction costs, not considering the taxes it pays?
a) A+B
b) A+B+C
c) B+C
d) C
e) D
9. The graph above shows the marginal pollution control costs per ton for a firm that would pollute at Qmax without any regulation. Suppose a pollution tax of T1 per ton were implemented but the firm did not reduce its pollution from Qmax. What area(s) would represent the tax paid by the firm?
a) C+D
b) B+C+D
c) B
d) B+C
e) A+B+C+D
For Questions 10-11 refer to the graph below.
10. The graph above shows the marginal pollution control costs per ton for a firm. Suppose a pollution tax of T1 per ton were implemented, with the firm reducing pollution from 100 to 30 tons. What area(s) would represent the tax paid by the firm?
a) A
b) A+B
c) B
d) C
e) C+D
11. The graph above shows the marginal pollution control costs per ton for a firm. Suppose a pollution tax of T1 per ton were implemented, with the firm reducing pollution from 100 to 30 tons. What area(s) would represent the firm's pollution control costs, not considering the tax it pays?
a) A
b) A+B
c) B
d) B+C
e) B+C+D
Chapter 9.1–9.4: Principles of ecological economics.
Note that the boundary between traditional natural resource and environmental economics and ecological economics is not well-defined. One of the major differences between traditional and ecological economics is the emphasis on macroeconomic scale. Ecological economists generally believe that market-based measures are insufficient to limit scale to optimal levels and that explicit measures are required. Ecological economics stresses a systems perspective and attempts to understand economic activity in terms of the physical and biological principles of ecosystems. But this does not necessarily mean that traditional environmental economics does not utilize biological principles in its economic models (we will consider the model of fisheries in chapter 18, which combines marine biology and economics in one model). Ecological economists do not entirely deviate from traditional analysis. Many aspects of ecological economics are based on the extension of mainstream economic theory to the areas of sustainable economic development.
The concept of substitutability of natural and human-made capital, and the alternative concept of complementarity of natural and human-made capital. The difference between strong sustainability (which does not allow for any kind of substitution between natural and human-made capital, so natural capital levels should be maintained) and weak sustainability (which allows for some substitution between natural and human-made capital, so natural capital depletion is justified as long as it is compensated for by increases in human-made capital).
Examples of multiple-choice questions:
1. Which one of the following is not an example of natural capital?
a) The ozone layer
b) A river
c) A road
d) Soil
e) Air
2. The process of adding to productive capital over time is referred to as ...
a) capital depreciation.
b) net disinvestment.
c) physical accounting.
d) net investment.
e) strong sustainability.
3. Which one of the following statements is false?
a) The economically optimal level of pollution based on standard environmental economics is always less than the absorptive capacity of the environment.
b) Standard economic theory generally assumes substitutability between resources.
c) Satellite accounts can indicate the abundance or scarcity of natural resources over time.
d) Cutting down forests is an example of natural capital depreciation.
e) Complementarity suggests that both natural and manufactured capital are necessary for production.
4. The combined input and waste flows in a system is called ...
a) absorptive capacity.
b) scale.
c) throughput.
d) strong sustainability.
e) weak sustainability.
5. The human economic system is generally considered to be a(n) ...
a) closed system
b) open system
c) strong system
d) weak system
e) optimal system
6. (NOT ON THE EXAM; I disagree.) The global ecosystem (except for solar energy) is generally considered to be a(n) ...
a) closed system
b) open system
c) strong system
d) weak system
e) optimal system
7. Which one of the following economists has argued that the ecosystem imposes scale limits on the human economic system?
a) Herman Daly
b) John Hicks
c) David Ricardo
d) Adam Smith
e) John Maynard Keynes
8. Which one of the following statements is true?
a) Strong sustainability assumes that most resources can be substituted for one another.
b) Strong sustainability is easier to achieve than weak sustainability.
c) Instituting strong sustainability would require extensive government intervention in markets.
d) Strong sustainability is aligned with traditional economic theory.
e) Strong sustainability dictates that no non-renewable resources be used.
9. Which one of the following statements about sustainability is false?
a) Weak sustainability could allow a country to reduce its forest cover over time.
b) Weak sustainability assumes that human capital can substitute for natural capital.
c) Strong sustainability seeks to maximize human welfare over time.
d) Strong sustainability would maintain natural capital stocks.
e) Weak sustainability would require government intervention.
10. How does humanity's current ecological footprint compare with the available land on the planet?
a) Humanity's ecological footprint is about 25% of the available land.
b) Humanity's ecological footprint is about 85% of the available land.
c) Humanity's ecological footprint is about 120% of the available land.
d) Humanity's ecological footprint is about 165% of the available land.
e) Humanity's ecological footprint is about 260% of the available land.
11. Standard economic theory generally assumes that ...
a) humans are in a "full-world" stage of development.
b) there is a scale limit to the growth of the economic system.
c) natural capital depreciation needs to be incorporated into national accounts.
d) throughput should be minimized.
e) resources are substitutable.
12. What type of sustainability is generally associated with standard economic theory?
a) Open sustainability
b) Closed sustainability
c) Weak sustainability
d) Strong sustainability
e) Optimal sustainability
Examples of essay questions:
1) What is natural capital depreciation? List and briefly discuss three ways to incorporate natural capital depreciation into national accounting and policymaking.
2) What is meant by the "optimal scale of the macroeconomy"? How do traditional and ecological economics differ on the issue of macroeconomic scale?
3) Do you agree that the global ecosystem represents a closed system?
Example of a multiple-choice question that will not be on the exam (why?): What comprises the largest share of humanity's ecological footprint?
a) Agriculture
b) Forestry
c) Fishing
d) Urban development
e) Carbon emissions
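The Chapter 3 logic of an "optimal" (non-zero) pollution level and a Pigouvian tax can be checked with a tiny worked example. The linear curves below are hypothetical numbers chosen purely for illustration: MB is the polluter's marginal benefit from emitting, MC is the marginal external damage, the efficient quantity is where MB = MC, and the per-unit Pigouvian tax equals marginal damage at that quantity:

```python
# Hypothetical linear curves -- numbers are illustrative only
def mb(q):   # marginal benefit to the polluter of the q-th unit of emissions
    return 100 - 2 * q

def mc(q):   # marginal external cost (damage) of the q-th unit
    return 3 * q

# Efficient ("optimal") pollution: MB(q*) = MC(q*)  ->  100 - 2q = 3q
q_star = 100 / 5       # q* = 20, not zero: cutting pollution below q*
                       # forgoes units whose MB exceeds their MC
tax = mc(q_star)       # per-unit Pigouvian tax = marginal damage at q*

print(q_star, tax)     # -> 20.0 60.0
```

Facing a tax of 60 per unit, the firm voluntarily emits exactly 20 units, since every unit beyond that has MB below the tax; this is why a per-unit tax (not a percentage tax) replicates the marginal-damage signal.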


[SOLVED] Benchmark and Comparison of State-of-the-Art Ontology and Vocabulary Repositories for Social Sciences and Humanities

Bachelor Project Computer Science
Benchmark and Comparison of State-of-the-Art Ontology and Vocabulary Repositories for Social Sciences and Humanities
Abstract
The increasing adoption of the Semantic Web in the Social Sciences and Humanities (SSH) has led to the development of numerous ontology and vocabulary repositories. These repositories serve as crucial resources for structuring, sharing, and reusing domain knowledge. This work provides a benchmark and comparative analysis of leading repositories, evaluating their scope, accessibility, interoperability, and usability. By analyzing platforms such as the Ontology Lookup Service (OLS), BioPortal, the Social Science Thesaurus, and other domain-specific repositories, we assess their relevance for SSH research. The purpose of the study is to guide students and researchers in selecting the most appropriate repository for their work. Additionally, a practical implementation proposal for a bachelor's dissertation is outlined, focusing on ontology evaluation and integration within an SSH research framework.
1 Introduction
The Semantic Web has significantly influenced knowledge management and data integration in various disciplines, including the Social Sciences and Humanities [3]. The use of ontologies and controlled vocabularies facilitates semantic interoperability, making repositories essential tools for researchers. However, with numerous repositories available, a comparative analysis is necessary to determine the most suitable for SSH applications [10].
Key Terms:
• Semantic Web
• Benchmark
• Knowledge Management
• Ontology repository
• Social Sciences and Humanities (SSH)
2 Background
With the increasing usage of the Semantic Web in the Social Sciences and Humanities (SSH), ontology and vocabulary repositories have developed rapidly.
These ontologies and repositories are important resources for structuring, sharing, and reusing domain knowledge, enabling researchers to work with well-organized and machine-readable data. However, there is limited guidance for evaluating these repositories in terms of scope, accessibility, interoperability, and usability, especially in the field of SSH. A comparison and benchmark analysis of these ontologies and repositories can help researchers navigate and apply these tools effectively in SSH contexts.
• Benchmark: A systematic comparison and evaluation of tools or systems based on defined criteria to determine their performance or suitability.
• Ontology: A structured representation of knowledge within a domain, defining concepts and the relationships between them to enable semantic understanding.
• Vocabulary Repositories: Platforms that store and provide access to controlled vocabularies or ontologies, facilitating data organization, retrieval, and reuse.
• Social Sciences and Humanities: Academic disciplines focused on human society, behavior, history, language, and culture, often involving qualitative or complex data.
3 Problem
Ontology and vocabulary repositories are important for managing knowledge in the field of Social Sciences and Humanities (SSH). However, many of these resources were initially designed for structured, scientific domains and are not fully suited to the specific needs of SSH research. Researchers working in the SSH field therefore often face difficulties in deciding which repositories are the most suitable, particularly when considering subject scope and coverage, interoperability, usability, and integration with research tools such as RDF and SPARQL. In addition, integration of multiple ontologies, using RDF and SPARQL, is necessary for SSH because existing repositories might lack structural flexibility and user-friendly design.
Integration is important because it can significantly improve the overall scope and coverage, semantic consistency, and interoperability of ontological resources used in SSH research. This research addresses these problems by comparing ontology and vocabulary repositories, assessing key evaluation criteria, and providing SSH researchers with practical guidance.
4 Related Work
The paper [8] presents evaluations of scope, accessibility, interoperability, and usability to support a benchmark and comparative analysis of ontology and vocabulary repositories, thereby improving semantic consistency, interoperability, usability, and accessibility in the field of Social Sciences and Humanities.
The paper [11] presents evaluations of coverage and completeness, usability and accessibility, and interoperability, offering a structured benchmark of ontology and vocabulary repositories to support the selection of semantic resources in Social Sciences and Humanities research.
The paper [2] directly provides a structured benchmark and comparative analysis of ontology and vocabulary repositories, evaluating their coverage, usability, interoperability, and relevance for supporting Semantic Web applications in Social Sciences and Humanities research.
The paper [1] relates to the topic by showing how ontologies can be used to organize and present SSH knowledge clearly, which also helps explain why good ontology repositories are important for education and research.
The paper [13] shows the importance of interoperable infrastructures and FAIR data principles in SSH, supporting the need to benchmark ontology and vocabulary repositories to improve interoperability and usability.
5 Research Question(s)
• How do leading ontology repositories compare in terms of scope and coverage, interoperability, usability, community support, and integration with research tools for Social Sciences and Humanities (SSH) research?
• How can multiple ontologies be integrated into an SSH research framework using RDF and SPARQL to enhance knowledge management and data integration?
• What improvements can be made to the structure and usability of existing SSH vocabulary repositories to better serve the needs of researchers?
6 Approach
This research follows a structured four-step methodology to benchmark and compare leading ontology and vocabulary repositories for SSH research:
1. Ontology and Vocabulary Repositories Selection: Ontology and vocabulary repositories provide structured knowledge representations that enhance data discovery and integration. The most notable repositories were chosen:
• Ontology Lookup Service (OLS) – A service aggregating ontologies across multiple domains [4].
• BioPortal – Originally focused on biomedical ontologies but expanding to the social sciences [9].
• LOV (Linked Open Vocabularies) – A repository for linked data vocabularies [12].
• Social Science Thesaurus – A specialized vocabulary for social science research [5].
• BARTOC (Basic Register of Thesauri, Ontologies & Classifications) – A catalog of knowledge organization systems [7].
• CLARIAH FAIR Vocabulary Registry – A registry of vocabularies from domains used in the Social Sciences and Humanities (SSH).
6.1 Evaluation Based on Benchmarking Criteria
To evaluate these repositories, the following criteria are considered:
• Scope and Coverage – The breadth of subjects covered within SSH.
• Interoperability – Compatibility with linked data and Semantic Web technologies [6].
• Usability – The user interface and ease of access for non-technical researchers.
• Community Support and Maintenance – Frequency of updates and community engagement.
• Integration with Research Tools – Compatibility with RDF, SPARQL, and data visualization tools.
Repository Coverage (SSH) Interoperability Usability Sustainability Community En LOV Medium High High High High BioPortal Low High High High Medium FAIRsharing Medium Medium Medium High Medium VocBench High High Medium Medium High SSHOC Vocab Commons High Medium Medium Medium High Table 1: Evaluation Matrix 6.1.1    LOV (Linked Open Vocabularies) • Coverage (SSH): Medium – LOV includes several relevant vocabularies for SSH, but it lacks deep domain-specific coverage. • Interoperability: High – It fully supports linked data principles and uses standard formats like RDF and OWL. • Usability: High – The platform. is user-friendly, with a clean interface and clear navigation. • Sustainability:  High – It is well-maintained with regular updates and long- term availability. • Community Engagement: High – Active user contributions and documen- tation reflect strong community support. • Multilingual: Low – Most vocabularies are only available in English, limiting multilingual support. 6.1.2 BioPortal • Coverage (SSH): Low – BioPortal is focused primarily on biomedical ontolo- gies thus they have limited SSH-relevant content. • Interoperability: High – It uses robust ontology standards and provides API and SPARQL access. • Usability: High – The interface is highly user-friendly and includes visual- ization tools for browsing ontologies. • Sustainability:  High – It is supported by Stanford thus it shows long-term institutional support and will get regular updates. • Community Engagement: Medium – There is moderate user feedback and activity, but not SSH-specific. • Multilingual: Low – Most ontologies are monolingual and there is no function to translate content to other languages. 6.1.3 FAIRsharing • Coverage (SSH): Medium –  FAIRsharing  includes  some  SSH-related re- sources, but they are not its primary focus. • Interoperability: Medium – It provides metadata standards but lacks deeper semantic integration options. • Usability: Medium – The platform. 
is usable but not especially optimized for SSH researchers.
• Sustainability: High – Maintained as part of FAIR initiatives and regularly updated.
• Community Engagement: Medium – Some user interaction exists, but active contributions are limited.
• Multilingual: Medium – Some support exists for multilingual access, but it is not applied to all content.

6.1.4 VocBench
• Coverage (SSH): High – VocBench supports the creation of SSH-specific vocabularies and ontologies.
• Interoperability: High – It supports SKOS, OWL, and RDF, and integrates well with external semantic tools.
• Usability: Medium – It is powerful but difficult for non-technical users.
• Sustainability: Medium – Development is ongoing but depends on specific projects or institutions.
• Community Engagement: High – There is strong involvement from open-source and academic communities.
• Multilingual: High – Full support for multilingual vocabularies is built into the tool.

6.1.5 SSHOC Vocabulary Commons
• Coverage (SSH): High – Specifically designed to serve SSH domains with relevant vocabularies.
• Interoperability: Medium – Although RDF-based, some vocabularies lack detailed alignment with external ontologies.
• Usability: Medium – The interface is functional but could be more user-friendly.
• Sustainability: Medium – Continued development depends on project funding and EU infrastructure.
• Community Engagement: High – Community contributions and involvement in development are actively encouraged.
• Multilingual: High – Many vocabularies are available in multiple languages, supporting multilingual use cases.

6.2 Integration of Ontologies Using RDF and SPARQL

RDF and SPARQL will be used to integrate selected ontologies from different repositories, enabling cross-domain alignment and semantic linking within the context of SSH research.

6.3 Comparative Analysis

Each repository is evaluated against the above criteria, highlighting strengths and weaknesses.
For example, while LOV excels in linked data integration, BioPortal offers robust ontology management tools but is less SSH-focused. The Social Science Thesaurus provides rich domain-specific terminologies but has limited interoperability features. This analysis aims to provide practical recommendations to help SSH researchers select the most suitable repositories for their needs.

7 Plan

The research plan is structured around five key phases:

1. Ontology and Vocabulary Repository Selection: Identify and select representative repositories that cover diverse domains and are relevant to SSH research.
2. Evaluation Based on Benchmarking Criteria: Assess the ontology and vocabulary repositories against several key criteria: Coverage and Completeness; Semantic Consistency; Usability and Accessibility; Interoperability; Maintainability and Sustainability; Domain Specificity; Community Engagement.
3. Achieve Integration: Apply RDF and SPARQL tools to combine ontologies from different repositories, allowing cross-domain alignment and semantic linking within the context of SSH research.
4. Comparative Analysis: Evaluate each repository against the above criteria, highlighting its strengths and weaknesses and how well it fits SSH contexts.
5. Reporting and Presentation: Summarize the findings, complete the thesis report, and submit it.

8 Conclusion

The selection of an appropriate ontology repository is crucial for SSH research. This study benchmarks the leading repositories, offering insight into their suitability. For students, practical projects in ontology evaluation and integration provide valuable hands-on experience in Semantic Web applications.

References

[1] V. Atamanchuk and P. Atamanchuk. Ontological modeling in humanities. In International Scientific-Practical Conference "Information Technology for Education, Science and Technics".
Springer Nature Switzerland, 2022.
[2] K. Baclawski and T. Schneider. The open ontology repository initiative: Requirements and research challenges. In Proceedings of the Workshop on Collaborative Construction, Management and Linking of Structured Knowledge at the ISWC, 2009.
[3] T. Berners-Lee, J. Hendler, and O. Lassila. The semantic web. Scientific American, 284(5):34–43, 2001.
[4] R. G. Côté, P. Jones, R. Apweiler, and H. Hermjakob. The Ontology Lookup Service. BMC Bioinformatics, 2006.
[5] GESIS. Social Science Thesaurus. https://www.gesis.org/en/research/thesaurus, 2020.
[6] T. Heath and C. Bizer. Linked Data: Evolving the Web into a Global Data Space. Morgan & Claypool, 2011.
[7] A. Kempf et al. BARTOC: A registry of knowledge organization systems. International Journal on Digital Libraries, 2019.
[8] K. Meijer, K. H. Cluster, and M. Windhouwer. The CLARIAH FAIR vocabulary registry. In CLARIN Annual Conference Proceedings, page 158, 2024.
[9] M. A. Musen et al. BioPortal: Ontologies and integrated data resources. Nucleic Acids Research, 2012.
[10] A. Name. Title of the article. Journal Name, Volume:Pages, Year.
[11] The Hyve. Evaluation of FAIR data assessment tools, 2022. Accessed: 2024-04-15.
[12] P. Vandenbussche et al. Linked Open Vocabularies (LOV): A gateway to reusable semantic web vocabularies. Semantic Web Journal, 2017.
[13] I. I. Veršić and J. Ausserhofer. Social sciences, humanities and their interoperability with the European Open Science Cloud: What is SSHOC? Mitteilungen der Vereinigung Österreichischer Bibliothekarinnen und Bibliothekare, 72(2):383–391, 2019.


[SOLVED] Physics FA3 Student Experiment

Physics FA3 Student Experiment 2024-2025

Conditions
Technique: Student experiment
Unit: Unit 2: Linear motion and waves
Topic/s: Topic 1: Linear motion and force
Duration: 10 hours of class time
Mode / length: Written (e.g. scientific report): up to 2000 words
Individual / group: Individual response; students may collaborate to develop the methodology and perform the experiment

Context
You have completed the following practicals in class:
● Conduct an experiment to investigate the motion of objects with constant acceleration
● Conduct an experiment to investigate the acceleration due to gravity on Earth
● Conduct an experiment to investigate collisions in one dimension

Task
Write a scientific report based around your research question. You must modify (i.e. refine, extend or redirect) an experiment in order to address your own related hypothesis or question. You may use a practical performed in class, a related simulation or another practical related to Unit 2 Topic 1 (as negotiated with your teacher) as the basis for your methodology and research question. To complete this task you must:
● identify an experiment to modify
● develop a research question to be investigated
● research relevant background scientific information to inform the modification of the research question and methodology
● conduct a risk assessment and account for risks in the methodology
● conduct the experiment
● collect relevant qualitative data and/or quantitative data to address the research question
● process and present the data appropriately
● analyse the evidence to identify trends, patterns or relationships
● analyse the evidence to identify uncertainty and limitations
● interpret the evidence to draw conclusion/s to the research question
● evaluate the reliability and validity of the experimental process
● suggest possible improvements and/or extensions to the experiment
● communicate findings in an appropriate scientific genre, e.g.
report, poster presentation, journal article, conference presentation.

The following aspects of the task may be completed as a group:
● identifying an experiment
● developing a research question
● conducting a risk assessment
● conducting the experiment
● collecting data

Checkpoints
Week 1 of task: Select experiment and identify proposed modifications.
Week 2 of task: Perform experiment and process data.
Week 3 of task: Analyse and evaluate evidence.
Week 4 of task: Friday 30/05 8:25am FIDO - Draft due.
Week 5 of task: Friday 06/06 8:25am FIDO - Final due.

Authentication strategies
● You will be provided class time for task completion.
● Your teacher will collect and annotate a draft.
● You must acknowledge all sources.
● Your teacher will compare the responses of students who have worked together in groups.
● You must submit your response to FIDO (utilises Turnitin).
● You must submit a declaration of authenticity.
● You will provide documentation of your progress at indicated checkpoints.

Scaffolding
The response must be presented using an appropriate scientific genre (scientific report) and contain:
● a research question
● a rationale for the experiment
● reference to the initial experiment and identification and justification of modifications to the methodology
● raw and processed qualitative data and/or quantitative data
● analysis of the evidence
● conclusion/s based on the interpretation of the evidence
● an evaluation of the methodology and suggestions of improvements and extensions to the experiment
● a reference list.

An example of how one of the practicals could be modified to develop a research question follows. The developed research question below cannot be used for this investigation.

Practical that will be modified: Conduct an experiment to investigate the parallel component of the weight of an object down an inclined plane at various angles.
Research question: What is the relationship between the angle of inclination and the magnitude of the frictional force for a given rectangular-based wooden object on a given wooden surface?

Steps and details:
1. Identify the independent variable to be investigated: Angle of inclination.
2. Identify the dependent variable: Magnitude of the frictional force acting parallel to the inclined surface.
3. Identify the methodology to be used: A rectangular wooden object will be placed on an inclined plane. The angle of inclination will be modified and the parallel component of the object's weight will be measured using a data-logger force meter. This measured force will be subtracted from the theoretically expected value of the parallel-to-the-surface component of the weight to determine the magnitude of the frictional force acting parallel to the inclined surface.
4. Draft research question: What is the relationship between angle of inclination and the frictional force on an inclined surface?
5. Present research question to teacher for approval: What is the relationship between the angle of inclination and the magnitude of the frictional force for a given rectangular-based wooden object on a given wooden surface?
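The friction calculation in the example methodology can be sketched numerically. The mass, angle, and measured force below are invented sample values, not real measurements; the sketch computes the theoretical parallel weight component m·g·sin(θ) and subtracts the measured force from it, exactly as the methodology describes.

```python
import math

# Hypothetical sample values -- not real measurements.
m = 0.250          # mass of wooden block (kg)
g = 9.81           # gravitational acceleration (m/s^2)
theta_deg = 20.0   # angle of inclination (degrees)
F_measured = 0.65  # force read from the data-logger force meter (N)

# Theoretical parallel component of the weight: m * g * sin(theta).
F_parallel = m * g * math.sin(math.radians(theta_deg))

# Friction estimate: theoretical parallel component minus measured force.
F_friction = F_parallel - F_measured

print(f"Expected parallel weight component: {F_parallel:.3f} N")
print(f"Estimated frictional force: {F_friction:.3f} N")
```

Repeating this calculation at each angle gives the (angle, friction) pairs needed to answer the research question.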


[SOLVED] STATS 779 Professional Skills for Statisticians 2020

Department of Statistics
STATS 779: Professional Skills for Statisticians
Final Test: June 11 2020 (1 PM) – June 12 2020 (1 PM)

GENERAL INSTRUCTIONS
* Total marks = 75.
* Attempt all questions.
* You have 24 hours to complete this test; however, the test is designed to be finished within 2 hours. Partial credit will be given. If you choose to spend more than two hours, please consider how many extra marks the additional time is likely to gain. While in some professional contexts it may be necessary to spend several hours trying to fix a small mistake, it is probably not worthwhile on this assessment.
* By submitting your test answers you are declaring that the test is your own work. This means that for the 24-hour duration of the test, you confirm that you will not discuss the content of the test with anyone else, you will not give any assistance to another student taking this test and you will not receive any assistance from any person or tutoring service.
* If there is evidence you have copied your answers you will get zero marks.

SUBMISSION INSTRUCTIONS
There are two separate upload links, one for Question 1 and one for Question 2. Note that both questions have two versions, and the version you choose depends on different digits of your UPI (the last digit for Q1, the second-to-last digit for Q2). Thus there are four possible versions of the test.
* For Question 1, you should submit both the .tex and the generated .pdf files.
* Place the .tex and .pdf files in a zip file entitled final-test1YourUPI.zip and submit the zip file on Canvas under "Final Test-Q1".
* For Question 2, you should submit both the .Rmd and the generated .html files.
* Place the .Rmd and .html files in a zip file entitled final-test2YourUPI.zip and submit the zip file on Canvas under "Final Test-Q2".

1. If the last digit of your UPI is even, reproduce stationary.pdf. Submit your versions of stationary.tex and stationary.pdf as your answer to this question.
If the last digit of your UPI is odd, reproduce spectral.pdf. Submit your versions of spectral.tex and spectral.pdf as your answer to this question.

Additional instructions for reproducing stationary.pdf
a. To create an environment to typeset propositions, you may have to use the general syntax \newtheorem{envname}{caption} in the preamble, where envname is the environment name the author would like to use for this element, and caption is the heading text.
b. Use the proof environment in the amsthm package to typeset proofs.
c. The bibliography entry should be in the same stationary.tex file.

Additional instructions for reproducing spectral.pdf
a. Use the framed environment in the framed package to draw the box.
b. Add the following line to the preamble to enable the \eqnum command, used at the end of each item description to add equation numbers: \newcommand{\eqnum}{\hfill\refstepcounter{equation}\textup{(\theequation)}}
c. Use the proof environment in the amsthm package to typeset proofs.
d. The bibliography entry should be in the same spectral.tex file.

(35 marks)

2. If the second-to-last digit of your UPI is even, you will work with the file "ozdat1.csv". If the second-to-last digit of your UPI is odd, you will work with the file "ozdat2.csv". Both files contain daily measurements of ozone data from four cities in the Midwest. The measurements are in parts per billion, and are an average over the 9 AM–4 PM period when ozone tends to be highest. The days are consecutive days in the summer of 1987, and are numbered 1–89. A separate column indicates whether the measurement was taken on a weekday (M–F) or a weekend (S–S).

You are to create a file in R Markdown with the following features (each item is worth 10 marks):
a. Each of the following features should be in its own numbered section (three in total), using automatic numbering functionality. The title of the document should be in italics, and should be followed by your name and the date.
All code, messages, and unformatted output should be hidden.
b. A plot of the ozone measurements using facetting to create a separate panel for each city, in a 2×2 array. Use color AND shape to indicate weekend vs weekday. The y-axis should be labeled "Ozone in parts per billion", and the x-axis should be labeled "Day". A loess smooth should be added to each panel. The legend should be titled "Day of Week". The figure should have a caption and be able to be referred to in the text (this will be necessary in part d).
c. A table that contains the maximum ozone measurement for each city. This table should be constructed using code that produces the relevant values (i.e. if the data changed, the table could be reproduced automatically). The table should have a short caption/title that describes its contents, and suitable column headings.
d. A section with two brief sentences: "Figure [number inserted using automatic referencing] shows the pattern of surface ozone over the course of the summer for four cities. The minimum ozone reading over all cities and all days was [number produced by appropriate code, including rounding], observed in [name of appropriate city extracted by code] on day [number extracted by code]". (Hint: the function which.min may be helpful.)
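A minimal sketch of the theorem and proof setup that Question 1's instructions describe, assuming an article-class document with amsthm loaded; the proposition and proof text are placeholders, not content from stationary.pdf:

```latex
\documentclass{article}
\usepackage{amsthm}

% Define a "prop" environment with the heading "Proposition",
% following the \newtheorem{envname}{caption} syntax.
\newtheorem{prop}{Proposition}

\begin{document}

\begin{prop}
A placeholder proposition; the real statement comes from stationary.pdf.
\end{prop}

\begin{proof}
A placeholder proof, typeset with the amsthm proof environment,
which adds the "Proof." label and a closing QED square automatically.
\end{proof}

\end{document}
```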


[SOLVED] MAR205 Autumn Session 2025 Project Brief

MAR205 Autumn Session 2025 Project Brief

Cancer screening involves testing for signs of cancer or precancerous conditions in people without obvious symptoms. BreastScreen Australia is one of Australia's three population-based cancer screening programs (the other two being bowel screening and cervical screening). The service targets asymptomatic women aged 50-74 years, with the national program achieving a 50% participation rate in 2021-2022 against a target screening rate of 70% (Department of Health and Aged Care, 2024).

In NSW, breast cancer screening is delivered by BreastScreen NSW. The state-level screening rate for 2021-2023 is 46.4% (Cancer Institute NSW, 2024), which is below the national screening rate and well below the target screening rate. Among women from culturally and linguistically diverse backgrounds, the screening rate is even lower, at 34.7%.

There are a variety of reasons women choose not to screen with BreastScreen NSW. Some women screen with private health care providers, others choose not to screen at all, while some women are lapsed users of the service, i.e. they have used the service before but do not return for their biennial screen.

Research in the services marketing literature points to interpersonal, technical, administrative, and environment quality as factors that influence service users' perceptions of the quality of a service (Dagger et al., 2007). This in turn influences satisfaction with the service, and intentions to use the service again in the future. Other marketing research, specifically on breast cancer screening, suggests that women are more likely to screen in the long term if they perceive value from their screening service experiences (Zainuddin et al., 2013).

Your task is to develop a marketing research proposal (Assessment 1) for BreastScreen NSW, then develop a survey instrument for the purposes of conducting primary research (Assessment 3).
You are then required to analyse the data you have collected and develop a set of recommendations for BreastScreen NSW based on your results (Assessment 4) to help the organisation increase their screening rates.

References
AIHW (2024). BreastScreen Australia monitoring report 2024. Available at: https://www.aihw.gov.au/reports/cancer-screening/breastscreen-australia-monitoring-report-2024/summary
Cancer Institute NSW (2024). Breast screening participation rates. Available at: https://www.cancer.nsw.gov.au/what-we-do/nsw-cancer-plan/performance-index/breast-screening-participation-rates
Dagger, T.S., Sweeney, J.C., & Johnson, L.W. (2007). A Hierarchical Model of Health Service Quality: Scale Development and Investigation of an Integrated Model. Journal of Service Research, 10(2), 123-142.
Department of Health and Aged Care (2024). BreastScreen Australia National Policy and Funding Review. Available at: https://www.health.gov.au/our-work/breastscreen-australia-national-policy-and-funding-review#goals
Zainuddin, N., Russell-Bennett, R., & Previte, J. (2013). The value of health and wellbeing: an empirical model of value creation in social marketing. European Journal of Marketing, 47(9), 1504-1524.
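As a toy illustration of the kind of summary Assessment 4's analysis might start with, the sketch below computes mean scores per service-quality dimension, using the four dimensions of the Dagger et al. (2007) model cited in the brief. All response values are invented 7-point Likert ratings, not real survey data.

```python
from statistics import mean

# Hypothetical 7-point Likert responses from a small survey;
# dimensions follow Dagger et al.'s (2007) health service
# quality model. All numbers are invented for illustration.
responses = {
    "interpersonal quality":  [6, 5, 7, 6, 5],
    "technical quality":      [5, 5, 6, 4, 5],
    "environment quality":    [4, 3, 5, 4, 4],
    "administrative quality": [3, 4, 4, 3, 5],
}

# Mean score per dimension, lowest first, to flag where
# perceptions of the service are weakest.
scores = sorted((mean(vals), dim) for dim, vals in responses.items())
for score, dim in scores:
    print(f"{dim}: {score:.2f}")
```

Ranking dimensions this way points recommendations at the weakest-perceived aspects of the service, which is the link the brief draws between perceived quality, satisfaction, and intention to rescreen.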
