Computer Science and Engineering
CS 6083, Spring 2025
Project #2 (due May 14)
April 28, 2025

In the second part of the project, you have to create a web-based user interface for the online pinboard service database designed in the first project. In particular, users should be able to register, create a profile, log in, create boards, add pictures to their boards via pinning, create streams to follow other boards, repin pictures, like pictures, send and answer friend requests, etc., as described in the first project. Note that you have more freedom in this second project to design your own system. You still have to follow the basic guidelines, but you can choose the actual look and feel of the site, and offer other features that you find useful. In general, design an overall nice and functional system. There will be limited extra points available for a nice and smooth design.

If you are doing the project in a group, note that both students have to attend the demo and know ALL details of the design. So work together with your partner, not separately. Also, slightly more will be expected if you are working in a team. Start by revising your design from the first project as needed. In general, part of the credit for this project will be given for revising and improving the design you did in the first project.

A note about the interface you are expected to build for this project. It is of course central that users be able to see a board with pictures: basically a web page containing many pictures arranged from top to bottom, maybe with their names, tags and numbers of likes, and maybe as thumbnails with several pictures in each row. (The pictures in a follow stream could be displayed in the same way.) To do this, you may want to look for libraries that allow you to create thumbnails from pictures. Also, repinning, liking, and commenting could be done via appropriate buttons for each picture, but this is up to you. Users should be able to perform all operations via a standard web browser.
This should be implemented by writing a program that is called by a web server, connects to your database, then calls appropriate stored procedures that you have defined in the database (or maybe sends queries), and finally returns the results as a web page. You can implement the interface in several different ways. You may use frameworks such as PHP, Java, Ruby on Rails, or VB to connect to your backend database. Contact the TAs for technical questions. There should also be ways for users to search certain streams, or all streams on the site, by typing in keywords that are matched against tags or names.

Every group is expected to demo their project to one of the TAs at the end of the semester. If you use your own installation, make sure you can access it during the demo. One popular choice is to use a local web server, database, and browser on your laptop, which means you need to bring your own laptop to the demo. (In this case, your project does not have to be available on the public Internet; it is enough to have it run locally on your laptop.) Also, one thing to consider is how to keep state for a user session and how to assign URLs to content: it might be desirable if users could bookmark a picture, a board, a user profile, or the results of a search.

Grading will be done on the entire project based on what features are supported, how attractive and convenient the system is for users, your project description and documentation (important), and the appropriateness of your design in terms of overall architecture and use of the database system. Make sure to input some interesting data so you can give a good demo. Describe and document your design. Log some sessions with your system. Bring your description (documentation) and the logs in hardcopy to the demo. You should also be able to show your source code during the demo.
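The request-to-query-to-HTML flow described above can be sketched in a few lines. The sketch below uses Python with an in-memory SQLite database purely for illustration; the table names (board, pin) and columns are assumptions, not the required schema, and your own implementation will instead connect to the database and stored procedures you designed in the first project.

```python
import sqlite3
from html import escape

# Assumed toy schema for illustration only; your Project 1 schema will differ.
SCHEMA = """
CREATE TABLE board (bid INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE pin   (pid INTEGER PRIMARY KEY, bid INTEGER REFERENCES board,
                    pic_url TEXT, caption TEXT, likes INTEGER DEFAULT 0);
"""

def render_board(conn, bid):
    """Query all pins on one board and return them as a simple HTML page."""
    cur = conn.execute(
        "SELECT p.pic_url, p.caption, p.likes, b.name "
        "FROM pin p JOIN board b ON p.bid = b.bid "
        "WHERE b.bid = ? ORDER BY p.pid", (bid,))
    rows = cur.fetchall()
    if not rows:
        return "<html><body><p>Empty board.</p></body></html>"
    board_name = escape(rows[0][3])
    items = "\n".join(
        f'<div class="pin"><img src="{escape(url)}" alt="{escape(cap)}">'
        f"<p>{escape(cap)} ({likes} likes)</p></div>"
        for url, cap, likes, _ in rows)
    return f"<html><body><h1>{board_name}</h1>\n{items}\n</body></html>"

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.executescript(SCHEMA)
    conn.execute("INSERT INTO board VALUES (1, 'Holiday photos')")
    conn.execute("INSERT INTO pin VALUES (1, 1, 'beach.jpg', 'Beach', 3)")
    print(render_board(conn, 1))
```

In a real deployment this function would be called from whatever web framework you choose, with the board id taken from the request URL; note the use of a parameterized query (the `?` placeholder) rather than string concatenation, which matters for any user-supplied input such as search keywords.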
The documentation typically consists of 15-20 pages of carefully written text that describes and justifies your design and the decisions you made during the implementation, and explains how a user should use your system. Note that your documentation and other materials should cover both Projects 1 and 2, so you should modify and extend your materials from the first project appropriately.
TRP 216 Urban Analytics Level 2
Assessment brief: GIS Technical Report (Individual: 75%)

Introduction
This assessment is aimed at developing your spatial analysis, mapping and critical analysis skills. The principal output is the production of an individual technical report that illuminates a selected planning problem or challenge.

Objectives
This assessment is designed to develop and test your achievement of all of the module objectives:
● Objective 2: Appreciate basic ethical, scientific and technological issues related to the use of spatial and aspatial secondary data for urban planning.
o Your work should be rooted in a planning-related problem or challenge, leading to an understanding of how spatial data might be used to illuminate discrete problems or challenges and why certain approaches and techniques are adopted over others.
● Objective 4: Use appropriate software to perform basic quantitative and spatial methods of data analysis to answer problems in the context of planning and understand their implications.
o Your work should clearly show how the application of a range of spatial analytical techniques to socioeconomic data using GIS contributes towards understanding of your chosen planning-related topic/problem.
● Objective 5: Visualise and critically interpret secondary data using appropriate graphical and spatial methods.
o Your work should demonstrate the effective use of data and data visualisation as evidence for planning-related issues.

The Task
You are a consultant who has been commissioned by a client to undertake a spatial analysis to evidence a planning problem or challenge. As part of this exercise you should choose ONE of the project briefs and associated data package from those that will be made available in the Assessment 2 folder on Blackboard.
Using some or all of the components of this data package, you will develop a technical report in which you will:
● Provide a clear introduction and background to your chosen planning/policy problem or challenge.
● Select components of the data package that you deem relevant to analysing the chosen planning/policy problem.
● Provide a technical summary of the methodology you used to develop the approach taken, and the rationale for doing so.
● Provide a written commentary on the analysis and maps to aid the client's understanding of the planning problem.
● Suggest potential policy implications and/or areas in need of further analysis.

Assignment Information
“Written report”: This means that your report should be handed in as a document written on a computer. It can be helpful to divide it into sections with sub-headings (e.g. Introduction, Conclusion).
“1,500 words”: The report should be 1,500 words (+/- 10%). The range is therefore 1,350 to 1,650 words. If it’s shorter than this, you will have to say a little more. If it’s longer than this, you’ll have to cut it down.
Font size, type, etc.: You can use any font you like, so long as it is easy to read and a minimum of size 12.
Paper size: You submit your work electronically, but try to use A4 portrait size as a standard setting.
Use of images and charts: In academic work, images include charts, maps and photos. They should be labelled as a Figure, numbered consecutively (e.g. Figure 1, Figure 2, Figure 3) and referred to in the text. You should give each figure a caption that gives some indication of the content and acknowledges the source (this is very important).
Use of tables: The use of tables can also be effective. If you use a table, make sure it is referred to in the text and labelled correctly (e.g. Table 1, Table 2, Table 3). You should give the table a caption that gives some indication of the content. As with images, always acknowledge the data source within the caption.
Hydrosystems Engineering (EACEE 3250 / 4250) Spring 2025
Homework #3 (Due Monday April 7th, 11:59 pm)

Homework Guidelines: Your solutions to homework assignments will be submitted and graded through Gradescope (see the Gradescope tab on your Courseworks dashboard). You will have two options for submitting your work in Gradescope, either: 1) upload individual scanned images of your handwritten pages (e.g., using your phone), one or more per question; or 2) upload a single PDF that you create which contains the whole submission (e.g., merge files on your computer or phone with software of your choice). Please use the naming convention Lastname_HWxx.pdf when submitting your homework assignment. You may choose to type up your calculations, in which case show all your steps and highlight your solution. Note: During the upload, Gradescope will ask you to mark which page/s each problem is on (see example here). It is important that you follow that step for grading purposes. It is acceptable to discuss problems with your colleagues, and questions are encouraged during office hours, but all work must be done independently. Make sure to clearly show all work on each problem and that your solutions are presented in an orderly fashion. It is your responsibility to make your solutions easy to grade.

Topics/Chapters covered:
• Chapter 8: Evaporation and transpiration processes
• Chapter 7: Soil/porous media properties, unsaturated zone, infiltration

Problem #1 Evaporation (20 pts)
Estimate evaporation from a lake surface (in [mm/day]) if the water temperature is 20 °C and the waves are 5 cm high. A micro-meteorological station with a 2 m tower measures an air temperature of 15 °C, an air relative humidity of 80% and a wind speed of 1 m/s. Assume a nominal surface air density ρ = 1.2 kg/m^3 and a surface air pressure P = 10^5 Pa.

Problem #2 Evapotranspiration (25 pts)
Some plants may be more efficient in photosynthesis when the atmosphere is enriched with CO2.
Increased CO2 levels may also raise air temperatures. An experiment to test the potential impacts of greenhouse gases on vegetation is performed in a chamber. The air temperature is raised by 2.0 [°C] under the experimental conditions listed below. These environmental conditions are maintained close to these nominal values. Leaf porometer (see picture) measurements of vapor release from the leaf surfaces indicate a 10% increase in plant transpiration. What is the change in the stomatal resistance of the plants in the chamber before and after the CO2 enrichment? Note: The before-enrichment case is referred to as the "Control" case in the experiment.

Experimental Conditions:
• Net available energy Rn - G = 200 [W m^-2]
• Air relative humidity = 80%
• Air aerodynamic resistance r = 30 [s m^-1]
• Air pressure P = 10^5 [Pa]
• Moist air density = 1.3 [kg m^-3]
• Air temperature and latent heat flux (Control): T = 23 [°C] and LE = 200 [W m^-2]

Problem #3 Soil Properties (25 points)
A soil sample is analyzed via a sieve analysis and determined to have the following size distribution: 40% sand and 30% silt.
a) Identify the soil texture for this soil sample, and provide its porosity, saturated hydraulic conductivity, and saturated matric head based on the appropriate table in the Margulis textbook.
b) Plot the matric head vs. volumetric soil moisture and hydraulic conductivity vs. volumetric soil moisture trends for this soil. You may use your plotting software of choice.
c) Compute the matric head and conductivity at a volumetric soil moisture content of θ = 0.40. What is the sign of the matric head? Explain why.
d) On the same figure as (b), plot the matric head vs. volumetric soil moisture and hydraulic conductivity vs. volumetric soil moisture trends for a soil with 20% sand and 30% silt. Comment on the differences in the plot trends.

Problem #4 Infiltration (30 points)
The Philip and Green-Ampt equations provide models for “infiltration capacity” (or “potential infiltration”).
a) What assumptions are used in the development of the Philip and Green-Ampt infiltration capacity models? Clearly explain the difference between the actual infiltration rate and the infiltration capacity (potential infiltration).
b) Under what specific conditions are the two models the same?
c) A snowmelt event lasting 4 hours occurs with a uniform melt flux intensity of 0.65 mm/hr. The volumetric soil moisture at this site is measured to be 0.02. Assume the following soil hydraulic properties: saturated hydraulic conductivity of 0.00125 cm hr^-1, saturated matric head of -21.8 cm, porosity of 0.39, and Brooks-Corey parameter b of 4.9. Is it possible that infiltration excess runoff will occur in the watershed given the same snowmelt event described above? Justify your response. Using both the Philip and Green-Ampt models, compute the time to ponding under these conditions. Will ponding occur for these conditions? Explain your reasoning.
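As a numerical sanity check for the Green-Ampt part of (c), the time-to-ponding relation can be coded directly. The sketch below assumes the common textbook forms fp = Ks(1 + |ψf|Δθ/F) and, for a constant supply rate i > Ks, Fp = Ks|ψf|Δθ/(i - Ks) with tp = Fp/i; it further assumes the wetting-front suction can be approximated by the saturated matric head and Δθ = porosity - initial moisture. Check these assumptions against the exact forms in the Margulis textbook before relying on the numbers, and note that the Philip model requires a separate derivation not shown here.

```python
# Green-Ampt time to ponding under a constant supply rate (illustrative sketch).
# Assumed textbook relations: F_p = Ks*|psi_f|*dtheta / (i - Ks), t_p = F_p / i.

def green_ampt_time_to_ponding(i, Ks, psi_f, porosity, theta_i):
    """All lengths in mm, rates in mm/hr; psi_f is the (negative) wetting-front head."""
    if i <= Ks:
        return float("inf")            # supply never exceeds capacity: no ponding
    dtheta = porosity - theta_i        # moisture deficit behind the wetting front
    Fp = Ks * abs(psi_f) * dtheta / (i - Ks)   # cumulative infiltration at ponding
    return Fp / i                      # hours until ponding under constant supply

# Values from Problem 4(c), converted to mm and mm/hr:
tp = green_ampt_time_to_ponding(i=0.65, Ks=0.0125, psi_f=-218.0,
                                porosity=0.39, theta_i=0.02)
print(f"Green-Ampt time to ponding ~ {tp:.2f} hr")  # compare with the 4 hr melt duration
```

Comparing the printed time to ponding with the 4-hour event duration answers whether ponding (and hence infiltration excess runoff) can occur under these assumptions.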
Numerical Methods 2024/5: Individual Project

• This work will count for 50% of your final mark for Numerical Methods.
• You must answer the question assigned to you. No marks will be awarded for answering other questions.
• The mark breakdown is as follows:
  Analysis: 60
  Implementation & testing: 20
  Good programming practice: 15
  Overall presentation: 5
  Total: 100
• The work does not require the use of external sources; any sources you do use (aside from the course materials) must be properly cited.
• Store all files on OneDrive or the M drive to protect against loss.
• Save your Maple work regularly. Executing incorrect code may cause Maple to become trapped in an infinite loop. If this happens, you can try pressing the interrupt button, but you may be forced to close the application and reload your work.
• There is no requirement to type your analytical work; scans of handwritten work are equally acceptable provided they are properly organised and readable.
• Submit work as a single pdf file. See the project guidance notes for instructions on merging and rearranging pdf files.
• Your final submission must include a pdf export of your Maple worksheet. If you work the numerical calculations into a report (e.g. by copying parts into MS Word and adding appropriate explanations), you still need to include the Maple worksheet; just add it as an appendix at the end.
• Invalid submissions (e.g. files in formats other than pdf) will be deleted. Students who make invalid submissions will be given another chance to submit, but this will be treated as late, and subject to standard university penalties (5% deduction for each day, and a mark of zero after five days).

This problem is concerned with integrals of the form

(*)  I = ∫_0^1 h(x) ln(x) dx

Here, h(0) exists but may not be equal to zero, so f(x) = h(x) ln(x) can be unbounded in the limit x → 0.

(a) Consider the case in which h(x) = 1 for all x, so that the integrand is just f(x) = ln(x).
(i) Obtain a simplified form of the general formula for quadrature error in this case. Write the error for the whole interval in terms of the function. You may assume that S1 = 0, since this is the case for all nontrivial quadrature rules.
(ii) Include the following table in your submission, and fill in the results with values accurate to at least three significant figures.
(iii) Suppose I is estimated twice using the same quadrature rule, first with N subintervals of equal size and then with 2N. What relationship do you expect the errors in the two estimates to (approximately) satisfy? Justify your answer.
(iv) What do you think is the cause of this?

(b) Using the three-point Gaussian quadrature procedure from the NumericalMethods package (or solutions7.mw), or your own version, estimate I for the case with N = 20 and then with N = 40. Obtain numerical estimates for the absolute errors in your approximations. Are the results consistent with your analysis in part (a)? Why (or why not)?

(c) (i) Consider the monic cubic polynomial u(x) = x^3 + Ax^2 + Bx + C, where A, B and C are constants. Use integration by parts to show that if r ≥ 0 then the stated identity holds. In your answer you may use without proof the fact that lim_{x→0+} x ln x = 0.
(ii) Find exact values for A, B and C such that the above integral vanishes if r = 0, 1 or 2.
(iii) Let x1, x2 and x3 represent the roots of u(x) (with A, B and C as in part (ii)), arranged in ascending order. Find weights w1, w2 and w3 such that

∫_0^1 h(x) ln(x) dx ≈ w1 h(x1) + w2 h(x2) + w3 h(x3)

Use Maple to avoid boring algebra; you should find that w3 = -0.0946...

(d) If h(x) is a polynomial of degree d, what is the maximum value for d such that the integral in (*) is always integrated exactly by the quadrature rule from part (c)? Justify your answer. Hint: the justification is very similar to a result presented in the lecture notes. You don't need to write out the relevant argument in full.

(e) (i) Write a Maple procedure that approximates integrals of the form (*) using the quadrature rule derived in part (c).
The procedure should take as its argument a function h, and return the resulting estimate as its result. The procedure should not recalculate the nodes and weights; it should use values stored to at least ten correct significant figures.
(ii) Test your procedure with the function h from part (b). Obtain a numerical estimate for the absolute error in this approximation and find the approximate number of subintervals needed to achieve the same level of accuracy using the three-point Gaussian rule.
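The error-ratio behaviour asked about in (a)(iii) can be checked numerically. The sketch below is in Python rather than Maple, purely as a language-neutral illustration (the NumericalMethods package is not assumed): it applies composite three-point Gauss-Legendre quadrature to the integral of ln(x) over [0, 1], whose exact value is -1, with N and 2N subintervals and prints the resulting error ratio. The standard nodes ±sqrt(3/5), 0 and weights 5/9, 8/9, 5/9 are used.

```python
import math

# Standard 3-point Gauss-Legendre nodes and weights on [-1, 1].
NODES = (-math.sqrt(3 / 5), 0.0, math.sqrt(3 / 5))
WEIGHTS = (5 / 9, 8 / 9, 5 / 9)

def composite_gauss3(f, a, b, n):
    """Composite 3-point Gauss rule with n equal subintervals of [a, b]."""
    h = (b - a) / n
    total = 0.0
    for k in range(n):
        mid = a + (k + 0.5) * h          # midpoint of the k-th subinterval
        total += sum(w * f(mid + 0.5 * h * x) for w, x in zip(WEIGHTS, NODES))
    return total * h / 2                 # scale node sums to subinterval width

exact = -1.0                             # integral of ln(x) over [0, 1]
e20 = abs(composite_gauss3(math.log, 0.0, 1.0, 20) - exact)
e40 = abs(composite_gauss3(math.log, 0.0, 1.0, 40) - exact)
print(e20, e40, e20 / e40)
```

Because the integrand is singular at x = 0, the error here decays far more slowly than the usual high-order rate for smooth integrands, which is exactly the effect parts (a)(iii)-(iv) ask you to analyse and explain.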
FIT5226 - Individual Assignment
Multi-agent learning of a coordination problem

This document describes the FIT5226 project, which is your individual assignment. It is worth 50% and is due on May 5, 2025. Note that there is an option to submit part of the assignment early that makes variations of the task accessible (see below; read the instructions carefully and completely now).

“Give me six hours to chop down a tree and I will spend the first four sharpening the axe.” — Commonly misattributed to Abraham Lincoln

Tasks
You will write code for multiple agents in a square grid world to learn a simple transport task that requires coordination between the agents. You are free to use Deep-Q or Tabular-Q as you see fit. You are not allowed to use any learning methods (e.g. heuristics) beyond Q. Your agents still have the same four actions that they can execute: “move north/south/west/east”. A “wait” action in which the agent remains on the same grid square is not allowed. As before, there are two distinct cells A and B. The cells A and B can be in any location and agents are allowed to observe both locations. Each agent starts at one of the two locations A or B (during training, you are free to choose or randomise which one). As before, items need to be transported from A to B. Pickup at A and dropoff at B happen automatically when reaching the respective location, without the agent having to take a specific action. A and B are the same for all agents (but allocated randomly for each new training and test run). Instead of performing a single delivery, agents now need to learn to shuttle indefinitely between A and B, continuing to deliver new items. (The supply at A and the demand at B are unlimited.) In other words, the task is not episodic anymore; the agents are learning an infinite behaviour. Most importantly, the agents now have to learn to coordinate their paths in order to avoid collisions!
More specifically, they have to avoid head-on collisions between different agents. To simplify matters, we define a head-on collision as one that occurs between an agent moving from A to B and another agent moving from B to A (see red cell in the diagram). It is permissible for multiple agents to step on the same cell if and only if all move from A to B (or all move from B to A). Collisions in locations A and B are disregarded. To keep the learning time within limits, we work on a small 5x5 grid as before. We use 4 agents (you must not vary these numbers, as otherwise the collision counts in Table 2 will change). All agents are allowed to share the same DQN or Q-table, or use their individual q-table/network as you see fit. The agents may observe their own location, the locations of A and B, and, of course, whether they are carrying an item. Note that you can “buy” additional sensors to observe more information if you find this helpful (see below). All agents normally have to perform their actions sequentially in a random order.

Simplifying the task by “purchasing” options
You have the option to balance aiming for higher performing agents with the complexity of the problem structure. The performance that your agents reach influences the marks you will receive for your solution, and you can “purchase” options to simplify the task structure so that it becomes easier to train high performing agents. The catch, however, is that you will have to “pay” for each option that you select with points. You can compensate for the “purchase price” by training a better performing agent that is rewarded with “performance points”, or you can solve the unmodified task and settle for a lesser performance. The table below details the options available and their impact on marks received.
Your raw mark for the implementation categories “Solution and Functionality” and “Implementation and Coding” of the rubric will be scaled by a factor of 1 - (C - B)/6, where C is the sum of costs for your chosen options and B is the sum of points for the substantiated performance. In other words, for each 2 points of purchase cost that are not compensated for by outstanding learning performance, you will lose ⅓ of the marks in these categories.

Table 1: Options available

• Sensors / State of neighbouring cells (Cost: 2): You may augment the observation space for all agents with the occupancy state of all cells or chosen cells in the agent's immediate 8-neighbourhood (unoccupied, occupied).
• Sensors / State of neighbouring cells checked for agents of opposite type (Cost: 3): As per the previous entry, but only cells that contain an agent going in the opposite direction (as defined above) will be marked as occupied.
• Coordination / Central clock (Cost: 1): This allows you to coordinate the update schedule of your agents. Instead of having to update all agents in random order (as described above), you can update them in round-robin fashion or any other order that you determine.
• Training conditions / Off-the-job training (Cost: 2): Instead of having to learn with each episode starting from random locations for all agents and A, B, you can define a schedule for the configuration at the start of training episodes.
• Training conditions / Staged training (Cost: 3): Instead of letting the agents learn everything in a single type of training run, you may break the training into different phases (e.g. with different grid sizes, different numbers of agents, different penalty schemata, etc.). Q-tables or q-networks may be passed between the stages.
• Setup / Fixed delivery location B (Cost: 4): Instead of having to service an (observable) random delivery location, the target location B is always in the bottom right corner of the grid. However, if you choose this option, the agents can no longer observe the location of B. Instead, they have to discover it.

Table 2: Performance points

You can receive up to 2 performance points in each of two categories, so that a maximum total of 4 is possible. To receive these, you must provide code that measures the claimed performance! The two categories are:
• Performance level after training (percentage of scenarios solved in less than 20 steps and without any collisions)
• Total number of collisions occurring during training to the claimed performance level
>95% 85%
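The tabular Q-learning core underlying this assignment can be sketched compactly. The fragment below is a deliberately stripped-down illustration for a single agent learning to reach a fixed goal cell on a 5x5 grid: it omits everything assignment-specific (multiple agents, item shuttling, collision penalties, the purchasable options), and the reward values and hyperparameters are illustrative choices, not part of the brief.

```python
import random

N = 5                                          # grid size, as in the brief
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # move north/south/west/east
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1              # illustrative hyperparameters

def step(state, action, goal):
    """Apply one move; the grid walls clamp the agent inside the world."""
    r, c = state
    dr, dc = ACTIONS[action]
    nxt = (min(max(r + dr, 0), N - 1), min(max(c + dc, 0), N - 1))
    reward = 10.0 if nxt == goal else -1.0     # illustrative reward scheme
    return nxt, reward, nxt == goal

def greedy(Q, s):
    """Pick the action with the highest current Q estimate."""
    return max(range(4), key=lambda a: Q.get((s, a), 0.0))

def train(episodes=2000, goal=(4, 4), seed=0):
    rng = random.Random(seed)
    Q = {}                                     # (state, action) -> value
    for _ in range(episodes):
        s = (rng.randrange(N), rng.randrange(N))
        done = s == goal
        while not done:
            # epsilon-greedy action selection
            a = rng.randrange(4) if rng.random() < EPS else greedy(Q, s)
            s2, r, done = step(s, a, goal)
            # one-step Q-learning (off-policy TD) update
            target = r + (0.0 if done else
                          GAMMA * max(Q.get((s2, a2), 0.0) for a2 in range(4)))
            Q[(s, a)] = Q.get((s, a), 0.0) + ALPHA * (target - Q.get((s, a), 0.0))
            s = s2
    return Q
```

Extending this skeleton toward the actual task means enriching the state (carrying flag, A/B locations, any purchased sensors), replacing the terminal goal with the A-to-B shuttle reward, and stepping all four agents per time tick in random order while counting head-on collisions.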
DECO 1400: Web Design
Website Implementation

Key Information:
Website Source Code & Poster:
• Due: 30 May 2025, 4:00pm
• Weighting: 60%
• Submission: via submission link on Blackboard (must include cover sheet)
Website Presentation:
• Due: Week 13, during your usual practical session
• Submission: in class
Demo:
• Due: Week 11, during your usual practical session
• Submission: in class

Website Implementation & Poster (60%)
You should use HTML5, CSS and JavaScript to implement your website (4-8 web pages). The website should be responsive, and work on tablet and mobile devices as well. The topics you can choose from: your favourite holiday destination, muffin recipes, cars, photography, portfolio. Alternatively, you may choose a topic of your own. Feel free to discuss the topic with your teaching team or me, if you are unsure. The sky is the limit! :-) You may continue working on the same topic you chose for the Web Design assessment or pick a new topic, if you are not able to build on your design ideas or have new ideas. Please feel free to discuss your topic with the teaching team.

Your website files & folders should follow the structure below:
• index.html
o Your home page file should always be named index.html, stored at the root.
• [...].html
o Include all other HTML files for your website.
• css/
o style.css: Your main CSS file should be named style.css.
o Where possible, and to improve portability of your website files, ensure any CSS files you've used with JavaScript plugins are downloaded locally and placed in this folder (if you're using any plugins).
• images/
o All images should be contained in this folder.
• js/
o script.js: Your main JavaScript file should be named script.js.
o Where possible, and to improve portability of your website files, ensure any JavaScript files you've used with JavaScript plugins are downloaded locally and placed in this folder (if you're using any plugins).
Also, download a copy of the jQuery JavaScript file and place it in this folder as well (if you're using jQuery for this assignment).
• files/
o All other files (e.g. poster, document containing your reference list, README) should be contained in this folder.

Zip the folder, name it FirstnameLastnameStudentID.zip (e.g. ClaireZhao123456.zip) and submit it on Blackboard by 30 May. You are not allowed to use a website builder for this project. Use of 3rd-party frameworks such as Bootstrap, React etc. is allowed but discouraged. There are no requirements for back-end development in this course. You are welcome to try it out, if you wish, but that's optional and only for those students who have prior experience.

Poster:
Please also prepare an A1-sized poster outlining a brief overview of your website and the potential audience, your low- and high-fidelity design prototypes, screenshots of the website, what lessons you learnt along the way (reflection) and future work (extra features that can be added, enhancements that can be made and/or things you would have done differently, if you had more time). The poster should be submitted on Blackboard by 30 May. No need to print the poster. These links might be useful, if you are not sure how to design a poster template:
https://ugs.utexas.edu/our/poster/templates
https://ugs.utexas.edu/our/poster/samples

In-class Website Presentation - Week 13:
You will have a maximum of 5 minutes to present your website to your demonstrators during your usual practical session in Week 13. That includes presenting a live/recorded demo of the website. Please do not go over time. Work in progress must be demoed to teaching staff during Week 11. Your website does not need to be completed by Week 11. You might not have responsive design implemented yet, or some pages might still be work in progress, but you need to demonstrate progress towards completion.
The teaching team will monitor students' progress and ask questions to ensure that students are completing their own work. As this is an identity-verified assessment, you must have your student ID with you.

Passing the course
In order to pass the course as a whole, students must pass the Website Implementation assessment with a final grade of at least a Pass (4). Failure to meet this requirement will result in the final grade being capped at Fail (3), regardless of performance in other assessment items. Please refer to the Course Profile for more information.

Plagiarism
Please note this is an individual assessment and it must be your own, original work. If you are using any content from the Internet, or course content, make sure you include the reference in the comment section of your code. Alternatively, a reference list can be submitted as a .doc file with the source code. Plagiarism is considered a serious offence at UQ. Failure to declare the distinction between your work and the work of others will result in academic misconduct proceedings. You may include any work you produced for practical sessions (and acknowledge the source); however, you must not use another student's work or copy and paste from the Internet without referencing it. Check with your teaching team if you need further clarification, or post your questions on the Ed Discussion forum. For details about extensions or late submissions, please refer to the Assessment section of the course profile.
Master of Business Administration Assignment Submission Form
Module Code: MMN7031SR
Module Title: Global Strategy and Innovation
Assessment Title: Assessment 1: Group Presentation
Assessment due date: April 2025

MN7032SR Global Strategy and Innovation, Academic Year 2025/26
Assessment 1, Group Presentation (3 to 4 members)
PowerPoint slides: 10 to 15 slides
First Marker:
Second Marker:
Title of presentation: Group Presentation on The Virgin Group

Assessment criteria (level of achievement to be recorded by the 1st and 2nd markers):
• An introduction of The Virgin Group (10 marks): Analyse and summarise the Virgin Group and present an executive summary.
• Past and Current Performance (20 marks): An analysis report on revenue and profit (by products/regions), production strategy, R&D activities, marketing strategy, and financial report (return on capital and shareholders).
• Competitive Position (20 marks): An analysis report on competitors, covering financial performance, production capacity, marketing strategy, and a prediction of their future strategy.
• Your strategy (30 marks): A business plan consisting of vision and mission, market and sales forecast, financial forecast, product roadmap and R&D strategy, manufacturing and suppliers, and ESG strategy.
• Group Reflection (10 marks): Provide a report of your group members' reflections.
• Presentation (10 marks): Structure and format; in-text citation and references.
Total marks

Areas for improvement:
From First Marker: Knowledge and understanding; Analysis and evaluation
From Second Marker: Knowledge and understanding; Analysis and evaluation
Agreed Mark
First marker's marks/date:
Second marker's marks/date:
COMP5318/COMP4318 Machine Learning and Data Mining, Semester 1, 2025
Assignment 2: Image Classification

Key information

Deadlines
Submission: Monday week 11 (12 May), 11.59pm

Late submissions policy
Late submissions are allowed up to 3 days late. A penalty of 5% per day late will apply. Assignments more than 3 days late will not be accepted (i.e. will get 0 marks). The day cut-off time is 11:59pm.

Marking
This assignment is worth 25 marks = 25% of your final mark. It consists of two components: code (10 marks) and report (15 marks). A marking guide for both the code and report is included at the end of this document. The assignment can be completed in groups of 2 or 3 students. No more than 3 students are allowed. See the submission details section for more information about how to submit.

Submission
Three files are required to be submitted in the relevant submission portals on Canvas:
- Your report as a .pdf file
- Your Jupyter notebook as a .ipynb file
- Your Jupyter notebook as a .pdf file
A pdf of your Jupyter notebook can be generated using File > Download as > PDF, or Print Preview > Save as PDF, or File > Download as > HTML, then open the html file and save it as pdf.
Name your files with the following format:
- Report: A2-report-SID1-SID2-SID3.pdf
- Code: a2-code-SID1-SID2-SID3.ipynb and a2-code-SID1-SID2-SID3.pdf
where SID1, SID2 and SID3 are the SIDs of the students in your group. Please do not include your names anywhere in your submissions or the file names, since the marking is anonymous.
Before you submit, you need to create a group in Canvas. Under the “People” page on Canvas, select the “A2 groups” tab. You and your group partners should choose one of the empty groups listed under this tab and join it. Groups have a maximum of 3 members. The assignment should be completed in groups of 2 or 3 students. Important: The "A1 group-set1" and "A1 group-set2" registration is for Assignment 1; the "A2 groups" registration is for Assignment 2.
You need to register a group under "A2 groups" even if you work with the same partner as in Assignment 1. Please be careful as otherwise your mark may not be recorded correctly. There may also be a mark deduction for not following the instructions. Then you need to submit your assignment on behalf of the group in the corresponding submission box. Only one student from the group needs to submit, not all. Code information The code for this assignment should be written in Python in the Jupyter Notebook environment. Please follow the structure in the template notebook provided as part of the assignment materials. Your implementation of the algorithms should predominantly utilise the same suite of libraries we have introduced in the tutorials (Keras, scikit-learn, numpy, pandas etc.). Other libraries may be utilised for minor functionality such as plotting, however please specify any dependencies at the beginning of your code submission. While most of your explanation and justification can be included in the report, please ensure your code is well formatted, and that there are sufficient comments or text included in the notebook to explain the cells. You may choose to run your code locally or on a cloud service (such as Google Colaboratory), however your final submission needs to be able to run on a local machine. Please submit your notebook with the cell output preserved and ensure that all the results presented in your report are demonstrated in your submitted notebook. Your code may also be rerun by your marker, so please ensure there are no errors in your submitted code and it can be run in order. Task In this assignment, you will implement and compare several machine learning algorithms, including a Multilayer Perceptron (MLP) and Convolutional Neural Network (CNN), on an image classification task. 
You will need to demonstrate your understanding of a full machine learning pipeline, including data exploration and pre-processing, model design, hyperparameter tuning, and interpreting results. Moreover, the assignment will require you to consolidate your knowledge from the course so far to effectively discuss the important differences between the algorithms. While better performance is desirable, it is not the main objective of this assignment. Since chasing the best possible performance may require large models and specialised GPU hardware, you should focus on thoroughly justifying your decisions and analysing your results. Please see the marking criteria at the end of the specification for how you will be assessed. There are no marks allocated for reaching a particular performance or having highly complex models. 1. Code As mentioned above, your code submission for the assignment should be an .ipynb Jupyter notebook in similar fashion to the tutorials, including well commented code and sufficient text to explain the cells. Please follow the structure provided in the template notebook available as part of the assignment materials. Data loading, pre-processing, and exploration The dataset you will use in this assignment is PathMNIST. It contains 28x28 colour images of microscope slides of normal and abnormal body tissues. More information on the dataset and the original source can be found in the associated paper here: https://www.nature.com/articles/s41597-022-01721-8 . The dataset is licensed under CC BY 4.0 (attribution is provided below). Note that we have provided a subset of this dataset for the assignment, with different splits to the original. Please refer to the provided dataset on Canvas rather than downloading from the MedMNIST site. Code to load these files is provided in the template notebook. The images in this dataset have relatively low dimensionality to aid keeping your runtimes short. 
You can increase/decrease the dimensionality of the data or make use of a subset of the training data as required, with justification in the report. To better understand the properties of the data and the preprocessing that may be appropriate, you should begin with some exploration of the data. You may like to explore what the different classes are, the number of examples from each class, and consider characteristics of the images such as whether they are centred, size of different features in the images, pixel intensities across different images, key differences between classes etc. Explore if there are factors that might make the task more difficult, such as classes with similar features. You should include anything you feel is relevant in this section. Based on your insight from the data exploration and/or with reference to other sources, you should apply appropriate preprocessing techniques. Your choice of preprocessing needs to be justified in the report. You may apply different preprocessing techniques for the different algorithms, with justification in the report. Consider if you need to make any additional splits of the data and think carefully about how each part of the data should be utilised to evaluate hyperparameter combinations and compare the performance of the different models. Algorithm design and setup You will be required to design and implement three algorithms that we have covered in the course using the sklearn and/or keras libraries, in order to investigate their strengths and weaknesses. 1. An appropriate algorithm of your choice from the first 6 weeks of the course 2. A feedforward multilayer perceptron neural network (MLP) 3. A convolutional neural network (CNN) In this section, implement an instance of each model before tuning hyperparameters, and set up any functions you may require to tune hyperparameters in the next section. 
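The scaling-and-split step described above can be sketched in plain NumPy. The function name, the [0, 1] scaling choice, and the 10% validation fraction below are illustrative assumptions, not requirements of the assignment; your own pre-processing choices still need to be justified in the report.

```python
import numpy as np

def preprocess(images, labels, val_fraction=0.1, seed=0):
    """Scale uint8 images to [0, 1] floats and split off a validation set.

    `images`: (N, H, W, C) uint8 array, `labels`: (N,) int array.
    The split is shuffled with a fixed seed so results are reproducible.
    """
    x = images.astype(np.float32) / 255.0           # pixel intensities to [0, 1]
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))                   # shuffled indices
    n_val = int(len(x) * val_fraction)
    val_idx, train_idx = idx[:n_val], idx[n_val:]
    return (x[train_idx], labels[train_idx]), (x[val_idx], labels[val_idx])
```

Depending on your models, you may prefer standardisation (zero mean, unit variance) or per-channel statistics instead; the point is that whichever transform you use, it should be fitted on the training portion only.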
Due to runtime constraints, it is not feasible to consider every possible neural network architecture when designing your models; however, you should justify your design decisions in the report, and you may wish to conduct a few rough experiments to converge on a reasonable design. Remembering that the focus of the assignment is to demonstrate your understanding of and compare the algorithms, not to achieve state-of-the-art performance, keep your models appropriately small so that you can complete the hyperparameter tuning in a reasonable time on your hardware of choice. Although you may like to reference external sources when designing your algorithms, you must implement your neural network models yourself, rather than import prebuilt models from Keras (such as those available in keras.applications). Hyperparameter tuning Perform a search over relevant hyperparameters for each algorithm using an appropriate search strategy of your choice (e.g. cross validation, validation-set grid search, Bayesian search, etc.). You may use scikit-learn and/or keras-tuner to perform your searches, and you are encouraged to look through the documentation of these packages to see the available utilities and decide on your implementation. Remember that there are many factors that can affect the runtime of your search, such as the number of epochs, optimisers, and the particular hyperparameters included in the search. Please ensure you search over the following: o Algorithm of your choice from the first 6 weeks: you will need to choose an appropriate set of hyperparameters to search over based on what was covered in the tutorials or other references. o Multilayer feedforward and convolutional neural networks: tune over at least 3 different hyperparameters. You will need to justify your search method, choices of hyperparameters to search over, and the values included in the search as part of your report (see below). 
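As one possible shape for the search described above, the following framework-agnostic sketch evaluates every combination in a grid and records both score and runtime per combination (the helper names and dictionary-based grid are assumptions for illustration; scikit-learn's GridSearchCV or keras-tuner provide richer versions of the same idea).

```python
import itertools
import time

def grid_search(build_and_eval, param_grid):
    """Evaluate every combination in `param_grid`, returning results best-first.

    `build_and_eval(params) -> validation score` is supplied by the caller;
    `param_grid` maps a hyperparameter name to the list of values to try.
    Each result records the parameters, score, and wall-clock runtime, which
    is exactly the record-keeping the assignment asks you to report.
    """
    names = list(param_grid)
    results = []
    for values in itertools.product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        start = time.perf_counter()
        score = build_and_eval(params)              # e.g. train model, return val accuracy
        results.append({"params": params,
                        "score": score,
                        "runtime_s": time.perf_counter() - start})
    return sorted(results, key=lambda r: r["score"], reverse=True)

# Toy usage: a stand-in objective whose validation score peaks at lr=0.01
best = grid_search(lambda p: -abs(p["lr"] - 0.01),
                   {"lr": [0.1, 0.01, 0.001], "units": [32, 64]})[0]
```

Keeping the search in a helper like this also makes it easy to isolate the expensive cells from the rest of the notebook, as required.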
Keep a record of the runtimes and results with each hyperparameter combination (you may need to consult documentation to see how to extract this information) and use these to produce appropriate visualisations/tables of the trends in your hyperparameter search to aid the discussion in your report. Please preserve the output of these cells in your submission and keep these hyperparameter search cells independent from the other cells of your notebook to avoid needing to rerun them, as markers will not be able to run this code (it will take too long), i.e. ensure the later cells can be run if these grid search cells are skipped. Final models After selecting the best set of hyperparameters for each model, include cells which train the models with the selected hyperparameters independently of the parameter search cell, and use these implementations to compare the performance (and other relevant properties) of the different models using the test set. 2. Report An approximate outline of the report requirements is provided below, but make sure to reference the marking criteria also for the mark distribution. You may write your report in MS Word or LaTeX, but please ensure that it is well-formatted and submit it as a pdf. Please stick to the structure (headings and subheadings) outlined below, as these align with the marking criteria. Please do not include screenshots of your code in the report. The report should not be focussed on your code and describing exactly how you have implemented things, but rather on the content outlined below. Introduction State the aim of your study and outline the importance of your study. You can consider both the importance of this dataset itself, but also the importance of comparing algorithms and their suitability for the task more generally. Data In this section, you should describe the dataset and pre-processing. Data description and exploration. 
Describe the data, including all its important characteristics, such as the number of samples, classes, dimensions, and the original source of the images. Discuss your data exploration, including characteristics/difficulties as described in the relevant section above and anything you consider relevant. Where relevant, you may wish to include some sample images to aid this discussion. For information about the original dataset, such as the class names, the MedMNIST site (https://medmnist.com) and papers included in the dataset attribution below may be useful. Pre-processing. Justify your choice of pre-processing either through your insights from the data exploration or with reference to other sources. Explain how the preprocessing techniques work, their effect/purpose, and any choices in their application. If you have not performed pre-processing or have intentionally omitted possible preprocessing techniques after consideration, justify these decisions. Methods In this section, you should explain the classification methods you used. Theory. For each algorithm, explain the main theoretical ideas (this will be useful as a framework for comparing them in the rest of the report). Explain why you chose your particular algorithm from the first six weeks. Strengths and weaknesses. Describe the relative strengths and weaknesses of the algorithms from a theory perspective. Consider factors such as performance, overfitting, runtime, number of params and interpretability. Explain the reasons; e.g. don’t simply state that CNNs perform better on images, but explain why this is the case. Architecture and hyperparameters – State and explain the chosen architectures or other relevant design choices you made in your implementation (e.g. this is the place to discuss your particular neural network configurations). Describe the hyperparameters you will tune over, the values included in the search, and outline your search method. 
Briefly explain what each hyperparameter controls and the expected effect on the algorithm. For example, consider the effects of changing the learning rate, or changing the stride of a convolutional layer. Justify why you have made these choices about the search method, and hyperparameters included. Results and discussion In this section, you should present and discuss your results. Please do not include screenshots of raw code outputs when presenting your results. Instead tabulate/plot any results in a manner more appropriate for presentation in the report. Begin with the hyperparameter tuning results. Include appropriate tables or graphs to illustrate the trends (performance, runtime etc.) across different hyperparameter values. Discuss the trends and provide possible explanations for their observation. Consider if the results aligned with your predictions. Next, present a comparison of the models you have implemented for this task (with the best hyperparameters found). Include a table showing the best hyperparameter combination for each algorithm, the performance on the test set (e.g. accuracy and other performance measures), and the runtime(s). Analyse and discuss the results, referring back to the theoretical properties and strengths/weaknesses of the classifiers discussed in the Methods section. Consider if the results aligned with your expectations. What factors influenced the runtime (time per epoch, total number of epochs required etc.)? Include anything you consider interesting and/or relevant. For example, you may like to comment on the types of mistakes particular models made via confusion matrices etc. Conclusion Summarise your main findings, mention any limitations, and suggest future work. When making your conclusions, consider not only accuracy, but also factors such as runtime and interpretability. Ensure your future work suggestions are concrete (e.g. not in the spirit of “try more algorithms”) and justify why they would be appropriate. 
Reflection Write one or two paragraphs describing the most important thing that you have learned while completing the assignment. Every student should write their own reflection. References Include references to any sources you have utilised in completing the code and/or report. You may choose any appropriate academic referencing style, such as IEEE.
ELEC3213/ELEC6222– Coursework 2024/25 Discussed on: 07 February 2025 Due Date: 16 May 2025, 18:00 Submission Format: Electronic Only (including zip file for any additional data you wish to submit) on Blackboard (I will create a submission link closer to the due date). This assignment comprises 50% of the assessment for ELEC3213/ELEC6222. You should note that you may need to acquire new knowledge (in addition to the material presented in the lectures) to complete the work successfully; many of the lectures include recommendations for further reading. As a guide, you could expect to spend around 40 hours on the completion of this assignment, not inclusive of additional reading recommended in the lectures. Objectives: The objectives of this assignment are as follows: 1. To determine an appropriate network design for a given scenario, informed by industrial and regulatory requirements. 2. To complete the sizing of components so that the network is able to operate within limits, including under outage cases. 3. To design a protection scheme, which ensures that faults within the network can be dealt with while keeping customer interruptions to a sensible minimum. 4. To evaluate your own design, and identify lessons learned which you could apply to future similar work. Report Requirements: You should submit a report that meets the following requirements: • Addresses all of the Outputs listed in the technical description which follows • Is no longer than 12 pages, inclusive of the reference list. The minimum font size is 11 (Calibri), with standard margins (2.54cm). • Where diagrams are included, they are properly referenced in the main text and are clearly readable at 100% (printed) size. • Contains a short appendix listing any additional electronic files, drawings or other resources which you have submitted electronically (all supporting files should be submitted as a .zip file as per the instructions on the blackboard for this assignment). 
• The report is an individual assessment; you should clearly acknowledge all sources of help. You are welcome to discuss your ideas with others in the cohort, but you must ensure that the work submitted is your own. Software Selection: There are a number of software choices available to allow you to conduct load flow and fault analysis, including one (PowerWorld) which is freely available to install on your personal computer. However, if you have experience using another tool, you can choose that instead (please discuss with the module leader first). Part 1 – Network Design The first part of your assignment involves the review of network design options. The conceptual overview of the system that you are considering is shown in Figure 1. You should note that this diagram is not intended to represent the full electrical layout, but instead gives options for the connectivity that would be available. Circuit breakers are not shown and should be inserted as appropriate. You do not need to use all lines; you should seek to provide evidence for the routes chosen. Figure 1 Overview Diagram of Network Important Points to Consider: 1) What is the required level of redundancy for this type of network? 2) How will the network perform under outage cases? How might this influence the selection of the layout? 3) Where do you need to locate switching points in order to facilitate the isolation of specific components (this links to point number 1 above)? 4) The indicative diagram represents each substation as a single node; this does not mean that you are restricted to using a single-bus system. Also note that in PowerWorld you don't need to represent each bus in a detailed manner; a single-bus representation is sufficient for each bus. In your report, however, you are expected to describe the type/configuration of each bus. 5) The more circuits you add, the higher the redundancy, but the more complex the protection becomes. 
6) You should consult the spreadsheet ELEC3213_ELEC6222_Student_details.xlsx, hosted on Blackboard, to verify the line lengths applicable to your specific case. This document will also detail the system loadings that you are designing to (note that these differ between students). Outputs: • A short discussion of the possible design options, which describes clearly the advantages and disadvantages of the options available to you. • A clear statement of your recommended design, including a system diagram. This should show all buses, transformers, switching points, feeder circuits and loads. You do not need to consider the details of anything below B5, B7, B9, B11. • You should provide clear reasons for your choices. Part 2 – Component Sizing Having determined the appropriate layout for the network, you should select appropriately sized lines and transformers to meet your requirements. Consideration should be given to power flows under normal conditions, and also to the worst-case contingency scenarios that you expect the network to endure. To assist you with selecting appropriate components, documents containing suitable asset data are provided on the course resources page; note that you have also covered many of the calculation methods needed to calculate parameters yourself. You should also consult the spreadsheet ELEC3213_ELEC6222_Student_details.xlsx, hosted on the course resources page, to verify the line lengths applicable to your specific case. Outputs: • Discussion of the approach taken to determine the size of the components. • Power flow results to show that your network meets requirements under both normal and contingency scenarios. Part 3 – Protection Scheme Upon completion of Parts 1 and 2, you should now have a design for your network which has been checked using your power flow model. The next step in the design is to determine an appropriate protection scheme. 
In doing this, you should consider the following factors: 1) How will your protection system be influenced by the redundancy requirements of your network? 2) How many switching operations will you need to undertake to clear a fault? 3) What are the appropriate protective devices to use in the different parts of the network? 4) Which devices should be coordinated together? 5) Will the coordination be affected by an outage on the system? 6) How will you ensure that system components can be safely disconnected and isolated for maintenance? Outputs: • A clear protection strategy for your network, describing the behaviour you would expect to see for certain types of faults. • The type of protective device used at each location should be identified. You do not have to identify specific manufacturers, or commercially available devices, but you should indicate the required ratings. • Three specific examples of how the protection will respond to a fault. Part 4 – Evaluation & Lessons Learned As you progress through this coursework, you will find that you learn from the experience of trying to achieve certain objectives; perhaps you will decide to take one option, but then subsequently discover that another choice was better. In the final part of the coursework, you are asked to evaluate your design. Outputs: • Discussion of challenges faced in developing the design, and lessons learned as a result. • Evaluation of the design that you have produced, including recommendations for further work that could be done to improve on it if additional information was available to you.
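A quick numerical sanity check of the kind that supports the component sizing in Part 2 (and rating choices in Part 3) can be scripted alongside the power flow tool. This sketch is purely illustrative (the function names, the optional derating margin, and the example figures are assumptions, not part of the brief): it converts a balanced three-phase loading to line current so it can be compared against a conductor or transformer rating.

```python
import math

def line_current_a(p_mw, q_mvar, v_kv):
    """Current (A) for a balanced three-phase loading of P + jQ.

    Uses S = sqrt(P^2 + Q^2) and I = S / (sqrt(3) * V_LL).
    `v_kv` is the line-to-line voltage in kV.
    """
    s_mva = math.hypot(p_mw, q_mvar)                    # apparent power in MVA
    return s_mva * 1e6 / (math.sqrt(3) * v_kv * 1e3)    # amps

def check_rating(p_mw, q_mvar, v_kv, rating_a, margin=1.0):
    """True if the component rating covers the loading (optionally derated)."""
    return line_current_a(p_mw, q_mvar, v_kv) <= rating_a * margin
```

Running such a check for both the intact network and each outage case gives a fast cross-check that the flows reported by the power flow model are within the ratings you have selected.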
INFS1200/7900 Tutorial 3.1: SQL Introduction Introduction Purpose: The purpose of this tutorial is to introduce you to the basics of SQL Data Definition Language (DDL) and Data Manipulation Language (DML). You will see how even basic SQL queries can be used to complete common tasks for a database administrator. Learning Outcomes: By the end of this tutorial you will be able to: • Implement relational schemas using SQL • Evaluate and justify design choices for schema implementation • Write SQL queries to add, modify and delete data in a database • Understand the basic syntax of SQL Section A: Data Definition Language (DDL) The following is a relational schema for a blogging database. Based on this schema, write SQL DDL queries to complete the tasks in this section. Note: Where attribute data types are not directly given, use common sense to choose appropriately. Blog [blogSite, owner, dateCreated] Article [blogSite, articleTitle, articleType, lengthInWords] Foreign Keys: Article.blogSite references Blog.blogSite A.1 Write an SQL query to create the BLOG table. A.2 Write an SQL query to create the ARTICLE table, including its foreign key. A.3 Write an SQL query to add a new column to the ARTICLE table called “authorName”. Note: For the purposes of anonymity, not every article will have a recorded author. A.4 Write an SQL query to add a new constraint to the BLOG table called “UniqueOwner”, which requires that each blog site have a unique owner. Section B: Data Manipulation Language (DML) Using the following revised schema for the blogging database, write SQL DML queries to complete the tasks in this section. Blog [blogSite, owner, dateCreated] Article [blogSite, articleTitle, articleType, lengthInWords, authorName] Foreign Keys: Article.blogSite references Blog.blogSite B.1 Create a new entry in the BLOG table for “Jemma Jones”, the owner of the site www.fundatabaseblogs.net, who created the site on 13/02/2020. 
B.2 Create a new entry in the ARTICLE table for Jemma’s blog. The title of the article was “Top 10 Reasons to take INFS1200!”, the article category was “University”, the article length was 2048 words and the author was “Hassan Khosravi”. B.3 The blog site “freewriting.info” has decided to gift each author from their site with their own blog. The author will be the owner, and the address of the site will follow the format “AuthorName.freewriting.info”. Create these new entries in the BLOG table. Hint: Use the SQL CONCAT function to join two words. B.4 Increase the word length by 100 for each article written by “Jack Garcia”. B.5 After much negotiation, the blog sites “www.infs1200forever.blog” and “www.infs1200always.blog” have merged into a new website called “www.infs1200.blog”. Assuming an entry for this new website exists in the BLOG table, update all the articles for these two old sites to the new blog site. B.6 Remove all articles which have a word count over 10,000. B.7 ASIO has requested that all articles be removed which contain the following spy code words and code phrases in the title: • Foxtrot • Calculus • Cookie Monster • Disco B.8 List all blog sites that owner ‘sqlUser’ created between 1/1/2017 and 1/2/2017. B.9 Find all blog sites with an owner whose name starts with ‘A’. B.10 List all the articles and their associated blog site in descending order of word length.
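A sketch of how answers to A.1, A.2 and B.1 might look, executed here against an in-memory SQLite database so the statements can actually run. The column types and key choices are one reasonable design, not the only acceptable one, and the DBMS used in the course may differ in syntax details (SQLite, for instance, needs a PRAGMA to enforce foreign keys).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite-specific: enable FK enforcement

# A.1: the BLOG table; blogSite is the natural primary key
conn.execute("""
CREATE TABLE Blog (
    blogSite    VARCHAR(255) PRIMARY KEY,
    owner       VARCHAR(100) NOT NULL,
    dateCreated DATE
)""")

# A.2: the ARTICLE table with its composite key and foreign key
conn.execute("""
CREATE TABLE Article (
    blogSite      VARCHAR(255),
    articleTitle  VARCHAR(255),
    articleType   VARCHAR(50),
    lengthInWords INTEGER,
    PRIMARY KEY (blogSite, articleTitle),
    FOREIGN KEY (blogSite) REFERENCES Blog(blogSite)
)""")

# B.1: Jemma Jones's blog (13/02/2020 written as an ISO date)
conn.execute("INSERT INTO Blog VALUES "
             "('www.fundatabaseblogs.net', 'Jemma Jones', '2020-02-13')")
```

A.3 and A.4 would follow the same pattern with ALTER TABLE ... ADD COLUMN and ALTER TABLE ... ADD CONSTRAINT respectively (note that SQLite's ALTER TABLE support is narrower than standard SQL).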
LUBS1530 Business Analytics 1 (Semester 2, 2024/2025) Assessed Coursework (100%) Assignment Dataset: The Boston Housing Dataset comprises town-level socio-economic and housing data for the 506 towns of Greater Boston, including data on pollution levels. Variable definitions are included in the Excel workbook containing the dataset. Objective To understand the determinants of the median value of housing in the 506 towns comprising Greater Boston. Tasks (All tasks must be undertaken using Excel. You will then use the results generated by your data analysis to complete the Answer Sheet included in this Assessment Brief. Only the completed Answer Sheet, including the required screenshots, is to be submitted.) 1. Calculate descriptive statistics for CRIME, ROOMS, AGE, TAX, PTRATIO and VALUE. 2. Generate histograms for CRIME, ROOMS, AGE, TAX, PTRATIO and VALUE. 3. Undertake skewness/outlier analysis and normality tests for CRIME, ROOMS, AGE, TAX, PTRATIO and VALUE. 4. Generate the correlation matrix for CRIME, ROOMS, AGE, TAX, PTRATIO and VALUE. 5. Create a scatterplot for VALUE vs ROOMS; include a linear trendline with the equation of the line and R². 6. Estimate a general regression model of VALUE using all the variables in the dataset. 7. Develop a specific regression model of VALUE that eliminates irrelevant variables and maximises adjusted R². 8. Undertake residual analysis for the estimated specific regression, including residual plots and auxiliary regression analysis. 9. Complete the answer sheet and submit on Minerva (via Turnitin). This assignment is categorised RED for use of GenAI: you must not use any content generated by Artificial Intelligence (GenAI) in any part of this work. https://generative-ai.leeds.ac.uk/ai-and-assessments/acknowledging-use-of-ai/ Assignments should be a maximum of 2000 words in length. All coursework assignments that contribute to the assessment of a module are subject to a word limit, as specified in the assessment brief. 
The word limit is an extremely important aspect of good academic practice, and must be adhered to. Unless stated otherwise in the relevant module handbook (if one has been provided), the word count includes EVERYTHING (i.e. all text in the main body of the assignment including summaries, subtitles, contents pages, tables, supportive material whether in footnotes or in-text references) except the main title, reference list and/or bibliography and any appendices. It is not acceptable to present matters of substance, which should be included in the main body of the text, in the appendices (“appendix abuse”). It is not acceptable to attempt to hide words in graphs and diagrams; only text which is strictly necessary should be included in graphs and diagrams. You are required to adhere to the word limit specified and state an accurate word count on the cover page of your assignment brief. Your declared word count must be accurate, and should not mislead. Making a fraudulent statement concerning the work submitted for assessment could be considered academic malpractice and investigated as such. If the amount of work submitted is higher than that specified by the word limit or that declared on your word count, this may be reflected in the mark awarded and noted through individual feedback given to you. The deadline date for this assignment is 12:00:00 noon on Wednesday 14th May 2025. An electronic copy of the assignment must be submitted to the Assignment Submission area within the module resource on the Blackboard MINERVA website no later than 12:00:00 noon prompt on the deadline date. Faxed, emailed or hard copies of the assignment will not be accepted. 
Failure to meet this initial deadline will result in a reduction of marks, details of which can be found at the following link: https://students.business.leeds.ac.uk/assessment/code-of-practice-on-assessment/ SUBMISSION Please ensure that you leave sufficient time to complete the online submission process, as upload times can vary. Accessing the submission link before the deadline does NOT constitute completion of submission. You MUST click the ‘CONFIRM’ button before 12:00:00 noon for your assignment to be classed as submitted on time; if not, you will need to submit to the Late Area and your assignment will be marked as late. It is your responsibility to ensure you upload the correct file to MINERVA, and that it has uploaded successfully. It is important that any file submitted follows the conventions stated below: MITIGATING CIRCUMSTANCES If you are affected by circumstances that will have a short-term impact on your ability to complete coursework assessments (for example a minor illness), you can make an application for an extension to a coursework deadline. Please note, all extension requests must be made prior to the original assessment deadline. To read more about this process, please click here - https://students.business.leeds.ac.uk/student-support/mitigating-circumstances-extensions-and-additional-consideration/ FILE NAME The name of the file that you upload must be your student ID only. ASSIGNMENT TITLE During the submission process the system will ask you to enter the title of your submission. This should also be your student ID only. FRONT COVER The first page of your assignment should always be the Assessed Coursework Coversheet (individual), which is available to download from the following location: https://students.business.leeds.ac.uk/forms-guidance-and-coversheets/ STUDENT NAME You should NOT include your name anywhere on your assignment.
Module 3 – Apache Hadoop MapReduce Assignment – Working with AWS S3 and Hadoop (32 total points) 1 Purpose This assignment is designed to provide you with experience using three related big data technologies. We will first explore the AWS Simple Storage Service, a popular implementation of cloud object storage. Then we will configure and start a Hadoop (AWS EMR) cluster to gain some familiarity with two of the core services of this environment. The first Hadoop service we will investigate is the Hadoop Distributed File System (HDFS), a file-based alternative to cloud object storage. Then we will learn to apply the Hadoop MapReduce parallel execution engine and write some programs to process data we will copy to and from our big data stores. 2 Cautions This assignment assumes you are capable of coding in the Python programming language. There are some links to Python tutorials in the reading for Module 1 – Big Data Concepts, and numerous additional tutorials can be found on YouTube and other sites on the web. Further, this assignment assumes you have successfully completed all the steps described in Module 1 – Assignment for setting up an AWS account for use during this course. 3 Assignment Submission This assignment will be graded and is worth a maximum of 32 points in total. Your solutions to the following exercises must be contained in a single MS Word, PDF, or Google Docs format file and will include screenshots, code, or other results as described below. Upload your file to our Coursera site. Your solutions must be readable (using a reasonably sized font, even for screenshots), explained as needed, and clearly indicate the exercise with which they are associated. Also, please include in your submission our course name and number, your name, and other information, such as a student identifier, to ensure you receive proper credit for your work. 
4 Exercise #1 (2 points) 4.1 Background Amazon Simple Storage Service (Amazon S3) is storage for the Internet. You can use Amazon S3 to store and retrieve any amount of data at any time, from anywhere on the web. Amazon S3 stores data as objects within buckets. An object is data (a sequence of bytes) and optional metadata that describes the data. To store data in Amazon S3, you could upload a Linux, MacOS, or Windows file to a bucket, where it will be saved as an object. Buckets are containers for objects. You can have one or more buckets in your AWS account. You can control access for each bucket, deciding who can create, delete, and list objects in it. You can also choose the geographical Region where Amazon S3 will store the bucket and its contents, and view access logs for the bucket and its objects. This exercise will step you through the process of creating and then working with some S3 buckets. One bucket will be used only during this exercise, after which you will empty and delete it. A second bucket will remain in existence throughout the rest of this course and be used to hold files related to this and upcoming assignments. You pay for storing objects in S3 buckets, not for the mere existence of the buckets themselves, and object storage costs only $0.023 per GB per month. 4.2 Bucket Naming Conventions Since each bucket name must be unique across all names assigned to buckets in the AWS cloud, our suggestion is that you apply a unique prefix to each bucket name. For example, a prefix could be the first three letters of your first name, followed by the first three letters of your last or family name, followed by your four-digit birth year, or something like this. Of course, some people might have very short names, or only a single name, so choose a prefix that makes sense to you. 
We will refer to buckets by a generic name such as "userprefixwork." But when you create the bucket, we expect that you will substitute a unique prefix for the string "userprefix". For example, if you are asked to create a bucket named "userprefixwork" you would instead create a bucket named something like "josros1954work" or similar. Some of the AWS services we will use impose further constraints on bucket names, as listed below, so take this into account when you design your unique prefix:
· Names can consist of lowercase letters, numbers, periods (.), and hyphens (-).
· Names cannot end in numbers.

4.3 Creating a Temporary S3 Bucket
Here you will create a bucket which will be used only for the duration of this exercise. The generic name of this bucket will be "userprefixtemp".
1. Sign in to the AWS Management Console (if you have previously signed out).
2. Enter "S3" into the search box at the top of the page and then select S3 from the Services list.
3. At this point you might get a "marketing page" about S3, with the S3 page menu panel minimized and accessible by clicking on the three horizontal bars towards the upper left of the screen. If so, click on those bars to expose the S3 menu panel.
4. With the S3 menu panel displayed, select "Buckets".
5. At this point the right-hand Buckets panel should appear. This panel contains a section, General purpose buckets, which lists all the buckets in your account created directly by you or by AWS services like EMR on your behalf.
6. Going forward, to return to the Buckets panel, get to an S3 service page, for example by following steps 2 and 3 above, or, if the Amazon S3 menu panel is visible, just select "Buckets".
7. Choose the "Create bucket" button towards the top right of the General purpose buckets section of this panel to open the Create bucket panel.
8. In the General configuration section of this panel, under Bucket type, accept the default General purpose type.
9.
Now under Bucket name enter "userprefixtemp" as the name for your bucket. Recall that when you actually create the bucket you are going to apply your own prefix to the name "temp" to ensure the bucket name is unique. So, you will enter something like "josros1954temp".
10. Scroll down to the next section, Object Ownership, and accept the default ACL Disabled (recommended).
11. In the next section, Block Public Access settings for this bucket, accept the default Block all public access.
12. Keep scrolling down until you see the Create bucket button. Choose Create bucket. You've created a bucket in Amazon S3.
13. Upon bucket creation, you will be taken to the Buckets panel and see a list including the name of the bucket you created.

4.4 Working with a Temporary S3 Bucket

4.4.1 Creating an Object
Now that you've created a bucket, you're ready to upload a file from your PC or Mac (or any other computer) and create an object. An object can hold any kind of file: a text file, a photo, a video, and so on.
1. In the General purpose buckets section of the Buckets panel, click on the name of the bucket to which you want to upload your file.
2. The right-hand panel should now display information about the selected bucket. You can always reach this information panel by clicking on the name of a bucket on the Buckets panel.
3. On the right-hand side of the Objects section of this panel, click on the Upload button.
4. On the following Upload panel, in the Files and folders section, click on the Add files button. A file dialog box should appear.
5. Choose a file to upload from your PC or Mac, and then select Open. Just use any file you have handy (even this one).
6. Scroll down to the bottom of the Upload panel and click on the Upload button.
7. You should now see the Upload: status panel.
8. Click on the Close button on the right side of the panel. The file you uploaded should now exist as an object in the bucket you created.
4.4.2 Verifying an Object Was Created (for Assignment Credit)
To receive credit for this question, include a screenshot in your submission document, labeling it "Exercise #1," showing the bucket information panel listing some named object in the bucket you created. Note, this is the panel that appears after you choose Close from the Upload: status panel. Your results should appear similar to this:
You can also return to this panel whenever you like by doing the following:
1. Enter "S3" into the search box at the top of the page and then select S3 from the Services list.
2. From the menu panel on the left side of the page, select the entry Buckets.
3. This results in the display of a panel on the right side of the screen, which lists the buckets in your account. Click on the name of the bucket you just created, and the panel for which you need to take a screenshot should appear.
Not now, but for future reference, to delete any individual object from a bucket:
1. In the General purpose buckets section of the Buckets panel, click on the name of the bucket from which you want to delete a file.
2. The right-hand panel should now display information about the selected bucket. You can always reach this information panel by clicking on the name of a bucket on the Buckets panel.
3. In the Objects panel, choose the object that you want to delete (by clicking on the little box to the left of the object name), and then choose Delete.
4. To confirm that you want to delete the object, in the Delete objects panel, enter the words "permanently delete."
5. Choose the Delete objects button.

4.4.3 Emptying and Deleting the Temporary Bucket
We strongly recommend that you delete your "userprefixtemp" bucket so that charges do not accrue. Before you delete your bucket, you must empty it (delete the objects in the bucket). After you delete your objects and bucket, they are no longer available.
1.
In the Buckets panel, choose the bucket that you want to empty (by clicking on the little circle to the left of the bucket name), and then choose Empty.
2. To confirm that you want to empty the bucket and delete all the objects in it, in Empty bucket, enter the words "permanently delete."
3. Now click on the "Empty" button. Important: emptying the bucket cannot be undone, and objects added to the bucket while the empty bucket action is in progress will be deleted.
4. From the left-hand panel menu, select "Buckets" and the list of buckets should appear.
5. To delete a bucket, in the Buckets list, select the bucket (by clicking on the little circle to the left of the bucket name).
6. Choose Delete.
7. To confirm deletion, in Delete bucket, enter the name of the bucket.
8. Now select the "Delete bucket" button.

4.5 Creating a Permanent Bucket
Even though we will create clusters that support local Linux storage as well as the Hadoop Distributed File System, you will keep assignment files in a second bucket. The reason is that when a cluster terminates, all its resources are decommissioned, and anything stored within the cluster will be lost. And you will be terminating your clusters between assignments or when you pause working on a specific assignment. The generic name of the bucket we will use going forward is "userprefixwork". Recall, when you create the bucket, apply your own prefix to the name "work". To create this bucket, just follow the steps outlined in section 4.3 of this document. The existence of an empty bucket costs nothing.

5 Exercise #2 (10 points)

5.1 Background
In this exercise we will create a Hadoop cluster in the Amazon cloud (AWS) and explore the use of the Hadoop Distributed File System that it provides.

5.2 Technical References
The operating system supported on the EMR cluster primary node, which you will connect to using a terminal via SSH, and with which you will interact, is Linux.
If you are unfamiliar with how to work with Linux, do not worry; you can get by with knowing just a few general details and a handful of console commands. And you will never be examined on any aspect of the Linux environment. If you need some background, here are a few references, and more can be found on the web:
Linux Fundamentals: A Training Manual
https://ww3.ticaret.edu.tr/aboyaci/files/2016/09/a_unix_primer.pdf
Linux Command Line Cheat Sheets
https://www.stationx.net/linux-command-line-cheat-sheet/
https://www.guru99.com/linux-commands-cheat-sheet.html

5.3 Creating an EMR (Hadoop) Cluster
Amazon EMR (previously called Amazon Elastic MapReduce) is a managed cluster platform that simplifies running big data frameworks, such as Apache Hadoop and Apache Spark, on AWS to process and analyze vast amounts of data. To create an EMR cluster, follow the companion document to this one on our Coursera site, as listed below. As you follow the instructions in that document to set up a Hadoop (EMR) cluster, make sure to choose the application bundle "Core Hadoop". This request about choosing an application bundle will become clear as you follow the setup instructions.
"Module 3 - Getting Started with Amazon EMR"
Once you have launched a cluster and connected to the EMR primary node via your terminal software, you can proceed to the next part of this exercise.

5.4 Additional Setup

5.4.1 Copy Files from Coursera
Download the following files from our Coursera site to your personal computer:
· The "Data" zip file
· The "Programs" zip file
Now "unzip" (extract) these two files on your personal computer. Together they should expand to the following files:
· From the "Data" zip file: w.data, x.data, z.data, Salaries.tsv
· From the "Programs" zip file: WordCount.py, Salaries.py

5.4.2 Move the Downloaded Files to an AWS S3 Bucket
Upload the unzipped (extracted) files you copied to your PC or Mac to the "userprefixwork" bucket you created previously.
5.4.3 Move Objects (Files) Between an AWS Bucket and Your EMR Primary Node
Moving files between S3 buckets and the Linux (local) file system on your EMR primary node requires that you use certain Amazon AWS-specific commands. These commands are simple and work as follows.
Assume you have an S3 bucket named "userprefixwork" holding an object named "myid.txt". To copy the object to the Linux directory "/home/hadoop" on the EMR primary node, just enter the following on the terminal you have connected (via SSH) to that primary node:
aws s3 cp s3://userprefixwork/myid.txt /home/hadoop/myid.txt
Now assume you again have an S3 bucket named "userprefixwork", but this time you have a file called "myname.txt" on an EMR primary node in the Linux directory "/home/hadoop". To copy the file from the Linux directory "/home/hadoop" on the EMR primary node to the bucket "userprefixwork", just enter the following on the terminal you have connected (via SSH) to that primary node:
aws s3 cp /home/hadoop/myname.txt s3://userprefixwork/myname.txt
Note, this is the way you can copy any files between your "userprefixwork" bucket and the Linux (local) file system of the primary node. Now, using a terminal connected via SSH to the EMR primary node, copy the following from your "userprefixwork" bucket to the Linux (local) file system of the EMR primary node, into the directory "/home/hadoop":
· w.data
· x.data
· z.data
· Salaries.tsv
· WordCount.py
· Salaries.py
You might wonder "how can I exchange files or objects between S3 buckets and HDFS?" Recall that you can do so directly using arguments to the "hadoop fs" command from the terminal connected via SSH to the EMR primary node.

5.5 Working with HDFS
All the following interactions with HDFS should occur using a terminal connected via SSH to the EMR primary node.
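If you need to copy many files, the "aws s3 cp" pattern shown above can be wrapped in a short Python script. This is a sketch only, assuming the AWS CLI is installed and configured (as it is on an EMR primary node); the helper names are hypothetical, not part of the assignment files.

```python
import subprocess

def s3_cp_command(src: str, dst: str) -> list[str]:
    # Build the argument list for an S3 copy; either src or dst may be
    # an s3:// URI, mirroring the two directions shown in the document.
    return ["aws", "s3", "cp", src, dst]

def s3_cp(src: str, dst: str) -> None:
    # Run the copy; raises CalledProcessError if the CLI reports failure.
    subprocess.run(s3_cp_command(src, dst), check=True)

# The two directions from the document (commented out so the sketch
# does not require live AWS credentials to read):
# s3_cp("s3://userprefixwork/myid.txt", "/home/hadoop/myid.txt")
# s3_cp("/home/hadoop/myname.txt", "s3://userprefixwork/myname.txt")
```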
For convenient reference, here is the HDFS command reference:
https://apache.github.io/hadoop/hadoop-project-dist/hadoop-common/FileSystemShell.html
To prevent confusion: the default directory of your Linux account on the Hadoop EMR primary node is "/home/hadoop". But when we want to copy something to HDFS, we will sometimes copy it to an HDFS directory beginning with "/user/hadoop." Be aware, the Linux and HDFS file system path names have nothing to do with one another. Any similarity in naming (such as the use of the directory name "hadoop") is just coincidental.

5.5.1 Exercise 2a (1 point)
Execute some HDFS command (you need to figure out which one) to list the files and directories under the HDFS directory listed below:
/user
Write down the command you executed and also take a screen snapshot of the names of the files or directories that are listed, and include it in your assignment submission. Label this "Exercise 2a."

5.5.2 Exercise 2b (1 point)
Execute a command to create the following HDFS directory:
/user/hadoop/<userprefixtemp>
where <userprefixtemp> is the name you previously assigned to your "userprefixtemp" bucket, which should be something like josros1954temp. So, in this case you would create an HDFS directory called /user/hadoop/josros1954temp
Write down the command you executed and include it in your assignment submission. Label this "Exercise 2b."

5.5.3 Exercise 2c (1 point)
Execute a command to create the following HDFS directory:
/user/hadoop/<userprefixtemp>V2
where <userprefixtemp> is again the name you assigned to your "userprefixtemp" bucket, which should be something like josros1954temp. So, in this case you would create an HDFS directory called /user/hadoop/josros1954tempV2
Record the command you executed and include it in your assignment submission.
Label this "Exercise 2c."

5.5.4 Exercise 2d (1 point)
Execute a command that copies a given local file (that is, a file in the primary node's Linux file system) to the given HDFS directory:
· Source local file: /home/hadoop/x.data
· Destination HDFS directory: /user/hadoop/<userprefixtemp>
where <userprefixtemp> is the name you assigned to your "userprefixtemp" bucket, which should be something like josros1954temp.
Now execute the following command:
hadoop fs -ls /user/hadoop/<userprefixtemp>
Record the command you executed to copy the local file into HDFS, and also take a screen snapshot of the files or directories listed when you executed the above "hadoop fs -ls …" command, and include these in your assignment submission. Label this "Exercise 2d."

5.5.5 Exercise 2e (3 points)
Note, Amazon EMR and Hadoop provide a variety of file systems that you can use with EMR. You specify which file system to use with a file system prefix. For example, s3://myawsbucket references an Amazon S3 bucket using EMRFS (the EMR file system). See:
https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-plan-file-systems.html
Execute a command that copies a given S3 object to the given HDFS directory. Hint: neither the source nor the destination of these files/objects is in the local file system, so the "-get" and "-put" commands will not work.
· Source S3 object: s3://<userprefixwork>/z.data
· Destination HDFS directory: /user/hadoop/<userprefixtemp>
where <userprefixtemp> is the name you assigned to your "userprefixtemp" bucket, which should be something like josros1954temp.
Now execute the following command:
hadoop fs -ls /user/hadoop/<userprefixtemp>
Record the command you executed to copy the S3 object into HDFS, and also take a screen snapshot of the files or directories listed when you executed the above "hadoop fs -ls …" command, and include these in your assignment submission.
Label this "Exercise 2e."

5.5.6 Exercise 2f (2 points)
Execute a command that copies a file from one HDFS directory to another HDFS directory:
· Source HDFS file: /user/hadoop/<userprefixtemp>/x.data
· Destination HDFS directory: /user/hadoop/<userprefixtemp>V2
where <userprefixtemp> is the name you assigned to your "userprefixtemp" bucket, which should be something like josros1954temp.
Now execute the following command:
hadoop fs -ls /user/hadoop/<userprefixtemp>V2
Record the command you executed to copy the file from one HDFS directory to another, and also take a screen snapshot of the files or directories listed when you executed the above "hadoop fs -ls …" command, and include these in your assignment submission. Label this "Exercise 2f."

5.5.7 Exercise 2g (1 point)
Execute a command that removes a file from an HDFS directory:
· HDFS file to remove: /user/hadoop/<userprefixtemp>/x.data
Now execute the following command:
hadoop fs -ls /user/hadoop/<userprefixtemp>
Record the command you executed to remove the file, and also take a screen snapshot of the files or directories listed when you executed the above "hadoop fs -ls …" command, and include these in your assignment submission. Label this "Exercise 2g."

5.6 Completing the Exercise
At this point you have two choices:
· Terminate your EMR cluster if you do not plan on immediately working on the next exercise. This will ensure you are not charged unnecessarily for use of your inactive cluster.
· Leave your EMR cluster active and continue on to the next exercise, which also requires an active EMR cluster.

6 Exercise #3 (20 points)

6.1 Background
In this exercise we will create a Hadoop cluster in the Amazon cloud (AWS) and explore the use of the MapReduce execution engine it provides. Note, this exercise assumes you have completed the previous exercise.

6.2 Creating an EMR (Hadoop) Cluster
If you have an Amazon EMR cluster active from the previous exercise, use that. Otherwise, continue as described below.
To create an EMR cluster, follow the companion document to this one on our Coursera site, as listed below. As you follow the instructions in that document to set up a Hadoop (EMR) cluster, make sure to choose the application bundle "Core Hadoop". This request about choosing an application bundle will become clear as you follow the setup instructions.
"Module 3 - Getting Started with Amazon EMR"
Once you have launched a cluster and connected to the EMR primary node via your terminal software, you can proceed to the next part of this exercise.

6.3 Additional Setup

6.3.1 Files and Directories
If you have created a new EMR cluster, then follow the setup instructions in Section 5.4 of this document. If it does not exist, create the following HDFS directory:
/user/hadoop/<userprefixtemp>
Make sure to copy the given local file (that is, a file in the primary node's Linux file system) to the given HDFS directory:
· Source local file: /home/hadoop/w.data
· Destination HDFS directory: /user/hadoop/<userprefixtemp>
If you are using the EMR cluster from Exercise #2, you may already have some of the HDFS directories described below created, and some of the files listed below copied to those directories. If so, only create any additional HDFS directories and copy any additional files mentioned.

6.3.2 MrJob
Install the mrjob library on your EMR primary node:
1. If you have not done so, ssh to the EMR primary node.
2. Enter the command listed below and follow any displayed instructions:
sudo /usr/bin/pip3 install mrjob[aws]
Please review the information on MRJob in the Module 3 Readings, Lesson 5 – MapReduce Programming. Especially reference the MRJob documentation site and read the sections Fundamentals and Writing Jobs. But not every detail is important; I provide you with the exact commands needed to execute mrjob programs below.

6.4 Check Step
Here we ensure that your EMR cluster is configured appropriately to execute MapReduce jobs using MRJob.
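As background for this check step and the exercises that follow, the map/shuffle/reduce flow that an MRJob word count performs can be sketched in plain Python, outside the mrjob harness. This is an illustration of the pattern only, not the provided WordCount.py; the function names are hypothetical.

```python
from collections import defaultdict

def mapper(line):
    # Emit a (word, 1) pair for every word in one input line.
    for word in line.split():
        yield word, 1

def shuffle(pairs):
    # Group values by key, as the MapReduce framework does between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups.items()

def reducer(key, values):
    # Sum the counts for one word.
    yield key, sum(values)

lines = ["hello there joe", "hi there"]
pairs = (p for line in lines for p in mapper(line))
result = dict(kv for k, vs in shuffle(pairs) for kv in reducer(k, vs))
print(result)  # {'hello': 1, 'there': 2, 'joe': 1, 'hi': 1}
```

In an actual MRJob program, the framework supplies the shuffle step; you write only the mapper and reducer.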
Execute the following:
python WordCount.py -r hadoop hdfs:///user/hadoop/<userprefixtemp>/w.data
Note, there must be three slashes in "hdfs:///", as "hdfs://" indicates that the file you are reading from is in HDFS and "/user" is the first part of the path to that file. Also note that sometimes copying and pasting commands from the assignment document does not work, and such commands need to be entered manually. Upon completing execution, check that our MapReduce job produces some reasonable output. If all is well, you should see information in the output somewhat similar to (but not exactly like) this when the program finishes correctly:
"well" 1
"when" 1
"will" 1
"within" 1
"writing" 2
"your" 5
Note, the above command will erase all output files in HDFS. If you want to keep the output, use the following command instead:
python WordCount.py -r hadoop hdfs:///user/hadoop/<userprefixtemp>/w.data --output-dir /user/hadoop/words
Note, there are two hyphens (dashes) preceding "output-dir".
If you do not see something like the above output, then carefully recheck your setup, possibly starting from scratch with a new EMR cluster, or, if this fails to resolve your issues, reach out via Coursera for help.

6.5 Working with MapReduce

6.5.1 Editing Python Files
Our MapReduce jobs will be coded as programs written in the Python language. There are two options for creating and editing them:
· You could create and update your files on a personal computer, using a text editor (and not a word processing program). Then you could copy these files to your "userprefixwork" bucket, and from there to your EMR primary node (/home/hadoop).
· Or, you could create or edit a Python file on the EMR primary node itself.
The editor that is available by default on your primary node is called "vim." If you are unfamiliar with its use, some tutorial material is suggested below (and more is available on the web):
Vim Beginners Guide
https://www.freecodecamp.org/news/vim-beginners-guide/
Getting Started with Vim: The Basics
https://opensource.com/article/19/3/getting-started-vim
Remember, unless you copy your files from the primary node to your userprefixwork bucket, they will be lost after you terminate your EMR cluster.

6.5.2 Exercise 3a (3 points)
Slightly modify the WordCount.py program. Call the new program WordCount2.py. Instead of counting how many times each word occurs in the input documents (w.data), modify the program to count how many words begin with the lower-case letters a-n (a through n, inclusive) and how many begin with anything else. When you execute this program, the output should look similar to (but not exactly like):
a_to_n, 12
other, 21
So, your task is to write a MrJob MapReduce program which again accepts the following file as input:
hdfs:///user/hadoop/<userprefixtemp>/w.data
and outputs just two key-value pairs: one with key "a_to_n" and an integer value of how many words begin with these lower-case letters, and another with key "other" and a value of how many words begin with some character other than lower-case a-n. Provide a listing of the program you wrote, the command you used to execute it, and a screen snapshot of the output the program generated, and include these in your assignment submission. Label this "Exercise 3a."

6.5.3 Exercise 3b (5 points)
Modify the WordCount.py program again. Call the new program WordCount3.py. Instead of counting words, calculate the count of words having the same number of letters. For example, if we have a file consisting of one record of the form
hello there joe
our job should output key-value pairs similar to (but not exactly like) the following:
3, 1
5, 2
Hint: the key in a key-value pair can be an integer just as well as a string.
So, your task is to write a MrJob MapReduce program which again accepts the following file as input:
hdfs:///user/hadoop/<userprefixtemp>/w.data
and outputs key-value pairs where each one has a key which is some number of characters, and a value which is the count of words having that many characters. Provide a listing of the program you wrote, the command you used to execute it, and a screen snapshot of the output the program generated, and include these in your assignment submission. Label this "Exercise 3b."

6.5.4 Exercise 3c (7 points)
Modify the WordCount.py program. Call the new program WordCount4.py. Now we will write a MRJob MapReduce job to calculate the count of unique per-record word bigrams. A word bigram is a two-word sequence. For example, suppose we have a file consisting of records of the form:
hello there joe
hi there
there joe there
joe
Bigrams for these records are created by sliding a two-word "window" across the words of each record. For example, each record above has the following word bigrams:
hello there joe => "hello there", "there joe"
hi there => "hi there"
there joe there => "there joe", "joe there"
joe => Note, this record has no word bigrams
Notice, in the above example, there are 2 instances of the word bigram "there joe". So, your task is to write a MrJob MapReduce program which accepts the following file as input:
hdfs:///user/hadoop/<userprefixtemp>/w.data
and outputs key-value pairs where each one has a key which is some word bigram string, and a value which is the count of the number of occurrences of that word bigram. Note, please convert all words to lower case on input, so Hello and hello become the same word. Our job should output key-value pairs similar to (but not exactly like) the following:
"hello there", 1
"hi there", 1
"joe there", 1
"there joe", 2
Provide a listing of the program you wrote, the command you used to execute it, and a screen snapshot of the output the program generated, and include these in your assignment submission.
Label this "Exercise 3c."

6.5.5 Exercise 3d (5 points)
Now do the same as the above for the files Salaries.py and Salaries.tsv. The ".tsv" file holds department and salary information for Baltimore municipal workers. Have a look at Salaries.py for the layout of the ".tsv" file and how to read it into our MapReduce program. Copy the Salaries.tsv file to the HDFS directory /user/hadoop/<userprefixtemp>. Execute the Salaries.py program to make sure it works. It should print out how many workers share each job title. To do so, execute the following:
python Salaries.py -r hadoop hdfs:///user/hadoop/<userprefixtemp>/Salaries.tsv
Now modify the Salaries.py program. Call it Salaries2.py. Instead of counting the number of workers per job title, change the program to provide the number of workers having High, Medium, or Low annual salaries. These bands are defined as follows:
High: 100,000.00 and above
Medium: 50,000.00 to 99,999.99
Low: 0.00 to 49,999.99
The output of the program Salaries2.py should be something like (but not exactly like) the following (in any order):
High 20
Medium 30
Low 10
Some important hints:
· The annual salary is a string that will need to be converted to a float.
· The mapper should output tuples with one of three keys depending on the annual salary: High, Medium, or Low.
· The value part of the tuple is not a salary. (What should it be?)
Provide a listing of the program you wrote, the command you used to execute it, and a screen snapshot of the output the program generated, and include these in your assignment submission. Label this "Exercise 3d."

7 Conclusion
Remember to:
· Copy any Python files you updated in place on your primary node back to your "userprefixwork" bucket if you wish to save them.
· Terminate your EMR cluster (as described in "Getting Started with Amazon EMR").
· Submit the assignment document with your exercise solutions.
MN7031SR Global Strategy and Innovation 2025/26

2. Module Description
Small, medium, and large businesses operate in very different ways; however, they share a common business and industry environment and will often both compete and collaborate. All sizes and types of business must understand their business and industry environment as they position themselves for success. This module focuses on the strategic skills and knowledge needed by executives, managers, intrapreneurs, and entrepreneurs, and aims to:
• develop understanding and skills in strategy development and execution
• provide the knowledge and skills to scan, analyse and interpret the business environment
• develop skills in understanding trends in the business environment and in constructing scenarios that can be used to test strategy
• develop the ability to manage resources and capabilities and to critically evaluate their importance as sources of sustainable competitive advantage, but also as potential sources of disadvantage when disruptive innovation occurs
• develop the ability to take strategic decisions based on a deep understanding of the business and the market or markets it serves
• develop the ability to foster and exploit innovation through the development of new products and services
• develop critical awareness of the importance of environmental and social responsibility, ethical decision making and the need for effective corporate governance.
A key element of the module is participation in the business simulation, in which students will compete against each other in teams within a simulated global marketplace. This will provide the opportunity to role-play executive positions in a global business, take strategic decisions and gain rapid feedback on their validity.

3. Module Learning Outcomes
On successful completion of the module students will be able to:
1. Critically evaluate business start-ups
2. Critically evaluate the external environment of a business and identify signals that require action.
3.
Critically evaluate the strategy and performance of a business and make strategic recommendations for change
4. Communicate strategic analysis and recommendations clearly and confidently.

4. Module Syllabus/Content
Topic 1: Introducing Strategy and the Strategy Process – a. Strategic Context; b. Planned and Emergent Strategy; c. Vision and mission (LOs 1, 2, 3, 4)
Topic 2: Starting and Growing a Business – a. The importance of knowing the customer; b. Lean Start-up and Business Model Canvas; c. Stages of Growth and the Need for Systemisation (LOs 1, 4)
Topic 3: Business Strategy – a. Strategic Position; b. Scale and Learning Effects; c. Strategic forecasting (LOs 1, 3, 4)
Topic 4: Environmental Analysis – a. Analysing the macro-environment – PEST and PESTLE; b. Major trends; c. Value chains and value networks (LOs 1, 2, 4)
Topic 5: Scenario Planning – a. Rationale; b. Developing Scenarios; c. Linking Scenarios to strategy (LOs 1, 2, 4)
Topic 6: Industry Analysis – a. Industry definition, boundaries and analysis; b. Industry lifecycle models; c. Strategic groups and Competitor Analysis (LOs 1, 2, 4)
Topic 7: Internal Analysis – a. Sources of competitive advantage; b. Resources and capabilities; c. Core competencies (LOs 1, 3, 4)
Topic 8: Corporate Strategy – a. Horizontal and vertical integration; b. Related and unrelated diversification; c. Mergers, Acquisitions and Strategic Alliances (LOs 3, 4)
Topic 9: Innovation and Disruption – a. Creative Destruction; b. The Innovator's Dilemma; c. Fostering innovation (LOs 1, 2, 3)
Topic 10: Technology Strategy – a. Technology adoption; b. Standards wars; c. Platform strategy; d. Crowd Sourcing; e. Incubators and technology hubs (LOs 1, 2, 3, 4)
Topic 11: Strategy in the Global Environment – a. Global strategies; b. Entry modes to new geographies; c. National competitive advantage; d. The role of governments (LOs 1, 2, 3)
Topic 12: Strategic Change – a. Radical and incremental change; b. Strategic Alignment; c. Theories of Change (LOs 3, 4)

5. Core Text
De Wit, R. & Meyer, R. (2017) Strategy: An International Perspective. 6th ed. Andover, Hampshire: Cengage Learning.
6.
Recommended Texts
Barnett, P. (2019) Sages of Strategic Management: Inside the Minds of the Great Business Thinkers and Strategists. Wiley.
Christensen, C. M. (2006) The Innovator's Dilemma: The Revolutionary Book That Will Change the Way You Do Business. New York: HarperCollins Publishers.
Grant, R. M. (2012) Contemporary Strategy Analysis: Text and Cases. Hoboken, N.J.: Wiley.
Mintzberg, H., Ahlstrand, B. and Lampel, J. (2008) Strategy Safari. 2nd ed. Prentice Hall.
Ringland, G. (2006) Scenario Planning: Managing for the Future. 2nd ed. Chichester: Wiley.
Journals
California Management Review
London Business School Review
Long Range Planning
Harvard Business Review
MIT Technology Review
Sloan Management Review
Strategic Management Journal
Useful Sources/Links
www.bain.com – Bain Management Consultants
www.theccc.org.uk/ – Committee on Climate Change
www.fhi.ox.ac.uk – Future of Humanity Institute, Oxford University
www.mckinsey.com – McKinsey Management Consultants

6.0 Submission Dates
For each assignment you will be given a deadline date and details of how to submit that piece of work. If you miss your submission opportunity without mitigating circumstances, your resit submission grade will be capped.

Handing in/Submitting Work
Submitting work is absolutely vital if you wish to continue on your course. If you know you will have valid reasons for not handing work in on time, you must apply for mitigating circumstances. Below is a quick summary of what to do if you are not able to hand work in. Do approach your Module Leader for advice as soon as possible. All work must be handed in by the assessment date set out in this module handbook. If you are unable to do this, you need to apply for mitigating circumstances. This is for situations where you will not be able to hand in work on time. If mitigating circumstances are accepted, you will still get the full mark.
If mitigating circumstances are refused, you will have to submit your coursework at the re-sit date and the mark will be capped. This means that your maximum mark will be 50%. Mitigating circumstances are for when you have compelling reasons why you cannot hand work in. This could be illness or particular family circumstances, but would not include 'pressure of work'. You can download a form from the University website; you should complete this and hand it in to your module leader with any relevant evidence. The school will set up a discussion panel to consider this application. If the Panel accepts your reasons, you will be able to submit the work and the mark is not capped: you will receive the full mark awarded for the work. If this is the case, you will be required to submit at the next opportunity, which is usually the summer reassessment date. For further information, please see the university website: Mitigating Circumstances.

Coursework Submitted

Please keep a copy of ALL coursework submitted. You should look at the feedback provided by examiners when marking is completed or after the module results have been published. This is essential if you wish to benefit from the feedback provided by tutors. However, please bear in mind that module leaders will not be available while on leave, and this is likely to be the case out of term time.

How do I get feedback on my work and when do I get my marks?

Feedback is usually provided within two weeks of the submission deadline (see above dates for assignments and feedback). Students submit their assignments online via Canvas, and feedback will also be provided online.

How is my work assessed, marked and moderated?

The assessment of your work is a key part of your learning experience, as this is the point at which you put into practice what you have learnt. The assessment of all your work has two key aspects to it.
On the one hand, it is about assessing the standard that your work has reached; on the other, it is about giving you feedback to help you improve in your future work. Where your work also contributes to your final marks and grades, it is subject to a rigorous process of evaluation which is outlined below.

How is my work marked?

The marking process makes sure that our marking of your work is fair and transparent. There is a first marker who has responsibility for giving you formal feedback and making an initial assessment of the standard of your work by giving it a provisional mark. After this there are two further layers of checking and assurance. It is worth noting that this process means that you are unable to appeal your final marks and/or grades on the grounds of academic judgement.

How is my work moderated?

Internally: After all the work for your module has been marked, it goes through a process of moderation. This means that someone entirely separate from the first marker will look through a range of students' work (a sample) from your module. This person is often referred to as the second marker. They will check the marking and the feedback to make sure that they agree with the first marker's assessments. If they do not, they will raise their points of difference with the first marker and try to reach agreement. At this stage marks and feedback may be adjusted if necessary. If they cannot agree, a third marker will be brought into the discussion. No marks will be signed off until it has been agreed that the work for your module has been marked to an appropriate standard. Where possible, work will be marked anonymously, with no student names being available to any of the markers. While not every piece of work is second marked, the size of the sample means that you can be confident that any issues have been identified and resolved.
Externally (by an external examiner from outside the University): After internal moderation has taken place, a sample of assessed work from your module will also be seen by an external examiner. This process only occurs for work submitted after Level 4, the first year of an undergraduate degree. An external examiner is a subject specialist recruited from outside the University who is able to take a completely independent view of the work from your module and to confirm that marks at London Met are consistent with those at other universities. The external examiner will look at marking and feedback for your module and reach their own judgement about its quality. If the external examiner has concerns about the general standard of marking or feedback, they are required to report this to the University and to the Chair of the Subject Standards Board that formally processes the marks. On the rare occasions this happens, the marks will not be released until all parties agree that the assessment outcomes are appropriate; if necessary, the work may be entirely remarked. It is worth noting that in some subject areas there may be additional processes of marking and moderation due to the nature of the subject and the requirements of professional regulations. Each year the external examiner is required to write a formal report on the work they have seen and to verify that the University's proper assessment processes and procedures have been followed. These reports are circulated widely within the University, including, if necessary, to the Vice-Chancellor. They are discussed by the course teaching teams and are also available for you to see and discuss at Course Committee meetings.
The University operates marking and moderation processes that are both rigorous and transparent, to assure that:
· Your work is assessed on a fair basis
· You are provided with supportive and appropriate feedback
· Academic standards in relation to your work and final awards are maintained.
Finally, if you would like more detailed information about assessment procedures, please consult the academic regulations available on the student zone section of the university website.

Assessment Description

Assessment 1 (20%) Case Study - The Virgin Group

The Virgin Group is one of the UK's largest private companies. In 2006 the group included 63 businesses as diverse as airlines, health clubs, music stores and trains, among them Virgin Galactic, which promised to take paying passengers into sub-orbital space. The personal image and personality of the founder, Richard Branson, were highly bound up with those of the company. Branson's taste for publicity has led him to stunts ranging from appearing as a cockney street trader in the US comedy Friends to attempting a non-stop balloon flight around the world. This has certainly contributed to the definition and recognisability of the brand. Research has shown that the Virgin name was associated with words such as 'fun', 'innovative', 'daring' and 'successful'. Virgin announced the establishment of a 'quadruple play' media company providing television, broadband, fixed-line and mobile communications through the merger of Branson's UK mobile interests with the UK's two cable companies. This Virgin company would have 9 million direct customers, 1.5 million more than BSkyB, and so have the financial capacity to compete with BSkyB for premium content such as sports and movies. Virgin tried to expand this business further by making an offer for ITV. Your mark, however, depends on your team's analysis of Virgin's position and its strategy for the future.
Group size: 3 to 4 members
Weighting: 20%
Submission: 10 to 15 ppt slides, with the assignment submission form, on 23 April 2025 via Canvas. Presentation arrangements will be advised by the respective lecturer.
Presentation day: starting on 23 April 2025; each group presentation must be video recorded.
Submission of video recording on Canvas: 3 days after the group presentation in class.

Assessment 2 (80%) Case study - WearWorld plc

In Autumn 2022, Joe Smith, Chief Engineer of WearWorld, led a cameraman around the product development labs of WearWorld's R&D facility just outside Oxford. Like all of the company's labs, they were off limits to protect against intellectual property theft. WearWorld has been working on a new product, the Zonna, a device combining headphones and a face visor.

Zonna - The Product

The Zonna is a hybrid headset, utilising Bluetooth technology, with an air-purifying visor: a post-Covid-19 development of a product category. WearWorld has been known to break down product boundaries through innovation, with several success stories in the past; part of that success was attributed to previous air purification efforts. Joe Smith has been at the forefront of innovation, having invented office equipment, GPS devices, and more. Zonna is a test for Joe. The market for such wearable devices is uncertain and appears to be directly affected by the vagaries of the economy and the impact of the recent pandemic, which is still lingering. WearWorld has been burned in the past when they tried to enter the market for vehicle air purification. They had installed 250 engineers in their facility in Oxford and invested £300 million in the development of an air-purifying system (pollen, brake and airborne dust, etc.) for car manufacturers to offer as an optional accessory to car buyers. The product failed miserably, largely due to the high manufacturing cost and the heavy burden on car battery power.
Zonna - The Design

The WearWorld engineers deployed ingenuity and all their experience in developing the smaller appliance needed for air purification. Internal canals running from each headphone transport a continuous stream of purified air to the nose and mouth within a mouthpiece that does not touch the face. Through many prototypes, the engineers finally arrived at a contact-free visor design that covers the nose and mouth. The more air is required, the more power is drained from the gadget's battery. To achieve a high level of air purification, the engineers developed tiny motors inside the headphone cavity that suck in outside air, purify it and send it down to the visor. The air filters are capable of removing ultrafine dust particles and pollutants down to 0.1 microns. The visor, connected to the headphones, was another breakthrough achieved after substantive testing and R&D investment. The battery was designed to provide effective power for up to 3 hours (music, external noise cancelling and air purification) under normal use (say, when commuting). During sports pursuits, with the increased exertion, the battery can only last about 1 hour. The rechargeable battery is replaceable, and WearWorld is considering offering replacements on a subscription basis. Joe was working on an improved battery, but this would probably take 18 months or so. Having invested £25 million in product development, they need to launch urgently…

Finally, the product's manufacturing cost depends heavily on production levels. The engineers have costed Zonna as follows:

Production (units)    Cost per unit made (£)
10,000                450
50,000                370
100,000               325
500,000               250
1,000,000             145

Zonna - Your Brief

WearWorld plc invited you, as a Business Strategy Consultant, to prepare a Business Plan specific to the product. The Board will meet and review your proposal after 25th September 2024.
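The stepwise unit costs above drive the economics of the 3-year financial forecast. The following Python sketch is illustrative only: the cost tiers are taken from the brief, but the launch price is a hypothetical assumption, not a figure from the case.

```python
# Cost tiers from the WearWorld brief: production volume -> cost per unit (GBP)
cost_per_unit = {10_000: 450, 50_000: 370, 100_000: 325, 500_000: 250, 1_000_000: 145}

ASSUMED_PRICE = 399  # hypothetical retail price, NOT given in the brief

for units, unit_cost in sorted(cost_per_unit.items()):
    total_cost = units * unit_cost                      # total manufacturing spend
    gross_margin = (ASSUMED_PRICE - unit_cost) * units  # revenue minus manufacturing cost
    print(f"{units:>9,} units: total cost £{total_cost:,}, "
          f"gross margin at £{ASSUMED_PRICE}: £{gross_margin:,}")
```

Note that at this assumed price the product is loss-making at the 10,000-unit tier (unit cost £450 exceeds the price), which is one way to motivate the sales-volume forecast in the plan.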
The CEO of WearWorld plc wants to know the following, specifically (see Marking Scheme below):
· The specific customer segment(s) for the products (quantified in terms of numbers and profiles)
· The clear value proposition (how the product differs) for each segment
· The competitors (direct and indirect)
· Marketing Strategy – e.g. Porter's generic strategy (Product, Pricing, Promotion & Distribution Channels)
· Promotional Strategy
· Product-specific financial forecast for 3 years (sales units, revenue streams, production and marketing costs)
· Spin-off products or services (such as battery subscriptions, extended warranties, etc.)

Group size: Individual work
Weighting: 80%
Word count: 2,500 +/- 10%
Submission Date: 4 June 2025 at 1159 hours

Assessment Method    Description of Item                                        % Weighting    Due on           Outcomes
Coursework           Group Presentation (ppt slides, maximum 10 to 15)          20%            23 April 2025    1, 2, 3
Coursework           Individual Student Report (maximum 2,500 +/- 10% words)    80%            4 June 2025      1, 2, 3, 4
INFS7900 Assignment 2 – Module 3 Code
Due: 9 May 2025 @ 3:00 PM AEST
Oral Assessment: Week 12, 19-23 May 2025
Weighting: 25%

Overview
The purpose of this assignment is to test your ability to use and apply SQL concepts to complete tasks in a real-world scenario. Specifically, this assessment will examine your ability to use SQL Data Manipulation Language to return specific subsets of information that exist in a database, and Data Definition Language to create a new relational schema. The assignment is to be completed individually.

Submission
Assignment 2 is made up of two parts.
· Part 1 will be submitted through an electronic marking tool called Gradescope, which will also be used for providing feedback.
· Part 2 is an oral assessment that will be completed during an in-person interview with a tutor during a practical session in Week 12 (after your Gradescope submission). Details below:

Part 1: Answer the questions on this task sheet and submit them through Gradescope. For this assignment, you will need to submit two types of files to the portal:
· Query Files:
o For each question in Sections A, B and C, and in Section D where indicated, you are required to submit a separate .sql or .txt file which contains your SQL query solution for that question (submit only one of these file types).
o Each file should contain only the SQL query (or queries) and no additional text or comments.
o Each file should be named as per the Filename description in the question.
o When submitting files to the autograder, select all of your .sql or .txt files as well as your .pdf file.
o It is recommended to write queries in a SQL/text file and test them in your phpMyAdmin zones before copying them into the Word document.
o Your queries must compile using MySQL version 8.0. This is the same DBMS software as is used on your phpMyAdmin zones.
o You may use any MySQL functions that have been used in class, in addition to those specified in the questions.
· Assignment PDF:
o Insert your answers for all Sections A-D into the template boxes on the Microsoft Word version of this assignment task sheet where appropriate. Export this document to a PDF and also upload it to the Gradescope autograder portal.
o Only subsections of Section D will be partially hand-marked from your PDF submission; however, the PDF is also a backup for Sections A, B and C in case of autograder failure.
o For Sections A, B and C, include a screenshot of the output of your query for each question in the space provided. Use your zones to generate the output.
o Please name your file 'Assignment_2.pdf'. Please do not alter the format or layout of this document, and ensure the name and SID boxes are completed.

Part 2 is an oral assessment to verify your understanding of the code you submitted in Part 1 Sections A, B and C.
· This will be an oral critique of your submitted code. In a short meeting with a member of the teaching staff during the Week 12 practical sessions, you will explain the work you submitted in Part 1 and discuss your choices.
· All oral assessments must be given live and will be recorded by the teaching team (i.e. on Zoom) for archiving purposes.

Marking
Assignment 2 is worth 25 course marks, and marking is made up of two parts. First, the marks available per section of Part 1 are as follows (note that INFS1200 differs from INFS7900):

INFS7900
Section A – SQL DML (SELECT)                    13 marks
Section B – SQL DML (UPDATE, INSERT, DELETE)     3 marks
Section C – SQL DDL                              3 marks
Section D – Critical thinking                    6 marks

Given these available marks, students must also achieve a pass (+/-) in Part 2, the oral critique, to be eligible to pass Assignment 2. Failure in Part 2 will result in your mark being capped at 12.5 marks (50% for this assignment).

Grading and Autograder feedback: Sections A, B, C and parts of D of this assignment will be graded via an autograder deployed on Gradescope.
However, we reserve the right to revert to hand marking using the PDF submission should the need arise. When you submit your code, the autograder will provide you with two forms of immediate feedback:
· File existence and compilation tests: Your code will be checked to see if it compiles correctly. If it fails one or more compilation tests, the errors returned by the autograder will help you debug. Note that code that fails to compile will receive 0 marks. No marks are given for passing the compilation tests.
· Column domain checking: The autograder will check whether the domain of the projected values for each attribute in your query results matches the expected domain. Make sure to carefully read the 'explanation' section in each question of the assignment, as it provides specific details about the expected domains for attributes in the result.
Submit your work to Gradescope early so you can use the test results to ensure your queries are compiling and you are on the right track. You will be able to resubmit to the autograder an unlimited number of times before the deadline.

Materials provided: You will be provided with the database schema and a simple data instance that will yield a result for every correct DML (SELECT) question. Visible test results in the autograder submission portal will check that the query compiles successfully. Because the autograder uses the same DBMS as your zones, you are encouraged to use your zones to develop your assignment answers.

Late penalties: Please consult the course profile for late penalties that apply to this assessment item.

Plagiarism: The University has strict policies regarding plagiarism. Penalties for engaging in unacceptable behaviour range from loss of grades in a course through to expulsion from UQ. You are required to read and understand the policies on academic integrity and plagiarism in the course profile (Section 6.1).
If you have any questions regarding an acceptable level of collaboration with your peers, please see either the lecturer or your tutor for guidance. Remember that ignorance is not a defence! You are permitted to use generative AI tools to help you complete this assessment task. However, if you do, please provide complete copies of your interactions with the AI tool in the space provided at the end of your submission. Please note that if you use generative AI but fail to acknowledge this by attaching your interaction to the end of the assignment, it will be considered misconduct, as you would be claiming credit for work that is not your own.

Task
For this assignment, you will be presented with the simplified schema of an authorisation application. BestTechLtd have designed and developed a simple authorisation management system, SecureAccess, to provide secure and sufficient access control within their organisational network. When a new employee joins an organisation using SecureAccess, they are registered and their personal details are stored in the Employee table. Administrative employees, whose additional information is stored in the AdministrativeEmployee table, can then assign appropriate roles to these employees through a role-granting process. Each role, defined in the Role table, comes with specific permissions that determine which company websites and resources the employee can access. This hierarchical permission structure ensures that employees only have access to the resources necessary for their job functions. BestTechLtd manage their authorisation by storing employee and permission data in a relational database management system (RDBMS) with the following schema:
· The Employee table records staff-specific information.
· Administrative employees are a specific type of employee with the authority to grant authorisation roles to employees.
· The Role table records typical roles that might exist within a business and is associated with the necessary permissions that a user (Employee) of the role will need.
· The Permission table maintains a record of all the websites in a company's network. A website URI and the RoleID associated with the website mean that the Role is granted permission to access, retrieve resources from, and perform operations on the website.

ERD for the BestTechLtd schema:
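The four tables described above can be sketched as DDL. The sketch below uses Python's stdlib sqlite3 purely for a self-contained illustration; the assignment itself targets MySQL 8.0 and provides its own schema, and the column names here (other than the URI and RoleID mentioned in the text) are assumptions, not the actual assignment schema.

```python
import sqlite3

# Illustrative sketch of the described schema (column names are assumptions;
# the real schema is given in the assignment materials and uses MySQL 8.0).
ddl = """
CREATE TABLE Employee (
    EmployeeID INTEGER PRIMARY KEY,
    Name       TEXT NOT NULL,
    RoleID     INTEGER REFERENCES Role(RoleID)
);
CREATE TABLE AdministrativeEmployee (
    EmployeeID INTEGER PRIMARY KEY REFERENCES Employee(EmployeeID)
);
CREATE TABLE Role (
    RoleID   INTEGER PRIMARY KEY,
    RoleName TEXT NOT NULL
);
CREATE TABLE Permission (
    URI    TEXT NOT NULL,
    RoleID INTEGER NOT NULL REFERENCES Role(RoleID),
    PRIMARY KEY (URI, RoleID)   -- one row per (website, role) grant
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(ddl)
conn.execute("INSERT INTO Role VALUES (1, 'Engineer')")
conn.execute("INSERT INTO Permission VALUES ('https://wiki.example.internal', 1)")
# Which sites can role 1 reach?
rows = conn.execute("SELECT URI FROM Permission WHERE RoleID = 1").fetchall()
print(rows)  # [('https://wiki.example.internal',)]
```

The composite primary key on Permission reflects the text's description: a (URI, RoleID) pair records that the role may access that website.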
158.739-2025 Semester 1: Assessment 3 and Assessment 4
Hand in: by midnight, May 25
Project Evaluation: 100 marks (50% of your final course grade)
Aims: reinforce and build on the data wrangling skills learned so far, covering the full process of data acquisition, data wrangling, data integration, data persistence using SQLite, and data analysis.

Assessment 3 and 4 overarching outline: The goal of these projects is the implementation of a full data analysis workflow using Python combined with SQLite database persistence. You are asked to preferably choose a problem domain that is aligned with your specialisation within the Master of Analytics (if relevant); otherwise, select a domain of interest to you. You may re-use some of the datasets from the previous assignment. Research what kinds of data sources are available for your selected domain. Subsequently, you are asked to (1) formulate questions that you would like answered, (2) acquire datasets from at least two different sources (at least one source must be dynamic, i.e. web-scraped or retrieved from a web API), (3) wrangle the data into a usable format and perform EDA, (4) integrate the datasets into one, (5) persist the data into a SQLite relational database with a suitable schema, and (6) perform group-by queries, pivot tables and cross-tabulation of the data to answer your research questions, together with a rich set of visualisations. Links to various dataset and web API repositories are provided on Stream. The analysis workflow you are asked to perform is illustrated in the diagram below:

Assessment 4 Requirements: Your research report must be in a Jupyter Notebook format and thus executable and repeatable. Clearly introduce your problem domain, articulate your research questions and provide an executive summary at the beginning. Follow the provided Jupyter notebook template.
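Steps (5) and (6) of the workflow, persisting wrangled data to SQLite and answering a research question with a group-by query, can be sketched with the standard library alone. This is illustrative only: the table, columns and figures are made up, and in the real project the records would come from your scraped/API-acquired, cleaned and integrated datasets.

```python
import sqlite3

# Hypothetical wrangled and integrated records (city, year, median salary).
records = [
    ("Auckland", 2023, 94_000),
    ("Auckland", 2024, 97_500),
    ("Wellington", 2023, 88_000),
    ("Wellington", 2024, 90_250),
]

conn = sqlite3.connect(":memory:")  # use a file path instead to persist on disk
conn.execute("""CREATE TABLE salary (
    city   TEXT NOT NULL,
    year   INTEGER NOT NULL,
    median INTEGER NOT NULL)""")
conn.executemany("INSERT INTO salary VALUES (?, ?, ?)", records)
conn.commit()

# Group-by query answering a research question: average median salary per city.
for city, avg in conn.execute(
        "SELECT city, AVG(median) FROM salary GROUP BY city ORDER BY city"):
    print(city, avg)
```

In a notebook you would typically do the persistence step via pandas (e.g. df.to_sql('salary', conn)) and build pivot tables and cross-tabulations either in SQL or back in pandas after reading the table out again.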
You must document and explain the reasoning behind the coding steps you are taking and provide explanations of all your graphs and tables as appropriate. Make sure you label all aspects of your graphs. The activities listed under the five stages in the workflow diagram above are a guide only. This means that operations like group-by statements as well as pivot tables could be part of the 'Data Wrangling' phase as EDA, and not only part of the data analysis phase. Finally, please run your report through an external spell checker, and feel free to use ChatGPT judiciously to help you, as discussed in class.

Assessment 4 Marking criteria: Marks will be awarded for different components of the project using the following rubric (Component / Marks / Requirements and expectations):
· Schema Definition (35 marks): design of a DB schema that captures all the data; use of correct data types; writing the data into SQLite tables; checking that the data has been persisted (SELECT statements); diversity of queries; readability and structure of results
· DB Views (10 marks): creation of two DB views; testing out the views
· Data acquisition: marks for using both APIs and web scraping; penalties will be applied for re-using examples
· Analysis and findings: rich interpretation and communication of findings and visualisations; approach to the analysis; creativity in problem solving; the degree of challenge undertaken
· BONUS: Big Data Processing Techniques (5 marks): demonstration of out-of-core processing; analysis of query performance issues and optimisations where necessary
School of Engineering
ENGMP512-25A: Advanced Materials Manufacture
Assignment

The assignment (20% of the overall mark) is divided into two parts: a short literature review (15%) and a flipped learning presentation (5%).

Short literature review
This assignment requires you to conduct a short review of the current literature on one of the topics listed below, which are connected to the material learnt in this paper and in other materials-related papers in the overall study programme. The assignment should consider the scientific principles and the relationship between processing, properties and microstructure of materials, analysing the latest advancements available in scientific journal articles.

Topics: Processing of Composites; Processing of Ceramics; Solidification Processing; Metallic Powder Consolidation. Example subtopics include grain refinement of aluminium alloys, sintering of high-entropy alloys, sintering of Si3N4, 3D printing of light metals, architectured metallic structures, and Al-based composite coatings.
Research School of Finance, Actuarial Studies and Statistics
ASSIGNMENT, Semester 1, 2025
STAT7055 Introductory Statistics for Business and Finance

INSTRUCTIONS TO STUDENTS

Due Date
• The assignment is due at 9:00am on Friday May 9.
• Late submission of the assignment is not permitted. An assignment submitted without an extension after the due date will receive a mark of 0.

Writing your Assignment
• The assignment is an individual piece of assessment and must be completed on your own.
• You are not permitted to use any form of tutoring services (e.g., online, in-person, etc.) or any AI tools (e.g., ChatGPT, etc.).
• You will be required to write a report in an R Markdown document that contains R code (with R code comments), R output and written text. An example of an R Markdown document, which you can use as a template, has been provided on Wattle.
• All R code must have accompanying R code comments that sufficiently describe what the code is doing.
• When answering the assignment questions in your report, you will need to include all the R code and R output that you used to calculate any answers, and you must also write your answers in proper sentences. For example, if you are required to calculate a sample mean, then you would include your R code for calculating the sample mean and the R output of the sample mean value, and you would also write a proper sentence in the report such as "The sample mean is equal to ...".
• Make sure to be clear and concise in your answers.
• A good way to approach writing your report is to imagine that you are a statistical consultant and that a client has asked you to do some statistical analyses. When presenting the results of your analyses to the client, you wouldn't just give them pages of R code, R output, calculations, etc. Rather, you should give them a proper report which clearly outlines and explains the results of the analyses and which also includes the R code and R output used to produce the results.
• Therefore, presentation is very important. Marks will be deducted for poorly presented reports.
• Once you have finished writing your report in your R Markdown document, you will need to render the document by pressing the Knit button in RStudio to create an HTML file of your report.
• Further to the above point, it is good practice to regularly Knit your R Markdown document as you write your report. This is useful for checking that it is rendering properly.

Submitting your Assignment
• Submission of the assignment will be through Wattle, and further details regarding assignment submission will be provided on Wattle.
• For submission you will need to submit two files: the R Markdown file of your report (i.e., a ".Rmd" file) and the rendered HTML file of your report produced by pressing the Knit button in RStudio (i.e., a ".html" file).
• Please name your two files "uNNNNNNN.Rmd" and "uNNNNNNN.html", where uNNNNNNN is your student number.
• No other file types will be accepted or marked, e.g., ".R", ".docx", ".RData", ".zip", etc. In particular, do not submit any compressed files.

Other Important Details
• You may only use built-in functions available in the default installation of R, and you are not permitted to use functions in any additional R packages (e.g., ggplot2).
• You must use the appropriate R functions (and not the statistical tables) to calculate any critical values or p-values used when performing any hypothesis tests.
• You must use R for all calculations.
• Round all final numeric answers to 4 decimal places. However, as you will be using R, keep all decimals during all intermediate steps to ensure the accuracy of your final numeric answer.
• Please use the help function if you want to learn more about a particular R function, e.g., enter help(mean) in the R console to learn more about the mean function.
• For questions that require writing mathematical symbols, you are welcome to use shorthand notation, provided you make the meaning clear (e.g., using "Mu" for μ, or "!=" for ≠).
• Answers (including hypotheses, explanations, conclusions, etc.) need to be written in the text of the R Markdown document and not in the R code comments or the R output.
• Do not print out entire data sets in your R Markdown document or HTML file, as this will only take up unnecessary space.

Question 1 [11 marks]
A zoologist is studying a particular species of deer. A random sample of deer were selected and, for each deer, their weight in kilograms (X1), their height in centimetres (X2), their sex (X3) and the region in which they live (X4) were recorded. You have been asked to perform some analyses on this data. The zoologist is certain that the population standard deviation of deer heights is equal to 14 centimetres, regardless of the region in which they live, and she is happy for you to assume that this is true. Further, she has a suspicion that deer weights are normally distributed for both males and females, but she is less certain of this. The data are stored in the file AssignmentData.RData in the data frame deer.df.

(a) [2 marks] Construct a single histogram to describe the distribution of the male deer weights. Make sure to give your plot a proper descriptive title and an appropriate axis label (do not just use the default title or label). Based on this histogram, comment on whether the zoologist's claim about the distribution of deer weights is correct. Make sure to provide a clear justification for your answer.

(b) [3 marks] Test whether the population mean weight of male deer that live in the forest and are shorter than 77.15 centimetres is less than 75.1 kilograms. Clearly state your hypotheses, making sure to define any parameters, and use a significance level of α = 6%. Do not use any R functions that are designed to perform hypothesis tests.
(c) [3 marks] Test whether the population mean height of deer that live in the forest is greater than the population mean height of deer that live in the mountains by more than 1 centimetre. Clearly state your hypotheses, making sure to define any parameters, and use a significance level of α = 8%. Do not use any R functions that are designed to perform hypothesis tests.

(d) [3 marks] Among female deer, test whether the population proportion that live in the plains is equal to 0.35. Clearly state your hypotheses, making sure to define any parameters, and use a significance level of α = 9%. Do not use any R functions that are designed to perform hypothesis tests.

Presentation [4 marks]
Marks will be allocated for how well presented your report is, e.g., clear and distinct headings, concise answers with information clearly communicated, all R code sufficiently commented, etc.
Master of Business Administration Assignment Submission Form

Module Code: MMN7031SR
Module Title: Global Strategy and Innovation
Assessment Title: Assessment 2: Individual Report
Assessment due date: June 2025

MN7031SR Global Strategy and Innovation, Academic Year 2025/26
Assessment 2, Individual Report
Word count: 2,500 (+/- 10%)
First Marker:        Second Marker:
Title of report: Business Plan of WearWorld

Assessment criteria (level of achievement recorded by the 1st and 2nd markers):
· Executive Summary and Recommendations (15 marks): an overview of the key points and recommendations
· Customer Value Proposition (10 marks): a report on the product offering, the consumer problem, and the unique selling proposition of the product(s)
· Customer Segmentation (15 marks): a report on the detailed customer segment(s) of the product(s) offering
· Marketing Strategy (25 marks): a strategic plan with Porter's generic strategy
· Financial Forecasts (20 marks): a financial report for 3 years including forecast of sales (in units), revenue projection, and cost of production
· Evidence of research (5 marks): provide evidence of secondary research
· Presentation (10 marks): appropriate academic writing and language; in-text citation and referencing
Total (100 marks)

Areas for Improvement
From First Marker: Knowledge and understanding; Analysis and evaluation
From Second Marker: Knowledge and understanding; Analysis and evaluation
Agreed Mark
First marker's marks/date:        Second marker's marks/date: