Assessment Proforma 2024-25

Key Information
Module Code: CMT316
Module Title: Applications of Machine Learning: Natural Language Processing and Computer Vision
Assessment Title: Coursework 1
Assessment Number: 1 of 2
Assessment Weighting: 50%
Assessment Limits: No limit for Part 1 (but the answers are expected to be quite brief). Part 2 involves producing a report of up to 1200 words along with Python script.

The Assessment Calendar can be found under ‘Assessment & Feedback’ in the COMSC-ORG-SCHOOL organisation on Learning Central. This is the single point of truth for (a) the hand-out date and time, (b) the hand-in date and time, and (c) the feedback return date for all assessments.

Learning Outcomes
The learning outcomes for this assessment are as follows:
1. Implement and evaluate machine learning methods to solve a given task
2. Explain the fundamental principles underlying common machine learning methods
3. Choose an appropriate machine learning method and data pre-processing strategy to address the needs of a given application setting
4. Reflect on the importance of data representation for the success of machine learning methods
5. Critically appraise the ethical implications and societal risks associated with the deployment of machine learning methods
6. Explain the nature, strengths and limitations of an implemented machine learning technique to an audience of non-specialists

Submission Instructions
The coversheet can be found under ‘Assessment & Feedback’ in the COMSC-ORG-SCHOOL organisation on Learning Central. All files should be submitted via Learning Central. The submission page can be found under ‘Assessment & Feedback’ in the CMT316 module on Learning Central. Your submission should consist of multiple files. This coursework consists of a portfolio divided into two parts.

Part (1) consists of selected homework similar to the one handed in throughout the course.
The final deliverable consists of a single PDF file, which may include methodology, snippets of Python code and solved exercises.

Part (2) consists of a machine learning project where students implement a basic machine learning algorithm for solving a given task. The deliverable is a zip file with the code, and a written summary (up to 1200 words) describing solutions, design choices and a reflection on the main challenges faced during development.

Description | Type | Name
Coversheet (Compulsory) | One PDF (.pdf) file | Coversheet.pdf
Part 1 (Compulsory) | One PDF (.pdf) file | Part1.pdf
Part 2 (Compulsory) | One ZIP (.zip) file containing the Python code | Part2.zip
Part 2 (Compulsory) | One PDF (.pdf) file containing the Part 2 report | Part2.pdf

Any code submitted will be run in Python 3 (Linux) and must be submitted as stipulated in the instructions above. Any deviation from the submission instructions above (including the number and types of files submitted) may result in a mark of zero for the assessment or question part. If you are unable to submit your work due to technical difficulties, please submit your work via e-mail to [email protected] and notify the module leader.

Assessment Description
In this coursework, students demonstrate their familiarity with the topics covered in the module via two separate parts, with the first part worth 40% and the second part worth 60%.

Part 1 (40%)
In Part 1, students are expected to answer two practical questions.

1. Your algorithm gets the following results in a classification experiment. Please compute the precision, recall, f-measure and accuracy *manually* (without the help of your computer/Python; please provide all steps and formulas). Include the process to get to the final result.
(15 points)

Id | Prediction | Gold
1 | True | True
2 | True | True
3 | False | True
4 | True | True
5 | False | True
6 | False | True
7 | True | True
8 | True | True
9 | True | True
10 | False | False
11 | False | False
12 | False | False
13 | True | False
14 | False | False
15 | False | True
16 | False | False
17 | False | False
18 | True | False
19 | True | False
20 | False | False

2. You are given a dataset (named “real_estate”) with different house properties (dataset available in Learning Central). Your goal is to train machine learning models on the training set to predict the house price of a unit area in the test set. The problem should be framed as both regression and classification. For regression, the house price of a unit area is given; for classification, there are two labels (expensive and not-expensive) depending on the house price of a unit area: expensive if it is higher than or equal to 30, and not-expensive if it is lower than 30. The task is therefore to train two machine learning models (one regression and one classification) and check their performance. The student can choose the models to solve this problem. Write, for each of the models, the main Python instructions to train and predict the labels (one line each; no need to include any data preprocessing instructions in the pdf) and the performance in the test set in terms of Root Mean Squared Error (regression) and accuracy (classification). While you will need to write the full code to get to the results, only these instructions are required in the pdf. (25 points)

Part 2 (60%)
In Part 2, students are provided with a text classification dataset (named “bbc_news”). The dataset contains news articles split into five categories: tech, business, sport, politics and entertainment. Based on this dataset, students are asked to preprocess the data, select features, and train and evaluate a machine learning model of their choice for classifying news articles.
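Returning to Part 1, Question 1: the calculation there must be done by hand, but a snippet like the following can be used to verify a manual result afterwards. The counts and labels below are copied from the table above; this is only an illustrative self-check, not part of the required submission.

```python
# Self-check for Part 1, Question 1: count TP/FP/FN/TN from the table,
# then compute precision, recall, f-measure and accuracy.
predictions = [True, True, False, True, False, False, True, True, True, False,
               False, False, True, False, False, False, False, True, True, False]
gold =        [True, True, True, True, True, True, True, True, True, False,
               False, False, False, False, True, False, False, False, False, False]

tp = sum(p and g for p, g in zip(predictions, gold))          # predicted True, gold True
fp = sum(p and not g for p, g in zip(predictions, gold))      # predicted True, gold False
fn = sum(not p and g for p, g in zip(predictions, gold))      # predicted False, gold True
tn = sum(not p and not g for p, g in zip(predictions, gold))  # predicted False, gold False

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f_measure = 2 * precision * recall / (precision + recall)
accuracy = (tp + tn) / len(gold)
```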
Students should include at least three different features to train their model; one of them should be based on some sort of word frequency. Students can decide the type of frequency (absolute or relative, normalized or not) and the text preprocessing for this mandatory word-frequency feature. The remaining two (or more) features can be chosen freely. Then, students are asked to perform feature selection to reduce the dimensionality of all features.

Note: Training, development and test sets are not provided. It is up to the student to decide the evaluation protocol and partition (e.g., cross-validation or pre-defining a training, development and test set). This should be explained in the report.

Deliverables for this part are the Python code including all steps and a report of up to 1200 words. The Python code should include the Python scripts and a small README file with instructions on how to run the code in Linux. Jupyter notebooks with clear execution paths are also accepted. The code should take the dataset as input and output the results according to the chosen evaluation protocol. The code will count for 25% of the marks for this part (15 points) and the report for the remaining 75% (45 points). The code should contain all necessary steps described above: to get full marks for the code, it should work properly and clearly perform all required steps.

The report should include:
1) Description of all steps taken in the process (preprocessing, choice of features, feature selection, and training and testing of the model). This description should be such that one could understand all steps without looking at the code. (15 points - The quality of the preprocessing, features and algorithm will not be considered here)
2) Justification of all steps. Some justifications may be numerical; in that case a development set can be included to perform additional experiments. (10 points - A reasonable reasoned justification is enough to get half of the marks here.
The usage of the development set is required to get full marks)
3) Overall performance (accuracy, macro-averaged precision, macro-averaged recall and macro-averaged F1) of the trained model on the dataset. (10 points - Indicating the results, even if very low, is enough to get half of the marks here. A minimum of 65% accuracy computed on a test set of at least 10% of the dataset is required to get full marks)
4) Critical reflection on how the deliverable could be improved in the future and on possible biases that the deployed machine learning model may have. (10 points - The depth and correctness of insights related to your deliverable will be assessed)

The report may include tables and/or figures.

Assessment Criteria
Credit will be awarded against the following criteria.
➢ Part 1. The main criterion for assessment is the correctness of the answers, for which the working is also required. Full marks will be given for answers including both the correct answer and a correct justification or methodology.
➢ Part 2. This part is divided into Python code (25%) and an essay (75%). The code will be evaluated based on whether it works or not, and whether it minimally contains the necessary steps required for the completion of Part 2. Four items will be evaluated in the essay, whose weights and descriptions are indicated in the assessment instructions. The main criteria for evaluating those items will be the adequacy of the answer with respect to what was asked, and the justification provided.

High Distinction (80%+): Full understanding of all the concepts, correct answers and methodology, well-documented and working code, accurate justification and description of all steps, and critical analysis.
Distinction (70-79%): Excellent understanding of all the concepts, correct answers and methodology, well-documented and working code, accurate justification and description of the steps with critical analysis.
Merit (60-69%): Good understanding of all the concepts, working code, justification and description of steps, and analysis.
Pass (50-59%): A few errors in questions and code, methodology with issues, and a description of steps and justification that is not detailed or has issues.
Marginal Fail (40-49%): Reasonable attempts to address the problem, but code with clear errors, flawed methodology, incorrect solutions, and lack of a clear description or justification of steps.
Fail (0-39%): Code with errors, flawed methodology, incorrect solutions, and no clear description or justification of steps.
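The mandatory word-frequency feature in Part 2 can take many forms. As one hedged illustration (pure Python, relative frequencies over a small fixed vocabulary — the actual dataset, preprocessing, and vocabulary choices are entirely up to the student, and the vocabulary below is invented for the example):

```python
from collections import Counter

def word_freq_features(text, vocabulary):
    """Relative word-frequency vector for `text` over a fixed vocabulary.

    One possible version of the mandatory word-frequency feature:
    lowercase whitespace tokenisation, then length-normalised counts.
    Absolute counts, tf-idf, etc. are equally valid choices.
    """
    tokens = text.lower().split()
    counts = Counter(tokens)
    total = max(len(tokens), 1)  # avoid division by zero on empty text
    return [counts[w] / total for w in vocabulary]

vocab = ["match", "market", "election"]  # illustrative vocabulary, not from bbc_news
vector = word_freq_features("The market rallied as the election neared", vocab)
```

In a full solution this would be one column block of a larger feature matrix, alongside the two or more freely chosen features and a feature-selection step.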
21-259: Calculus in Three Dimensions
Lecture #5, Spring 2025

Vector Functions and Space Curves

Definition: A vector function r(t) : R → R^n is a function whose domain is a set of real numbers and whose range is a set of vectors. For n = 3, r(t) = ⟨f(t), g(t), h(t)⟩ = f(t)ı + g(t)ȷ + h(t)k is a vector function. The scalar functions f(t), g(t), and h(t) are the component functions of r(t).

Example 1. Find the domain of the vector function r(t) = .

If r(t) = ⟨f(t), g(t), h(t)⟩, then lim_{t→a} r(t) = ⟨lim_{t→a} f(t), lim_{t→a} g(t), lim_{t→a} h(t)⟩, provided the limits of the component functions exist.

Example 2. Find

A vector function r(t) is continuous at t = a if lim_{t→a} r(t) = r(a).

Definition: Let f, g, and h be continuous functions on an interval I. Then the set C of all points (x, y, z) in space, where x = f(t), y = g(t), z = h(t), is called a space curve. The equations above are the parametric equations of C, and t is called a parameter. Any continuous vector function r(t) defines a space curve.

Example 3. What are the parametric equations of a circle of radius a in the xy-plane, centered at the origin?

Example 4. What is the curve given by r(t) = cos t ı + sin t ȷ + t k?

Example 5. Find a vector function that represents the curve of intersection of the cylinder x² + y² = 1 and the plane y + z = 2.

Example 6. Find a vector function that represents the curve of intersection of the paraboloid z = 4x² + y² and the parabolic cylinder y = x².

Calculus of Vector Functions

Definition: The derivative r′ of a vector function r is given by

    r′(t) = lim_{h→0} [r(t + h) − r(t)] / h,

provided the limit exists. For any value of t, the vector r′(t) is the tangent vector to the curve defined by r, provided that r′(t) exists and r′(t) ≠ 0. The vector T(t) = r′(t) / |r′(t)| is the unit tangent vector to r(t).

Theorem: If r(t) = ⟨f(t), g(t), h(t)⟩ = f(t)ı + g(t)ȷ + h(t)k, where f, g, and h are differentiable functions, then r′(t) = ⟨f′(t), g′(t), h′(t)⟩ = f′(t)ı + g′(t)ȷ + h′(t)k.

Example 7. For the vector function r(t) = , find r′(t), and find T(0).

Example 8. Find parametric equations for the tangent line to the curve x = ln t, y = 2√t, z = t² at the point (0, 2, 1).
Theorem: Suppose u and v are differentiable vector functions, c is a scalar, and f is a real-valued function. Then
1. d/dt [u(t) + v(t)] = u′(t) + v′(t)
2. d/dt [c u(t)] = c u′(t)
3. d/dt [f(t) u(t)] = f(t) u′(t) + f′(t) u(t)
4. d/dt [u(t) · v(t)] = u(t) · v′(t) + u′(t) · v(t)
5. d/dt [u(t) × v(t)] = u(t) × v′(t) + u′(t) × v(t)
6. d/dt [u(f(t))] = f′(t) u′(f(t))

Example 9. If r(t) ≠ 0, show that

Example 10. Show that if |r(t)| = c (where c is a nonzero constant), then r′(t) is orthogonal to r(t) for all t.

Example 11. Show that if r is a vector function such that r′′ exists, then

Definition: The definite integral of a continuous vector function r(t) = ⟨f(t), g(t), h(t)⟩ can be defined in much the same way as for real-valued functions, except that the integral is a vector:

    ∫_a^b r(t) dt = ⟨∫_a^b f(t) dt, ∫_a^b g(t) dt, ∫_a^b h(t) dt⟩.

Example 12. Evaluate
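A worked sketch of Example 10, reconstructed here from the standard two-line argument using rule 4 of the theorem above:

```latex
% If |r(t)| = c for all t, then r(t) \cdot r(t) = |r(t)|^2 = c^2 is constant.
% Differentiating both sides with the dot-product rule (rule 4):
\frac{d}{dt}\bigl(r(t)\cdot r(t)\bigr)
  = r(t)\cdot r'(t) + r'(t)\cdot r(t)
  = 2\,r(t)\cdot r'(t) = 0,
% so r(t) \cdot r'(t) = 0, i.e. r'(t) is orthogonal to r(t) for all t.
```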
Java Lab 17: More Collections

This lab uses Collections. Create a project named Lab17. Copy the classes and data file from Lab16. For the problems that re-write the search methods, you should comment out the old code, just in case, so things still work.

1. Create and new up three Map objects as HashMaps: byNameMap, byYearMap, and byGenreMap.

2. In the readMovies() method, along with populating the ArrayList of Movies, also populate byNameMap. The key is the Movie name and the value is the Movie. Do this in the same while loop. If there are duplicate names, just re-insert the movie – this will lose some data, but let's go with that.

3. Rewrite the searchByName() method to use byNameMap for the lookup instead of searching the movieList. Create and return an ArrayList with just one movie, the one returned from byNameMap (so it doesn't break the other code). Test this to make sure it's working.

4. Again in readMovies(), populate byYearMap, still in the same while loop. The key is the Movie year and the value is an ArrayList of Movie objects. Do the following steps:
- If the year is already in byYearMap, get its value – this is an ArrayList – and add the current Movie object to that ArrayList. Because Maps return a reference to the ArrayList, you don't have to reinsert it (yep, it breaks encapsulation for the Map). In fact, see if you can do this in one line.
- If the year is not yet in byYearMap, create a new ArrayList, add the current Movie object to it, and insert this year and ArrayList into byYearMap.

5. Rewrite the searchByYear() method to use byYearMap for the lookup instead of searching the movieList. Ignore the MOVIE_COUNT limit – this time, return *all* of the movies for this year. Test this to make sure it's working.

6. One more time: in readMovies(), in the while loop, populate byGenreMap. The key is the genre and the value is an ArrayList of Movie objects.
Do this the same way you populated byYearMap, but with the following twist: write a for loop over the current movie's genres – remember, this is an ArrayList of genres. For each genre in that list, follow the steps you used for byYearMap, but using the current genre.

7. Rewrite the searchByGenre() method to use byGenreMap for the lookup instead of searching the movieList. Ignore the MOVIE_COUNT limit – this time, return all the movies in this genre.

8. Write a method named displayTotals() that prints out the number of items in each data structure (movieList, byNameMap, byYearMap, and byGenreMap). Call it in the main program, after the user chooses Quit. Here's the output of mine:
Mathematics Analysis and Approaches
Internal Assessment, Standard Level

Investigating the Presence of the Golden Ratio in Architectural Landmarks

Research question: To what extent does the design of architectural landmarks align with the Golden Ratio?

1. Introduction

Context and Relevance
The Golden Ratio of 1.618 is widely associated with beauty and harmony, and it appears in visually pleasing forms in nature. After the Golden Ratio's properties became known, many designers applied this principle in works such as architecture and painting. As an IB Visual Arts student, I compared the paintings of artists who specialised in using the Golden Ratio to create beauty in their compositions. I was inspired by the Golden Ratio and combined it with mathematical principles to bring out how it has influenced art and architectural design. It has been historically associated with both natural phenomena and man-made structures. From the Parthenon in Ancient Greece to the Taj Mahal in India, the use of the Golden Ratio has been the subject of debate. Because of its abstract nature, some scholars have argued that its appearance is a purposeful design choice, while others insist that it is a mathematical coincidence or a man-made explanation. By examining these claims, this study explores the connection between mathematics and art in world-famous architecture.

Objective
This investigation explores whether famous architectural structures exhibit dimensions approximating the Golden Ratio (ϕ) and whether this alignment is intentional or coincidental. By analyzing key dimensions in several landmarks, this paper aims to provide geometrical insights into the historical and aesthetic role of typical architectural design.

Mathematical Tools
Ratios are used to describe the proportions of line segments in the buildings. GeoGebra was used for geometric analysis, while spreadsheets were employed for ratio calculations and statistical analysis.
This investigation measures dimensions from selected landmarks and compares them to ϕ; alignment and intentionality are assessed in terms of deviations from ϕ.

2. Background Theory

2.1 Definition of the Golden Ratio
The golden ratio (ϕ), also known as Phi, is closely related to the Fibonacci sequence and represents a unique proportionality in which the ratio of the larger part to the smaller part equals the ratio of the whole to the larger part:

    (a + b)/a = a/b = ϕ,  where a and b are lengths such that a > b > 0.

By rearranging this formula, ϕ can be defined in terms of itself[1]: dividing through gives ϕ = 1 + 1/ϕ, i.e. ϕ² − ϕ − 1 = 0, whose positive root is ϕ = (1 + √5)/2 ≈ 1.618.

Table 1: Calculation of ϕ

The ratio ϕ is unique because of its self-similarity and its role in the Fibonacci sequence 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, ..., where the ratio of successive terms converges to ϕ. As larger terms are used, the ratio between consecutive numbers gets closer to 1.618; for instance 21/13 ≈ 1.615 and 55/34 ≈ 1.618.

When the golden ratio is extended to two-dimensional representations (xy, xz, or yz), the golden rectangle is obtained. A rectangle is considered a golden rectangle if the ratio of its long side to its short side is equal to ϕ. Further geometric shapes can be constructed from it, such as the Golden Triangle, the Golden Rectangle and the Golden Spiral[2]. A golden rectangle is a rectangle with side lengths in the golden ratio, namely 1 : ϕ, approximately 1 : 1.618. The golden rectangle has never been utilized to analyze the generative two-dimensional elements of the three-dimensional forms of buildings: nearly all analyses in this field have centered on the overall dimensions of the elevation or the arrangement of the two-dimensional elements, with the third dimension being neglected. The Golden Spiral is based on the proportion of the golden ratio and is constructed by repeatedly halving (or doubling) the square.
The irrational number combines with this symmetry: extending the diagonal of the half-square produces a new rectangle whose added width is 0.618 of the original square's width. The result is a rectangle in which the ratio of length to width is 1.618 : 1[1].

2.2 Applications in Architecture
Throughout history, architects, philosophers, and mathematicians also used and recommended other proportional systems. For instance, Plato attached great significance to two geometrical ratios: the double progression 1:2:4:8 and the triple progression 1:3:9:27, which are applied in musical proportions. Architectural design often reflects proportional systems, with the Golden Ratio being a popular candidate for aesthetic harmony. Since the golden section appears in much natural beauty, such as flowers, seashells and some fruits, some artists may even look to nature for design. In many architectural designs, proportional ratios are used to describe insights, since qualities and quantities are perceived indirectly. Theoretically, ϕ can be applied to facades, room dimensions, and structural layouts; this irrational number, and ratios such as 5:3, 8:5 or 4:9, can easily be found in many buildings[4]. Historically, Renaissance figures like Leonardo da Vinci and Palladio explored ϕ in their work. This investigation examines whether such proportions persist in selected landmarks. For example, the gigantic pyramids of ancient Egypt, one of the seven wonders of the world, are also said to follow the golden ratio: Khafre's pyramid corresponds closely to a right triangle with the relevant side ratios. Artists can change the number, scale, and location of golden rectangles to generate new 3-D shapes with more varied changes. Two-dimensional shapes can be deduced from these golden rectangles to create the composition.
3. Methodology

3.1 Selection of Structures
The buildings chosen for investigation come from different parts of the world and different periods: the Parthenon (Greece), the Taj Mahal (India) and the Guggenheim Museum (USA).

The Parthenon is one of the best-known ancient geometric examples of the golden spiral and is also a typical example of the Greek temple. Thales and Pythagoras introduced the basic theory of mathematics and geometry, which promoted the application of the Golden Section[1].

The Taj Mahal, a mausoleum made of white marble, was constructed from 1631 to 1648 by the Mughal emperor Shah Jahan in remembrance of his beloved wife. It stands on a 22-feet-high, 313-feet-square platform, with corner minarets 137 feet tall and a central inner dome 81 feet high and 58 feet in diameter. The main inner chamber is an octagon with 7.3-metre (24 ft) sides, with the design allowing for entry from each face; the main door faces the garden to the south.

The Guggenheim Museum in the USA is one of the most famous buildings said to apply the Golden Ratio. It has a triangular gallery inside with six floors, and, interestingly, its length and width increase as the floors go up; for example, the width increases from 25 feet on the lowest floor to 32 feet on the highest (Wikipedia). Its spiral form provides a significant contrast with the surrounding buildings, with a combination of triangles, circles and squares that corresponds to the concept of organic architecture.

Table 2: Chosen architectures

3.2 Data Collection
After identifying the specific buildings for study, I obtained measurements of salient architectural details (e.g., elevation dimensions, column ratios, and window ratios) from building plans, scaled images, and credible online references. If specific dimensions were not available, approximate measurements were taken using scaling-tool software.
3.3 Analysis Method
The analysis method is to assemble the measurement data for each building so that further quantities can be calculated or deduced. Key dimensions (a and b) are identified for each structure — for example, the length and the width, or the heights. The ratio a/b is then calculated and compared to ϕ. The following formula is used to quantify deviations from the Golden Ratio 1.618:

    deviation (%) = |ratio − ϕ| / ϕ × 100.

This method can be further used to plot histograms and figures using more mathematical tools.

4. Results
Below are some Golden Rectangles with a length-to-width ratio of ϕ, drawn using Desmos. Similarly, when designing a three-dimensional draft of a building, calculating the size at different scale factors while considering harmony, symmetry and geometry would be an efficient way to design.

Figure 1: Golden Rectangle generated by Desmos
Figure 2: The Golden Spiral and its expansion

In addition, the Golden Spiral can be created from a formula and the ratio in the Golden Rectangle. The golden spiral can be drawn with squares: draw squares with side lengths following the Fibonacci sequence and connect adjacent corners of consecutive squares with quarter arcs. As more squares and arcs are added, the golden spiral emerges.

4.1 Parthenon
Most of its proportions follow a ratio of 9:4 (= 2.25). Its height, from the level of the stylobate to the top of the pediments, was 13.72 m. Dimensions of the stylobate of another ancient temple of the same period were found: 30.88 m × 69.50 m; the axial spacing of the external columns is 4.29 m on the fronts (3.68 m at the corners) and 4.29 m on the flanks (3.69 m at the corners). The lower diameter of the exterior columns is 1.91 m (1.95 m at the corners), the height of the exterior columns is 10.43 m, and the height of the entablature is 3.30 m. The deviation can be calculated as below.
These two pictures show the fitted Golden Spiral and Golden Rectangle in the building: not only do the part and the whole indicate the proportions, but the vertical and horizontal elements also combine to present the geometric pattern. The width of the frontal columns is 1.9 m while the spacing between two columns is 4.3 m, leading to a ratio of 4.3/1.9 ≈ 2.26, which is very close to 9/4[3]. https://www.goldennumber.net/parthenon-phi-golden-ratio/

The modelling of the Parthenon is shown below. Some coordinates were picked to model the dimensions of the building (not to scale) — (0,140), (0,200), (0,278), (253,278), (175,278) — from which the length ratios can be calculated.

Figure 3: Length-ratio modelling of the Parthenon

Using the spiral equation to fit the model in Desmos, by adjusting the parameters, part of the building can be fitted to the equation r = −4.7e^{bθ}, with b = , for which the resulting ratio is about 1.674. The deviation is |1.674 − 1.618| / 1.618 × 100% ≈ 3.4%.

Figure 4: Spiral modelling of the Parthenon

The following draft shows how the technique was used to design the Parthenon by Leonardis[1], showing how the square is changed by its sides to create the Golden Section. In his work, by doing the calculation and interpretation, the dimension of the stylobate is 69.503 m in Dinsmoor's field measurement and 68.911 m in Dinsmoor's calculation. It is clear that the width of the temple interior is one half of the side of the square on the stylobate. Besides, the setbacks on the gable of the room are almost one fourth of the side of the half-square on the krepidoma.

Figure 5: Sketch deducing the design of the ancient temple
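The ratio-and-deviation computation used in Sections 3.3 and 4 can be sketched as a small helper. This is an illustrative reconstruction; the function and variable names are mine, not from the original spreadsheet.

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # the Golden Ratio, approximately 1.6180

def deviation_from_phi(a, b):
    """Percent deviation of the ratio a/b from the Golden Ratio.

    `a` is the larger dimension and `b` the smaller one, as in Section 3.3.
    """
    ratio = a / b
    return abs(ratio - PHI) / PHI * 100

# Example: consecutive Fibonacci terms give ratios close to phi,
# so their deviation is small.
fib_dev = deviation_from_phi(55, 34)
```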
MFIN7034 Problem Set 3 – Risk Analysis
Version: 2025/02/25
Due Date: 2025/03/04 23:55:00 UTC+8

This problem set aims to provide some experience applying machine learning methods in risk analysis. The dataset “credit_risk.csv” is available to you on Moodle. Your main task is to establish machine learning models that predict the default label using available information (covariates). A table of variable explanations is provided here:

age — Age of borrower — Age in number of years
edu — Education level — 0: below high school, 1: high school, 2: college, 3: master, 4: above master
gender — Gender — 0: female, 1: male
housing — Housing ownership — 0: not own, 1: own
income — Income — Monthly income level
job_occupation — Job type — 0: unemployed/temporarily employed, 1: employed, 2: manager/senior worker
past_bad_credit — Historical default label — 0: non-default, 1: default
married — Marital status — 0: unmarried, 1: married
default_label — Default indicator — 0: non-default, 1: default

Submission format: an .ipynb notebook with runnable code and all the steps shown, and a PDF report. The final report should contain results generated by your program, in simple, presentable, coherent English with clean graphs. Proper visualization and clear interpretations and discussions — such as explaining why a factor can predict default, or what your logic is in pursuing a higher AUC — will also be graded.

1. Machine Learning Trials (60 Marks)
The first part of this problem set contains three practical tasks for machine learning algorithm applications:

1.1 Logistic Model (25 Marks)
Run a logistic regression: regress the default label on the available variables. Besides the original variables, also try to add interaction-term variables and/or non-linear transformation variables (polynomials, log transformations, dummy variables, etc.) to the model. Summarize your result. Obtain prediction values from the regression above. Compute and plot the ROC curve. Compute the AUC value. Explain your main results.
Also compare the AUC performance of different model specifications. Briefly discuss the outcomes.

1.2 SVM/Random Forest (15 Marks)
You might wonder whether non-linearity in model specifications can help. Try the SVM or Random Forest method; you can select either one. Then report the key parameters of your model, the AUC value, and the ROC plot as your main result.

1.3 LightGBM (20 Marks)
LightGBM has been one of the most popular gradient boosting algorithms since it was developed. This algorithm is very popular on Kaggle and also productive in real-world production scenarios. Try the LightGBM method. Describe the procedure in detail, including data preprocessing, model specification, feature selection and hyper-parameter tuning. Report the AUC value and plot the ROC curve. Compare this model's performance with the outcomes of the previous two questions.

2. Deeper Explorations (40 Marks)
Think deeper, ask further, and explore more:

2.1 Data Preprocessing (15 Marks)
Describe in detail the target of each step of the data preprocessing procedures for the Logistic model and the LightGBM model respectively. Note that the procedures should match your code in Questions 1.1 and 1.3. An example answer would be in the following format:
For the Logistic model:
  …: …;
  Standardization: In order to …
  …: …
For the LightGBM model: …

2.2 Feature Importance Analysis (15 Marks)
For each model you use in Questions 1.1, 1.2 and 1.3, list one model-dependent method to provide feature importance measurements for the feature inputs. Also use the nominated method to output the feature importance ranking for the top 5 features. You will produce a table like this (as an example):

Model | 1st | 2nd | 3rd | 4th | 5th
Logistic | age | edu | … | … | …
SVM/Random Forest (the one you used) | … | … | … | … | …
LightGBM | age | edu | … | … | …

2.3 Go Deeper towards Feature Importance Analysis! (10 Marks)
Do you think there could be any method that can apply to all four models above (i.e., Logistic regression, SVM, random forest, LightGBM)?
Please discuss your ideas and thoughts. The mark for this question will be given very generously, so if your answer is yes, just give it a try and show what you can get!
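One family of answers to Question 2.3 is model-agnostic importance measures such as permutation importance, which only needs a fitted model and a scoring function and therefore applies equally to logistic regression, SVM, random forests and LightGBM. A minimal sketch on synthetic data (the real assignment would use credit_risk.csv; the data, column count and model choice here are invented for illustration):

```python
# Model-agnostic feature importance via permutation: shuffle one column at a
# time and measure how much the score drops. Only .predict/.score are needed,
# so the same code works for any of the four model classes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))                 # three synthetic covariates
logits = 2.0 * X[:, 0] + 0.1 * X[:, 1]      # only the first feature matters much
y = (logits + rng.normal(size=n) > 0).astype(int)

model = LogisticRegression().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]  # most important first
```

Swapping `LogisticRegression` for an SVM, random forest, or LightGBM classifier leaves the importance code unchanged, which is exactly the property the question asks about.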
ECON 513 Spring 2025 Problem Set 3
Due 3/6/2025 Thu 11:59pm via Brightspace

1. Consider a linear model:

    y_i = x_i'β + ε_i,  i = 1, ..., n,

where x_i ∈ R^k and E(x_i ε_i) = 0. Suppose we have a valid instrumental variable z_i ∈ R^l. Consider the 2SLS estimator

    β̂_2SLS = (X'Z(Z'Z)^{-1}Z'X)^{-1} X'Z(Z'Z)^{-1}Z'y.

Assume all regularity conditions hold.
(a) Prove that if l = k, i.e., the model is just-identified, the 2SLS estimator simplifies to the IV estimator β̂_IV = (Z'X)^{-1}Z'y.
(b) Prove the consistency of 2SLS.
(c) Prove the asymptotic normality of 2SLS.

2. Consider a linear regression model with classical measurement error. We want to estimate the linear model

    y_i = x*_i'β + ε_i

with E(ε_i x*_i) = 0, but we do not observe x*_i, only a noisy but unbiased measure of it,

    x_i = x*_i + v_i,

where E(v_i) = 0, v_i ⊥ x*_i, v_i ⊥ ε_i, and E(x*_i x*_i') is of full rank. Hence, we can at best estimate the linear model

    y_i = x_i'β + u_i.    (1)

(a) Find the probability limit of the least squares estimator for β in (1), as a function of E(x*_i x*_i'), E(v_i v_i'), and β.
(b) Suppose that x_i and x*_i are both scalar random variables. How does the probability limit in (a) compare to β?
(c) Suppose we additionally observe another measure of x*_i, w_i = x*_i + η_i, where E(η_i) = 0, η_i ⊥ x*_i, η_i ⊥ ε_i, and η_i ⊥ v_i. Can we use w_i as a valid instrumental variable for x_i in (1)? Explain.

3. Consider the dataset Card1995. We will follow Card (1995) to see the relationship between education attainment and wage. Some of the variables of interest are described in Figure 1 (Variable Description).
(a) We will first clean the data set. Generate the following variables:
• exp = age76 - ed76 - 6
• exp2 = exp^2 / 100
and drop observations with missing wage (lwage76).
(b) Regress lwage76 on ed76, exp, exp2, black, reg76r. What is the interpretation of the coefficient on ed76? What is the potential endogeneity concern here?
(c) Consider the same wage equation as in (b).
Now run a 2SLS using nearc4a and nearc4b as instrumental variables for ed76. (Hint: help ivregress) What story makes college proximity a valid IV for education?
(d) Let's see whether nearc4a and nearc4b satisfy the conditions for a valid IV. Run the first-stage regression (i.e., regress ed76 on nearc4a and nearc4b along with the other variables). What is the F-statistic for testing whether nearc4a and nearc4b are jointly statistically significant in this first-stage regression? Do you think these IVs are strong? (Hint: help test)
(e) Now let's see whether the IVs are exogenous. Do an overidentification test. Do you think the IVs are exogenous? (Hint: run estat overid right after your 2SLS regression.)
(f) Let's do the Durbin-Wu-Hausman test to detect endogeneity. We'll do this step by step.
i. Rerun the regression in (a) and store the estimates. For example, you can use the command estimates store ls to store the estimation results under the name ls.
ii. Rerun the 2SLS in (b) and store the estimates under the name tsls.
iii. Conduct the Hausman test using the command hausman tsls ls. What do you find? Discuss.
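To build intuition for parts 1(a)-(b), the scalar just-identified case can be simulated in a few lines of Python. The data-generating process and all numbers below are made up purely for illustration; they are not part of the problem set.

```python
import random

rng = random.Random(1)
n, beta = 20000, 2.0

# x is endogenous: it shares the error component e with y.
# z is a valid instrument: it moves x but is independent of e.
z = [rng.gauss(0, 1) for _ in range(n)]
e = [rng.gauss(0, 1) for _ in range(n)]
u = [rng.gauss(0, 1) for _ in range(n)]
x = [zi + ei + ui for zi, ei, ui in zip(z, e, u)]
y = [beta * xi + ei for xi, ei in zip(x, e)]

# OLS is inconsistent here: plim = beta + cov(x,e)/var(x) = beta + 1/3.
beta_ols = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
# The just-identified IV estimator sum(z*y)/sum(z*x) is consistent for beta.
beta_iv = sum(zi * yi for zi, yi in zip(z, y)) / sum(zi * xi for zi, xi in zip(z, x))
```

With l = k = 1 the 2SLS formula collapses algebraically to exactly this ratio, which is the content of part (a).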
Introduction to the Mathematics of Finance. Take-Home Midterm. Due March 14, 2025, 11:59 p.m.

Please write a pledge that you did not copy solutions from the work of other students. You can consult TAs if you have any difficulties. This midterm uses the content of the lectures from the last two weeks of February. It also uses base Matlab models that can be downloaded from Courseworks.

1. Matlab option model. Download from Courseworks the Matlab option model files BlackScholesStocks.m and BlackScholesGraph.m and put them in the same directory. BlackScholesStocks.m contains the function that calculates the Black-Scholes price for options on non-dividend-paying stocks. BlackScholesGraph.m is a script that makes a graph of the option price as a function of the stock price. Type at the Matlab prompt
>> BlackScholesGraph
and the script will be executed and the graph will appear. Now modify the file BlackScholesStocks.m so that the function calculates the price of options on stocks paying continuous dividends at a rate q. Modify the file BlackScholesGraph.m so that it now plots the graph of a call with the same parameters as before but with dividend yield q = 2% and a new strike price of 11. Submit printouts of the code and graph.

2. Download the Matlab Brownian motion model from Courseworks. Modify it to geometric Brownian motion with starting value X0 = 100, growth rate μ = 0.14, volatility σ = 0.28, and 5000 trajectories. Check that the code works. Try out 50,000 trajectories. Try out 100,000 trajectories. Submit the code printout and the graph printout for 5000 trajectories.

3. Using arbitrage arguments, explain why the price of an American call option on a stock paying no dividends should be the same as the price of the corresponding European call, and why American calls on a non-dividend-paying stock should not be exercised early.

4. Explain why, when the stock pays dividends, the argument of problem 3 cannot be used.
Give a numerical example (choosing x, k, r, T − t, σ) in which it is obvious (without any formulas) that the American put price on a non-dividend-paying stock is larger than the corresponding European put price.

5. (a) The stock price is 40 and the volatility of the stock is 20%. Assuming that the time to expiration is 3 months and the interest rate is 1% per annum, calculate the price P of the European call option with strike 41.
(b) Calculate Δ, Γ, ρ, and Vega using the formulas for these parameters. Calculate the same parameters approximately using the options calculator.
(c) Check that the following relationship holds:

6. What are the parameters affecting the prices of European and American calls and puts? How do the prices change when one of the parameters changes with all the others remaining the same?

7. Suppose that we have three European calls with strikes 60, 65, and 70 and the same maturity of 1 month. Their prices are 9.00, 7.00, and 4.00. Is it possible to do an arbitrage?

8. Suppose that the current stock price is $50, its annualized volatility is 30%, and its annualized return is 10%, i.e., we assume that the stock price follows dXt = 0.1 Xt dt + 0.3 Xt dWt. Write the probability density function for the stock price in 1 year. What are the mean and standard deviation of the terminal stock price? (Standard deviation of the price, not of the return.)
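Outside Matlab, the dividend-yield modification in problem 1 can be sketched as a sanity check. The function and parameter names below are my own choices, and the numeric inputs are placeholders matching the spirit of the problem (strike 11, q = 2%); the coursework itself requires the Matlab files.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma, q=0.0):
    """Black-Scholes call price with continuous dividend yield q."""
    d1 = (math.log(S / K) + (r - q + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * math.exp(-q * T) * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# A positive dividend yield lowers the forward price and hence the call price.
c0 = bs_call(10.0, 11.0, 0.25, 0.01, 0.20)          # q = 0
cq = bs_call(10.0, 11.0, 0.25, 0.01, 0.20, q=0.02)  # q = 2%
```

The same q enters the Matlab version identically: S is replaced by S*exp(-q*T) in the stock leg and inside d1.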
Department of Electrical and Electronic Engineering
EIE2105 Digital and Computer Systems
Tutorial 2: Combinational Logic I

Q1. Fill in the following truth table:

A B C | F = A xor (B and C)
0 0 0 |
0 0 1 |
0 1 0 |
0 1 1 |
1 0 0 |
1 0 1 |
1 1 0 |
1 1 1 |

Q2. It is said that the Boolean functions of the output variables of the circuit shown in Fig. Q2 are given as
Prove that the following equation is true:

Q3. Derive the output functions of the following circuit.
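Q1 is meant to be filled in by hand, but a short Python loop can generate the same table for checking your answers:

```python
# Enumerate all input combinations and evaluate F = A xor (B and C).
rows = []
for A in (0, 1):
    for B in (0, 1):
        for C in (0, 1):
            F = A ^ (B & C)   # ^ is XOR, & is AND on 0/1 ints
            rows.append((A, B, C, F))
            print(A, B, C, '->', F)
```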
Exploring the concept of literacy as it relates to real-world educational settings Introductory comments: The aim of this document is to provide you with some guidance on how to write the summative essay for the LLC module. However, please consider the following as guidance, and not as a prescriptive template. You are able and allowed to work through the requirements differently. There are three elements you need to connect in the essay: 1. A concept from the module that relates to literacy: The concepts from the module are listed on the Moodle space, but you will find you have a broad choice: Literacy and literacy practices, Role of literacy in forming identity, Relationship between language and literacy, Literacy and learning, The link between literacy and citizenship, Literacy and politics of language, Literacy and power of language. 2. Real-world educational setting: The definition of an educational setting is equally broad – a place and space where people learn. We may think of locations like schools or museums, but we can also think of learning at home in families, where babies learn to walk and talk, or learning online. The educational setting you choose will therefore link to your understanding of what constitutes learning and how we learn. 3. Educational resource: The educational resource can be anything such as a textbook, a website, a worksheet, a social media platform, or an app. The key will be to explain how your chosen resource is an educational resource that relates to literacy and how it is used in your chosen educational setting. The best starting point is probably to identify what aspect of literacy is of interest to you and which educational resource you can draw on, and then consider what educational setting this resource can be used in. Sample essay outline Here is an example of how you can structure your essay. Please remember this is not prescriptive! 
You may want to swap sections around, or you may have your own very personal way of presenting the content. But if you are unsure, you can use this outline to help you. Introduction Approximately 150 words In this section you should introduce your topic. The easiest way to do that is to highlight your personal experiences and refer to your personal interests. Consider what made you choose your topic, for example. Definition of the Literacy concept Approximately 350 words In this section you should demonstrate your knowledge about literacy as a key concept and what aspect of literacy and relevant concepts you would like to explore. If you have chosen more than one concept, then make sure that your definition section gives similar weight to each of them. Also, this is a section where you will need relevant literature to explain, explore and define your concept(s). Ideally, you will put forward different views from different authors, and then highlight how you personally define and interpret the concept(s). Educational context and educational resource Approximately 350 words This part of the essay deals with your chosen educational context. What kind of educational setting are you referring to in your essay? What do you define as your educational resource? Why did you choose it? In this section, you may need to refer to some literature relating to the educational resource. You may want to include a photograph or screenshot here to explain your resource. Applying the concept Approximately 500 words This is an analytical section where you combine the previous two sections. Your task here is to show how your chosen concept(s) connect to the concept of literacy and how the educational resource can illustrate this. You can do that by showing how the theories/concepts have shaped the educational resource and how its use links literacy to the concepts. Again, you may want to include photographs to exemplify and explain the link you are making. 
Conclusion Approximately 150 words In this section you should provide a conclusion to your essay. Write about what you were trying to achieve with this essay. For example: what did you show, which concepts did you connect to literacy, which educational resources and educational context did you choose to link to your chosen concept – and why. References Not included in the word count. If you are using sources in a language other than English, you are required to translate the source into English for the bibliography. For example: If you have used this source: König, K., & Oloff, F. (2018). Die Multimodalität alltagspraktischen Erzählens. Zeitschrift für Literaturwissenschaft und Linguistik, 48(2), 277-307. You should reference it as: König, K., & Oloff, F. (2018). Die Multimodalität alltagspraktischen Erzählens. [The multimodality of the everyday narrative]. Zeitschrift für Literaturwissenschaft und Linguistik, 48(2), 277-307. (read in German).
Advanced Concrete Performance ENG5224 2024/2025 Coursework 1
Assigned: Monday 3rd February 2025
Due: Thursday 3rd March 2025, 13:00. Submission via Moodle Page
Text: Lecture notes, ATENA Tutorial, Exercises
Objectives: Understanding of the numerical analysis of the nonlinear behaviour of reinforced concrete structures
Weight: 25% of overall grade of Advanced Concrete Performance ENG5224

Adopting a two-dimensional plane stress idealisation, use ATENA2D to analyse the short-term behaviour of the reinforced concrete beam. See page 7 for the geometry and properties of the beam. This beam is similar to the one used in the ATENA2D software tutorial. However, the dimensions of the beam depend on your student number. Investigate one failure mode. Which failure mode you investigate depends on the last digit of your student number: if it is odd, you must investigate failure mode 1; if it is even, you must investigate failure mode 2. For instance, if your student number is 7654321, the last digit is 1, which is odd, which means that with this student number you must use failure mode 1. For your failure mode, you should then investigate one set of parameters. You have to adjust the reinforcement area As to obtain your failure mode. If required, you can add shear reinforcement in the form of vertical stirrups to the beam as well.

Failure modes:
Failure mode 1: Bending failure of an under-reinforced beam
Failure mode 2: Bending failure of an over-reinforced beam
If you are not sure how these failure modes are defined, have a look at the information provided at the beginning of the lectures.

Sets of parameters:
1) • Compressive strength of concrete
• Shear retention (part of the fixed crack model; see the theory manual of ATENA2D)
• Solution scheme (displacement, load and arc-length control; check the manual for input parameters)
2) • Fracture energy of concrete
• Bond-slip of reinforcement (perfect bond, bond slip).
Hint: Make sure that symmetry conditions are enforced for the reinforcement if slip is activated.
• Mesh size
3) • Size of the load steps
• Tensile strength of concrete
• Compressive strength of concrete

For these investigations, first choose a base configuration of input parameters. It is suggested that you start with the tutorial input parameters and change, if required, the reinforcement amount and arrangement (you might need to add reinforcement) so that you obtain the failure mode that you want to investigate. Then define this as your base configuration. Next, change only one parameter at a time, so that you can be sure that the change in structural response you obtain is due to the parameter that you would like to investigate. This is easier said than done. As an example, imagine that you would like to change the compressive strength. In the tutorial, the cube compressive strength is used to calculate all concrete material parameters, including fracture energy, Young's modulus and tensile strength. By changing this cube compressive strength value, all these parameters would change. This would not be desirable, since it would not be clear whether the observed changes are due to the change in compressive strength or to one of the other parameters. Therefore, it would be better to change only the compressive strength and leave the other parameters unchanged. Also, it is suggested that you vary the parameters strongly so that you can see a clear influence on the load-displacement curves. For instance, an increase of the fracture energy by 5% compared to the base value will most likely have only a very slight influence on the load-displacement curve. In this case, it would be better to use a 50% increase. Obtain the response for at least three different values of a parameter, so that you can observe a trend. Study the influence of the parameters on the global response of your structure in the form of load-displacement curves.
Explain the trends that you observe by looking at local results, such as contour plots of stresses or strains, deformed shapes, crack patterns, etc. Not all of these local results might be meaningful for your failure mode; it very much depends on your parameters and failure mode which of the local results are most suitable to look at. Remember that for reinforced concrete structures at the ultimate limit state, the important parameters are maximum load, displacement at maximum load, and ductility. Compare your finite element results for the base configuration only for your failure mode with the capacity calculated for your failure mode according to a Concrete Design Code of your choice (hand calculations). The results of the finite element analysis and the hand calculation might differ a lot. This is not a problem. However, if possible, understand why this is the case. Is it a shortcoming of the method used in the design code or a limitation of the finite element program?

Report
Each student has to produce one report (one report per student). The report must be submitted online (Moodle) as a pdf file. This report should summarise the analyses and include observations and conclusions regarding the influence of the above parameters. Important: the maximum length of the entire report is 10 pages (including all appendices), Times New Roman, 12pt, with at least 2cm margins. The report should contain the following minimum contents:
1) Cover sheet with your name and matriculation number (not part of the 10-page limit).
2) A short description of the models and idealisations used. Make sure that you provide enough information so that someone else could reproduce your results. The results of the base configuration should be compared to the hand calculation. Description of the analyses carried out. Presentation of the influence of the parameters on the load-displacement curves.
Comparison of hand calculations with finite element results (only for the base configuration of the failure mode). Discussion of trends observed in the load-displacement curves and presentation of more detailed results, such as deformation patterns and contour plots (if relevant), to explain these trends. The discussion based on local results is very important.
3) Conclusions clearly stating the trends observed.
4) References (if required)

Marking
The marking will give equal weight to the presentation of the results and to the understanding shown. Concerning the presentation, aim for the following:
· Arguments are well presented
· Major findings are clear and easily accessible
· The writing is accurate
· Good English
· All references to figures and equations are correct
· Enough information is provided to allow others to reproduce your results
Concerning the understanding, provide sufficient discussion to show that you have understood the underlying models/theory which give you the presented results. Please note that standard penalties for late submission apply: reduction of two subgrades (e.g. B1 to B3) for each working day late, up to a maximum of 5 days; zero mark if more than 5 days late. It is essential that you still submit any late coursework, even if you are going to get zero marks for it (otherwise you cannot obtain any credits for this course).
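As a hedged illustration of the kind of design-code hand calculation requested for the base configuration, here is a sketch of the simplified rectangular-stress-block bending capacity of a singly reinforced section. All material values, dimensions, and block factors below are hypothetical placeholders, not your coursework beam; use the actual provisions and partial factors of your chosen design code.

```python
# Hypothetical section data (NOT the coursework beam).
fc = 30.0            # concrete compressive strength [MPa]
fy = 500.0           # reinforcement yield strength [MPa]
b, d = 300.0, 450.0  # section width and effective depth [mm]
As = 800.0           # tension reinforcement area [mm^2]

# Horizontal equilibrium of the rectangular stress block: 0.85*fc*b*a = As*fy.
a = As * fy / (0.85 * fc * b)        # stress-block depth [mm]
x = a / 0.8                          # neutral-axis depth [mm] (assuming a = 0.8x)
M_rd = As * fy * (d - a / 2) / 1e6   # bending capacity [kNm]

# Crude ductility check: a small x/d ratio indicates an under-reinforced
# section (steel yields before the concrete crushes).
under_reinforced = (x / d) < 0.45
```

A comparison like this gives a single capacity number; the finite element analysis additionally gives the full load-displacement response, which is why the two can legitimately differ.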
MATH6011 FORECASTING ASSIGNMENT 2025 Your coursework must be submitted electronically via Blackboard by 3pm on Friday March 21st. Any work handed in after this time will be subject to the following penalties: 10% of your marks lost per working day up to 5 working days. Do not write your name anywhere on your work, as marking will be anonymous. Your student ID should be included in the filenames but not your name; see further instructions on file naming in Section 3 below. An extension, for bona fide reasons, may be allowed by prior agreement, but only well before the deadline; you can contact the Student Office if you would like to apply for an extension. Computer crashes or file losses a day or two before the deadline will not be an acceptable reason for an extension. It is therefore advisable to keep back-up copies of your work. Components of the project will receive different weightings in producing your final mark: 50 marks for the exponential smoothing part, 20 for ARIMA, 20 for regression, and 10 for the overall organization of your submitted material, including the description of your codes/files. 1. Background and analysis In light of the United Nations Climate Change Conference (also known as COP29) that took place in Baku, Azerbaijan, 11–22 November 2024, the UK government through its new Global Clean Power Alliance (GCPA) initiative has employed you as a consultant. Your task is to forecast the behaviour of a number of key environmental indicators until December 2025, to help support the decision process for new policies to support the country’s efforts to reduce the impact of climate change. The data is provided by a number of public organizations, including the UK Meteorological Office and the Office for National Statistics. 1.1. How to get the data. From the four weblinks given below, download the data sets and save them in xlsx, xls, or csv format. 
The resulting files might have multiple columns or sheets; follow the corresponding instructions to access the data necessary for your analysis. Copy the data sets from the required columns as described below, i.e., MSTA, CH4, GMAF, and ET12, scrolling down, where necessary, to find the monthly observations.
(A) Global Mean Surface Temperature Anomaly (MSTA) in °C: https://www.metoffice.gov.uk/hadobs/hadcrut5/data/HadCRUT.5.0.2.0/download.html
Download the monthly data in the (Global (NH+SH)/2, CSV) cell of the Summary series table under the HadCRUT.5.0.2.0 analysis section of the webpage. The MSTA data is located in the Anomaly column of the csv file. (Source: The Meteorological Office (Met Office), which is the UK's national weather service.)
(B) Global Monthly Atmospheric Methane Levels (CH4): https://climate.metoffice.cloud/greenhouse gases.html
Scroll down towards the end of the webpage (Get the data section), and download the csv file under CH4 > NOAA CH4. The CH4 data to use is available in column C of the csv file. (Source: Met Office's Climate Dashboard.)
(C) International Passenger Survey, UK visits abroad (GMAF): https://www.ons.gov.uk/peoplepopulationandcommunity/leisureandtourism/datasets/internationalpassengersurveytimeseriesspreadsheet
GMAF: select the xlsx or csv file; then see the data to be used in the GMAF column (or C) of the spreadsheet, scrolling down to the monthly observations. (Source: UK Office for National Statistics.)
(D) UK Inland Monthly Energy Consumption (ET12), in million tonnes of oil equivalent. The data can be downloaded by clicking on the expression "Inland energy consumption: primary fuel input basis (ET 1.2 - monthly)" available on this webpage: https://www.gov.uk/government/statistics/total-energy-section-1-energy-trends
ET12: use the data in the Unadjusted total column (or B) of the Month worksheet. (Source: Department for Energy Security and Net Zero.)
1.2. Tasks.
As so often happens in the real world, the data sets are of different lengths. You will have to use your own judgment in inspecting and preparing the data before carrying out any technical analysis. The analysis is in three parts:
(a) You are asked to take all four series separately and to forecast monthly behaviour until December 2025, using exponential smoothing-type forecasting methods.
(b) The GCPA team have been satisfied in the past with exponential smoothing-type forecasting methods and are happy to see these techniques used in the analysis. However, they are interested in the possible use of the ARIMA methodology to predict MSTA. You are asked to fit an ARIMA model to MSTA, for an analysis in which you compare the use of ARIMA forecasting and a suitable exponential smoothing method. You should make a recommendation as to future use of ARIMA on this time series.
(c) The GCPA team is interested to know whether global temperatures (that is, series MSTA) are affected by methane levels, international air travel, and the consumption of fuels (as exemplified by the time series CH4, GMAF, and ET12, respectively). Develop a multiple regression model, use it for the prediction of MSTA until December 2025, and report on whether you think the model is satisfactory or not.

2. What you must produce
You must produce a technical report describing all the analysis done to select the most suitable forecasting method, as well as the results obtained. The report must be accompanied by the codes used to perform the technical analysis, as well as key resulting graphs. More details on each of the aspects of the work are given in the next subsections.
2.1. The technical report. The technical report must follow the structure described in Subsection 2.3. It should address the three parts of the analysis: exponential smoothing, ARIMA, and regression. For each part, give details of the preliminary analysis and data preparation.
Also describe why each model was chosen/built and explain the analysis carried out, including an evaluation of the effectiveness of the models.
2.2. Python codes and the appendix. You must also prepare and submit the Python codes that you use to generate the results included in your technical report. If any preliminary operations on your data are needed before applying/developing Python code for your analysis, it is fine to carry these out in the corresponding Excel file containing your data sets. However, you must complete all the main tasks of your analysis using Python. You can use the codes from the course, different ones, or develop your own. Marking on this aspect of your work will not be based on how well you can program in Python, but rather on the functionality of your codes and their relevance to the corresponding analysis. To help us easily know what you do in each code, you must add a single page as an Appendix to your technical report, giving a brief one- or two-sentence description of what each code does. If you do any preliminary operations on your data in the Excel file containing your data set, a line or two should also be included to describe this.
2.3. Organizing your technical report. The report must be organized as follows, with a maximum length of 5 pages, including up to 4 pages for the core text (Sections 1-3) with all the relevant key illustrative graphs and up to 1 page for the Appendix. The font size and overall style must be chosen in such a way that the report is easy to read.
1. Exponential smoothing (maximal length: 2 pages; total marks: 50)
Marks to be attributed based on how well you articulate the following aspects:
(a) Describe your data preparation (and its effects) prior to the implementation of exponential smoothing methods.
(b) Describe the preliminary analysis undertaken (and conclusions drawn) prior to the implementation of exponential smoothing methods.
(c) Give details of how exponential smoothing models were selected for each time series, and how effective/accurate these methods are at forecasting.
(d) Quality and suitability of graphical illustrations for analysis and results.
(e) Clarity and quality of presentation.
(f) Functionality of Python codes.
2. ARIMA forecasting (maximal length: 1 page; total marks: 20)
Marks to be attributed based on how well you articulate the following aspects:
(a) Describe any data preparation prior to ARIMA, and its effects.
(b) Describe any preliminary analysis undertaken prior to ARIMA modelling, and the conclusions drawn.
(c) Give details of how the ARIMA model was selected, tested, and its effectiveness/accuracy evaluated.
(d) Compare ARIMA and exponential smoothing forecasting, both in general terms and in the particular instance of MSTA.
(e) Quality and suitability of graphical illustrations for analysis and results.
(f) Clarity and quality of presentation.
(g) Functionality of Python codes.
3. Regression prediction (maximal length: 1 page; total marks: 20)
Marks to be attributed based on how well you articulate the following aspects:
(a) Describe any data preparation prior to implementing the regression.
(b) Describe any preliminary analysis undertaken prior to regression modelling and the conclusions drawn.
(c) Give details of how a regression model has been selected and comment on its suitability for forecasting the variable MSTA.
(d) Quality and suitability of graphical illustrations for analysis and results.
(e) Clarity and quality of presentation.
(f) Functionality of Python codes.
Appendix: Code descriptions, etc. (maximal length: 1 page; full marks: 10)
Marks here will be attributed based on the overall organization of the material that you submit, and on how clear, informative, and concise your description is of what each of your Python codes (or the Excel file, in case any preliminary operations are carried out there) does.
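As a minimal illustration of the exponential smoothing part, simple exponential smoothing can be written in a few lines of pure Python. This is a sketch only; in the coursework you would typically use a library implementation and also consider trend and seasonal variants where the series warrant them.

```python
def ses_forecasts(series, alpha):
    """One-step-ahead simple exponential smoothing forecasts.

    forecast[t+1] = alpha * y[t] + (1 - alpha) * forecast[t],
    initialised with the first observation. The last element of the
    returned list is the forecast for the next (unobserved) period.
    """
    forecast = series[0]
    forecasts = [forecast]
    for y in series[1:]:
        forecast = alpha * y + (1 - alpha) * forecast
        forecasts.append(forecast)
    return forecasts
```

Comparing such fitted one-step-ahead forecasts against the actual observations (e.g. via mean squared error) is one way to support the model-selection discussion asked for in item 1(c).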
In summary, the following guidelines must be followed while producing the technical report:
• The technical report must be organized as described above, with a maximum of 5 pages in total (4 pages maximum for Sections 1-3, and up to 1 page for the Appendix).
• No theory of forecasting is required, nor any repeat of the material from lectures, unless you have used models or concepts not included in the notes.
• Formal English should be used, avoiding contractions (such as "doesn't"), slang, and casual vocabulary.
• In Sections 1, 2, and 3, references to codes developed/used for specific tasks can be made by using the corresponding code's name. No other details or description of the Python modelling are needed in those sections.
• At most two sentences are needed in the Appendix to explain what each Python code (or Excel file, if necessary) does.
• Feel free to include subsections in Sections 1, 2, 3, and the Appendix, if they seem necessary to help make some parts of the report clearer.
• No introduction, table of contents, or conclusions should be written for the report.
3. Submission
All submissions should be made under the corresponding assignment tab in Blackboard. Submit one zipped folder (.zip), not an archived file (.rar), without internal folders, which contains a pdf copy of the technical report and one spreadsheet with the data sets used for the analysis. You should also include an adequate number of files with your Python codes. Remember not to put your name anywhere on your work, as marking is anonymous. Include your student ID in your technical report and use the following naming pattern for all the files to be submitted via Blackboard (under the Assignments tab):
• One pdf file with the technical report: TechnicalReport StudentID.pdf
• One single data file: Data StudentID.xls (with a separate sheet for each data set; namely, MSTA, CH4, GMAF, and ET12).
• Python codes: each file name should have three components, the first related to the corresponding methodology, the second to the specific task, and the third being the student's ID. For example, if you produce/use a code to illustrate something related to the exponential smoothing, ARIMA, or regression methods, you could respectively apply the following naming patterns to your files:
– ExponentialSmoothing CH4TimePlot StudentID.py
– ARIMA ACFPlot StudentID.py
– Regression Correlation StudentID.py
The middle terms CH4TimePlot, ACFPlot, and Correlation relate to specific tasks that could be carried out under the corresponding parts. The expression used for the middle term should not exceed fifteen characters.
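For the regression part, the mechanics of a least-squares fit can likewise be sketched in pure Python. This is a single-predictor toy version with made-up data; the coursework model will be a multiple regression of MSTA on all three predictors, built with your chosen Python library.

```python
def ols_simple(x, y):
    """Ordinary least squares for y = intercept + slope * x."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    # Slope = sample covariance of (x, y) divided by sample variance of x.
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Toy data lying exactly on y = 1 + 2x.
intercept, slope = ols_simple([1, 2, 3, 4], [3, 5, 7, 9])
```

Examining residuals from a fit like this (and its multi-predictor analogue) supports the "is the model satisfactory?" discussion asked for in part (c) of the tasks.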
Algorithms and Data Structures I
Project 2: LinkedDS

Background
This project is designed to increase your experience with linked data structures. Similar to Project 1, you will work with control structures, class-building, interfaces, and generics to create a new linked data structure called a LinkedDS. The LinkedDS will implement the SequenceInterface, so before doing anything else, take a look at the method comments in SequenceInterface.java. After you finish reviewing SequenceInterface.java, take a look at the client code in Project2.java. Just like Project 1, you cannot modify any of the code in SequenceInterface.java or Project2.java. As with most complicated programming assignments, there are multiple ways to implement the SequenceInterface methods, some more efficient than others. The comments in SequenceInterface.java outline the running time (efficiency) requirements for your implementations. I 100% recommend pencil & paper work and drawing pictures to familiarize yourself with the algorithm you'd like to implement before you write any code.

Note: The primary data structure in your LinkedDS must be a one-dimensional array of linked lists. You may not use any predefined Java collection class (e.g. ArrayList) for your LinkedDS data fields. You may not declare any one-dimensional arrays except for the alphabet and for the return value of any methods that return an array (declaring a one-dimensional array and then resizing it before returning is OK).
LinkedDS
Your LinkedDS class header must be:

public class LinkedDS<T> implements SequenceInterface<T> {

You must use the following instance variables inside the LinkedDS class:

private Node[] array;    // 1-D array of linked lists
private int size;        // the number of items in the sequence
private T[] alphabet;    // the possible item values (e.g., the decimal digits)
private T firstItem;     // the first item
private T lastItem;      // the last item

The Node private inner class is already defined in LinkedDS.java:

private static class Node {
    private int item; // index in alphabet of item
    private Node next;

    private Node(int item) {
        this.item = item;
        next = null;
    }
}

Besides the methods in SequenceInterface, the following constructor is required:

public LinkedDS(T[] alphabet)

Here's an example of how the LinkedDS should work using a 1-D array of linked lists. The alphabet for decimal digits is the same as it was in the example for Project 1: {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}. Remember, your LinkedDS must work on any generic type, not just decimal integers. Just like Project 1, let's use the sequence 9875732732 as an example. This diagram shows how it would look in a LinkedDS: Each of the ten digits of the alphabet is represented by an entry in an array of Node objects. Each entry in the array is a Node that serves as the head of a linked list containing the successors of that item in the sequence, stored by their position in the sequence in ascending order. In the example, the first two lists are empty because 0 and 1 are not in the example sequence (9875732732). The Node at index 2 (which represents the decimal digit 2) has only one other node in its chain: a 7, meaning that the item at index 7 of the alphabet is what follows 2 in the sequence. There's only one item in 2's linked list because 2 is only succeeded by another number once in the sequence.
The Node at index 7 (which represents the decimal digit 7) has three nodes because 7, in its three occurrences in 9875732732, is followed by 5, 3, and 3, in that order. Every pair of consecutive items in the sequence results in a node in the diagram.

The sequence 9875732732 has the following properties:

first() == 9
last() == 2
size() == 10
isEmpty() == false
getFrequencyOf(3) == 2
getFrequencyOf(2) == 2
getFrequencyOf(7, 3) == 2   // 73 appeared twice in the sequence
successors(7) == {5, 3}     // 5 and 3 are the unique items that immediately follow 7 in the sequence

If we start with the example sequence 9875732732 and then call push(7), 7 will be inserted at the beginning of the sequence, making it 79875732732; this diagram shows the whole model:

Note that a Node containing 9 was inserted into the beginning of 7's linked list. This is because 9 is now the first item that comes after a 7 in the sequence. The sequence 79875732732 has the following properties:

first() == 7
last() == 2
size() == 11
isEmpty() == false
getFrequencyOf(3) == 2
getFrequencyOf(2) == 2
getFrequencyOf(7, 3) == 2   // 73 appeared twice in the sequence
successors(7) == {9, 5, 3}  // 9, 5, and 3 are the unique items that immediately follow 7 in the sequence

After you have finished implementing LinkedDS, the Project2.java test file should compile and run correctly and give the same output as shown in P2Out.txt. Just like Project 1, I strongly recommend that you stub out the code in Project2.java and test your methods incrementally, instead of all at once at the end.

Starter Code
All starter code and example output is available for download from this folder: Project 2 Starter Code

Deliverables
You are responsible for submitting:
- LinkedDS.java

All other files will be automatically supplied to Gradescope. I would suggest avoiding making code changes to any of the other files (besides stubbing out Project2.java to test parts of your implementation at a time as you work).
If you use Eclipse or any other Java IDE to work on this project, remember to test on a command line before submitting. Sometimes editors add package lines to the top of .java files that will break the autograder.

Hints
- To create an array of a generic type T of size length, use: T[] result = (T[]) new Object[length];
- Adding @SuppressWarnings("unchecked") above that line will prevent your compiler from warning you about the unchecked cast.
- Unlike our linked implementations in class, the Nodes here always contain ints as data. This is because the ints represent the index of the actual T item in the alphabet array.
- See file P2Out.txt to see how your output should look. As noted, your output when running Project2.java should be identical to this.
- Draw pictures! Drawing a picture of a linked chain will be much more helpful when debugging and deciding on an efficient way to implement the different methods. If you have access to a whiteboard, use it! If not, pencil and paper or a tablet will also work fine.
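Before writing any Java, it can help to prototype the successor-list model on paper or in a scripting language. The Python sketch below (illustrative only; the function name is hypothetical, and the assignment itself must use the Node-based Java structure described above) builds, for each alphabet symbol, the ordered list of symbols that follow its occurrences in the sequence:

```python
# A quick prototype of the successor-list model: each alphabet symbol maps to
# the ordered list of symbols that follow its occurrences in the sequence --
# the role played by each array slot's linked list in the LinkedDS.

def build_successor_lists(sequence, alphabet):
    """Return {symbol: [successors, in order of appearance in the sequence]}."""
    lists = {a: [] for a in alphabet}
    for prev, nxt in zip(sequence, sequence[1:]):
        lists[prev].append(nxt)
    return lists

seq = [9, 8, 7, 5, 7, 3, 2, 7, 3, 2]          # the example sequence 9875732732
lists = build_successor_lists(seq, range(10))

print(lists[2])                                # → [7]
print(lists[7])                                # → [5, 3, 3]
print(list(dict.fromkeys(lists[7])))           # unique successors of 7 → [5, 3]
```

This reproduces the example above: 2's chain holds a single 7, and the unique successors of 7 are {5, 3}.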
MA585 Homework 5

1. The sample mean X̄ = 0.157 was computed from a sample of size 100 generated from an MA(1) process with mean μ and θ = -0.6, σ² = 1. Construct an approximate 95% CI for μ. Are the data compatible with the hypothesis that μ = 0?

2. Suppose you have a sample of size 100 and obtained ρ̂(1) = 0.432 and ρ̂(2) = 0.145. Assuming that the data were generated from an MA(1) process, construct 95% CIs for ρ(1) and ρ(2). Based on these two confidence intervals, are the data consistent with an MA(1) model with θ = 0.6?

3. For each one of the following ARMA processes, choose parameters such that the process is causal and invertible. In each case, use the arima.sim function in R to generate a sample realization of size 100. Generate a time series plot of the simulated series, and in each case plot both the population and sample ACF and PACF.
(i) AR(2)  (ii) ARMA(1,1)  (iii) MA(1)  (iv) ARMA(1,2)

R Note: the acf and pacf functions in R can be used to obtain the sample ACF and PACF. The following code was used to generate the plots for the AR(2) example in lecture note 4.

x = arima.sim(n=100, list(ar=c(1.5,-0.75)))
plot.ts(x)
title(main="Simulated Data from the AR(2) Process X(t)-1.5X(t-1)+0.75X(t-2)=e(t)")
par(mfrow=c(2,2))
y = ARMAacf(ar=c(1.5,-0.75), lag.max=20)
y = y[2:21]
plot(y, x=1:20, type="h", ylim=c(-1,1), xlab="h", ylab="Autocorrelation", main="AR(2) Population ACF")
abline(h=0)
y = ARMAacf(ar=c(1.5,-0.75), lag.max=20, pacf=T)
plot(y, x=1:20, type="h", ylim=c(-1,1), xlab="h", ylab="Partial Autocorrelation", main="AR(2) Population PACF")
abline(h=0)
acf(x, main="Sample ACF", ylim=c(-1,1))
pacf(x, main="Sample PACF", ylim=c(-1,1))

For general ARMA processes, you modify the list of parameters as ar=c(..,..), ma=c(..,..). For example, to simulate a sample realization from an ARMA(2,1) process, use

> arima.sim(n=100, list(ar=c(1.5,-0.75), ma=c(0.3)))  # n=100 observations from X(t)-1.5X(t-1)+0.75X(t-2)=e(t)+0.3e(t-1)
Also, the following code was used to generate the MA(2) example. Note how the option ci.type="ma" affects the confidence bounds for the sample ACF.

par(mfrow=c(3,1))
y = arima.sim(100, model=list(ma=c(-1.5,0.75)))
plot.ts(y, main="Sample Realization from a MA(2) Process")
acf(y, xlim=c(1,20), ylim=c(-0.6,0.6), xaxp=c(0,20,10), main="Sample ACF", ci.type="ma")
pacf(y, xaxp=c(0,20,10), ylim=c(-0.6,0.6), main="Sample PACF")

4. The graphs below show the sample ACF and PACF of three time series of length 100 each. On the basis of the available information, choose an ARMA model for the data. You need to identify the order of the model and, if possible, provide approximate values of the ARMA parameters φ and θ. Justify your answer.
[Figures (a)–(d): sample ACF and PACF plots]

5. For a series of length 169, we find that r(1) = 0.41, r(2) = 0.32, r(3) = 0.26, r(4) = 0.21, and r(5) = 0.16. What ARMA model fits this pattern of autocorrelations? Justify your answer.

6. A stationary time series of length 121 produced sample partial autocorrelations of φ̂₁₁ = 0.8, φ̂₂₂ = -0.6, φ̂₃₃ = 0.08, and φ̂₄₄ = 0.00. Based on this information alone, what model would we tentatively specify for the series?

7. Consider the annual sunspots data in R given in the data file sunspot.year (yearly numbers of sunspots from 1700 to 1988). If you don't know what sunspots are, check on the internet.
(i) Plot the time series and describe the features of the data.
(ii) Generate a new time series by transforming the data as newsunspot = sqrt(sunspot.year). Plot the new time series. Why is the square-root transformation necessary?
(iii) Plot the ACF and PACF of the transformed data. Based on these plots, propose a plausible model and justify your answer.
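The identification problems above all hinge on reading sample ACF and PACF values. As a mechanics check (a sketch only, in Python with numpy since R's acf/pacf already do this; the Durbin–Levinson recursion here is the standard one for the sample PACF), one can compute both quantities directly:

```python
import numpy as np

# Sample ACF, and sample PACF via the Durbin-Levinson recursion.
# This mirrors what R's acf() and pacf() report, up to small implementation details.

def sample_acf(x, max_lag):
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    c0 = np.dot(x, x) / len(x)                       # sample gamma(0)
    return np.array([np.dot(x[:-k], x[k:]) / len(x) / c0
                     for k in range(1, max_lag + 1)])

def sample_pacf(x, max_lag):
    r = sample_acf(x, max_lag)                       # r[k-1] = rho_hat(k)
    phi = np.zeros(max_lag + 1)
    prev = np.zeros(max_lag + 1)
    out = []
    for k in range(1, max_lag + 1):
        if k == 1:
            phi[1] = r[0]
        else:
            num = r[k - 1] - np.dot(prev[1:k], r[k - 2::-1])
            den = 1.0 - np.dot(prev[1:k], r[:k - 1])
            phi[k] = num / den
            for j in range(1, k):                    # update phi_{k,j}
                phi[j] = prev[j] - phi[k] * prev[k - j]
        prev = phi.copy()
        out.append(phi[k])
    return np.array(out)

# Simulated MA(1) with theta = 0.6: theoretical rho(1) = 0.6/1.36 ≈ 0.44,
# and rho(k) = 0 for k > 1 -- the signature pattern used in Problems 4-6.
rng = np.random.default_rng(0)
e = rng.standard_normal(500)
x = e[1:] + 0.6 * e[:-1]
print(np.round(sample_acf(x, 3), 2))     # lag 1 near 0.44, later lags near 0
print(np.round(sample_pacf(x, 3), 2))
```

The cut-off-versus-decay patterns these functions reveal are exactly what Problems 4, 5 and 6 ask you to interpret.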
21-259: Calculus in Three Dimensions, Lecture #6, Spring 2025
Arc Length and Curvature

Definition: The arc length L of the curve C defined by r(t) = ⟨f(t), g(t), h(t)⟩ for a ≤ t ≤ b is given by

L = ∫ₐᵇ |r′(t)| dt.

Example 1. Find the length of the arc of the circular helix r(t) = ⟨cos t, sin t, t⟩ from the point (1, 0, 0) to the point (1, 0, 2π).

Example 2. Find the length of the arc of r(t) = ⟨√2 t, eᵗ, e⁻ᵗ⟩ for 0 ≤ t ≤ 1.

A space curve C can be represented with multiple parametrizations. For example, two vector functions can represent the same space curve via the substitution t = eᵘ. It is often useful to parametrize with respect to arc length, as arc length arises naturally from the shape of the curve and is independent of the coordinate system. This can be accomplished by use of the arc length function for r(t) for a ≤ t ≤ b:

s(t) = ∫ₐᵗ |r′(u)| du.

Then we can usually solve for the parameter t as a function of s, t = t(s), and rewrite r(t) = r(t(s)). This way, r(t(1)) is the position vector of the point that is one unit of length along the curve from its starting point.

Example 3. Reparametrize the helix r(t) = ⟨cos t, sin t, t⟩ with respect to arc length measured from (1, 0, 0) in the direction of increasing t. Find the position vector of the point on the curve that is 1 unit of length from the initial point.

Example 4. Reparametrize the curve r(t) = ⟨t + 3, 2t − 4, 2t⟩ with respect to arc length measured from the point where t = 3 in the direction of increasing t.

Definition: The curvature κ of a curve C is

κ = |dT/ds|,

where T is the unit tangent vector. It is a measure of how quickly the curve changes direction: the magnitude of the rate of change of the unit tangent vector with respect to arc length.

Example 5. Find the curvature of a circle with radius a.

Example 6. Find the curvature of the vector function r(t) = ⟨t², sin t − t cos t, cos t + t sin t⟩, (t > 0).

Theorem: Another formula for computing the curvature is

κ(t) = |r′(t) × r″(t)| / |r′(t)|³.

Example 7. Find the curvature of the vector function r(t) = ⟨t, t, 1 + t²⟩.

Theorem: If y = f(x) is a plane curve (i.e.
a curve in R²), then the curvature as a function of x is

κ(x) = |f″(x)| / (1 + (f′(x))²)^(3/2).

Example 8. What is the curvature of f(x) = eˣ?

Example 9. What is the curvature of f(x) = sin 2x?

Definition: The principal unit normal vector N(t) is the unit vector in the direction of the derivative of the unit tangent vector:

N(t) = T′(t) / |T′(t)|.

The unit vector B(t) = T(t) × N(t) is the binormal vector. The plane determined by N and B at a point P on a curve C is called the normal plane of C at P. The plane determined by T and N is called the osculating plane of C at P. The circle that lies in the osculating plane of C at P, has the same tangent as C at P, and lies on the concave side of C (in the direction of N) is the osculating circle of C at P, and it has a radius of ρ = 1/κ.

Example 10. Find the equations of the normal plane and the osculating plane of the curve r(t) = ⟨t, t², t³⟩ at the point (1, 1, 1).
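Two of the formulas above can be sanity-checked numerically. The sketch below (Python, as a check only, not part of the lecture) evaluates the arc-length integral for the helix of Example 1 and the cross-product curvature formula for Example 7 at t = 0:

```python
import numpy as np

# Example 1: helix r(t) = <cos t, sin t, t> has |r'(t)| = sqrt(2), so the
# arc length from t = 0 to t = 2*pi should be 2*pi*sqrt(2) ≈ 8.8858.
ts = np.linspace(0.0, 2.0 * np.pi, 100001)
speed = np.sqrt(np.sin(ts) ** 2 + np.cos(ts) ** 2 + 1.0)
L = float(np.sum(0.5 * (speed[1:] + speed[:-1]) * np.diff(ts)))  # trapezoid rule
print(round(L, 4))   # → 8.8858

# Example 7: r(t) = <t, t, 1 + t^2>, so r' = <1, 1, 2t> and r'' = <0, 0, 2>.
def curvature(rp, rpp):
    """kappa = |r' x r''| / |r'|^3, evaluated from the derivative vectors."""
    return np.linalg.norm(np.cross(rp, rpp)) / np.linalg.norm(rp) ** 3

# At t = 0: |<2, -2, 0>| / |<1, 1, 0>|^3 = 2*sqrt(2) / (2*sqrt(2)) = 1.
print(round(curvature(np.array([1.0, 1.0, 0.0]),
                      np.array([0.0, 0.0, 2.0])), 6))   # → 1.0
```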
Department of Electrical and Electronic Engineering
Power Networks EEEE3002
Definition of Technical Specs for Lab #5

This week's simulation lab demonstrates the principle of load flow analysis for a power network. Until now, load flow has been applied to simple two-port systems. In this coursework an interconnected three-bus system is studied, where each bus type is considered. This demonstrates how the known and unknown variables are typically distributed around the network.

Using the MATLAB Simulink software, build the ideal 50 Hz network shown in Figure 5.1 with the personalised set of data in Tables 5.1 and 5.2 (base VA of 100 MVA and a base voltage of 138 kV), which will be sent to you by email after the lab. Then, using the load flow calculator, provide the unknown values indicated by question marks in Table 5.1. From the results, give the real and reactive power flows. Use the template provided for your answers.

Figure 5.1 Example problem
Table 5.1 Example system bus data (base VA of 100 MVA and a base voltage of 138 kV)
Table 5.2 Transmission line data for the three bus system (base VA of 100 MVA and a base voltage of 138 kV)
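When interpreting per-unit results from the load flow calculator, it helps to know the base quantities the stated bases imply. A small sketch (illustrative only; the lab itself is done in Simulink) using the standard three-phase relations Z_base = V_base²/S_base and I_base = S_base/(√3·V_base):

```python
import math

# Per-unit base quantities implied by the stated bases (100 MVA, 138 kV).
# Standard three-phase relations; useful only for converting per-unit
# load flow results back to ohms and amps.
S_base = 100e6                              # VA
V_base = 138e3                              # V, line-to-line
Z_base = V_base ** 2 / S_base               # ohms
I_base = S_base / (math.sqrt(3) * V_base)   # amps

print(round(Z_base, 2))   # → 190.44
print(round(I_base, 1))   # → 418.4
```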
Computer Graphics

1. A CCD camera chip of dimensions 10 × 10 mm, having 2048 × 2048 elements, is focused on a square, flat area located 0.5 m away. How many line pairs per mm will this camera be able to resolve? The camera is equipped with a 35-mm lens. (Hint: Model the imaging process with the focal length of the camera lens substituting for the focal length of the eye.) Draw a diagram to show how you model this question and write down the detailed steps of solving it.

2. The following shows a perspective projection where the eye is at the origin, the viewing direction is the opposite of the z-axis, and the projection plane is z = -1.
(1) A point at (x, y, z) in the viewing frustum is projected onto (x′, y′, z′). Give the formulas for x′, y′, z′.
x′ =
y′ =
z′ =
(2) Give the 4×4 matrix that represents the projection.

3. Given the 2×2 image below, enlarge it into a 5×5 one and fill in the new image with pixel values using 1) Nearest Neighbor Interpolation and 2) Bilinear Interpolation. [Detailed steps must be given.]
7 5
3 1
1)
2)

4. Given the 9×9 image below, reduce it into a 4×4 one and fill in the new image with pixel values using Fractional Linear Reduction. [Detailed steps must be given.]
1 1 1 1 1 1 1 1 1
2 2 2 2 2 2 2 2 2
3 3 3 3 3 3 3 3 3
6 6 6 6 6 6 6 6 6
5 5 5 5 5 5 5 5 5
4 4 4 4 4 4 4 4 4
7 7 7 7 7 7 7 7 7
8 8 8 8 8 8 8 8 8
9 9 9 9 9 9 9 9 9

5. The following sub-problems about affine transformation are based on the following assumptions:
a. The output image employs the same coordinate system as the input image.
b. The dimensions of the output image are identical to those of the input image (pixels that fall outside the image boundaries are not displayed).
Please write the expression for (x′, y′), the coordinates of (x, y) after transformation, based on the given conditions; add the necessary auxiliary lines and labels to the provided image, and give a detailed derivation process.
Rotation 1.2 Translation Shear (horizontal) Shear (vertical)

6.
Given a 3 × 3 image, use bilinear interpolation to enlarge it to a 4 × 6 image. [Calculate by hand. Detailed steps must be given.]
142  87 213
 56 199  34
178  22 245
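Hand calculations for the interpolation problems above can be machine-checked. The sketch below (Python; note that this assumes the "align corners" mapping x_src = x_dst·(W_in − 1)/(W_out − 1), and interpolation conventions vary, so if your course uses a different mapping the boundary values will differ):

```python
import numpy as np

def enlarge(img, out_h, out_w, mode="bilinear"):
    """Enlarge a 2-D image using nearest-neighbor or bilinear interpolation,
    with the 'align corners' source-coordinate mapping."""
    in_h, in_w = img.shape
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            y = i * (in_h - 1) / (out_h - 1)   # source row coordinate
            x = j * (in_w - 1) / (out_w - 1)   # source column coordinate
            if mode == "nearest":
                out[i, j] = img[round(y), round(x)]
            else:                               # bilinear: blend 4 neighbors
                y0, x0 = int(y), int(x)
                y1, x1 = min(y0 + 1, in_h - 1), min(x0 + 1, in_w - 1)
                dy, dx = y - y0, x - x0
                out[i, j] = (img[y0, x0] * (1 - dy) * (1 - dx)
                             + img[y0, x1] * (1 - dy) * dx
                             + img[y1, x0] * dy * (1 - dx)
                             + img[y1, x1] * dy * dx)
    return out

img = np.array([[7.0, 5.0], [3.0, 1.0]])   # the 2x2 image from Problem 3
print(enlarge(img, 5, 5, "bilinear"))       # corners stay 7, 5, 3, 1; center is 4
```

Under this convention the corner pixels of the output equal the four input pixels, and the center of the 5×5 output is the average of all four, which is a quick way to spot arithmetic slips in the hand-computed grid.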
Digital Banking and Fintech N1632 2024/2025
Media: Video

Introduction
This video presentation is designed to assist you in writing your final project at the end of this term. This presentation can serve as the first step towards gaining a practical understanding of the banking industry. If you aspire to work for a bank that is expanding and utilizes the latest financial databases like Bloomberg, this project will be invaluable. Whether you have an upcoming interview with a potential employer or plan to pursue a career in FinTech and Banking, this experience will broaden your horizons and help you identify job opportunities and types of banking that align with your interests and skill sets. This, in turn, will enable you to build a compelling candidate profile as you continue your studies and embark on your job search.

Requirements for the Video Project
This project is an individual assignment, designed to prepare you for future teamwork in a real-world working environment. The video project, constituting 30% of your final grade, will be based on three tasks linked to each part of this module. The video should have a duration of approximately 5-8 minutes.

Attend lectures and seminars: These sessions will provide crucial details about each task and the expected outcomes. Active participation is essential to ensure a comprehensive understanding of the material. Please do not hesitate to ask clarifying questions during these sessions.

Originality: This video must be your own original work. Maintain your unique voice and perspective throughout the script and presentation. You are required to submit all supporting files, including data from Bloomberg and relevant figures for your chosen bank and its competitors, generated using Bloomberg's built-in functions. You must also submit the video script, which includes all references at the end.

By Week 3, each student must select a different bank for their project. This bank selection must be approved by the lecturer.
Video content. There are three main parts (30/35/35). Students must address the following problems in the context of this video:

Define the bank and its competitors:
• Start with a brief overview of the bank, including its size, location, and target market.
• Identify the bank's main competitors and briefly compare their business models, financial performance, and competitive advantages using Bloomberg/Orbis functions.
• Use visuals like charts or tables to present this information concisely and effectively (for instance, market value via the Bloomberg function GP).

Trend analysis of key components of the balance sheet:
• Focus on key metrics that reflect the bank's financial position, such as total assets, liabilities, equity, loans, and deposits, using critical thinking to explain impact factors, supported by Bloomberg resources.
• Use charts or graphs to show trends in these metrics over time and highlight any significant changes or areas of concern, using Bloomberg's built-in figures.
• Be sure to explain the reasons behind the trends you observe and link them to broader economic or industry factors or legal frameworks.

Trend analysis of the income statement:
• Analyse the bank's revenue, expenses, and profitability over time.
• Identify key drivers of revenue growth, such as net interest income, fee income, and trading income.
• Explain how changes in expenses, such as provisions for loan losses or administrative costs, have impacted profitability.
• Analyse the potential impact of relevant regulations on the bank's future profitability.

Video creation
Create your video submission in one of two main ways:
i) Using Adobe Spark Video to create animation-style videos.
ii) Using Zoom to create screencast presentations.
The video can also contain PowerPoint slides and be delivered as a recorded oral presentation. After consulting with the lecturer, it is possible to use other presentation software to complete the presentation.
You can use one or more of the video-making tools that have been presented to you (ScreenCastify, Adobe Spark, VideoScribe), or you can use a more complex one (Adobe Creative Cloud). The sophistication of the tool you use is one of the assessment criteria. If you only use your mobile phone to upload a talking-head video, you are unlikely to achieve the highest passing grade. It is recommended that you do a trial presentation in advance to check whether you can deliver the presentation within the time limit allocated (max 5 minutes). The presentation should include the bank name and the student number.

Useful tips
Can two students research the same bank? No. Therefore, pick the bank you want to work on as soon as possible and let me know when you are sure of it. I will update the list of banks/companies that have been chosen, so please remember to check whether there is any clash of choices. If somebody else has announced the bank you plan to work on ahead of you, you will need to change your target bank.

Attend lectures and seminars: These sessions will provide essential details about each task and expected outcomes. Actively participate to ensure a comprehensive understanding and ask clarifying questions if needed.

Choose wisely: Select a bank operating in the designated country of origin (e.g., UK). Crucial: provide documented proof of registration from the relevant authority (e.g., Companies House for UK banks). Ensure your chosen bank remains the same throughout this project and the blog project, as lecturer approval is required.

Captivate with narrative: Craft a compelling video presentation (5-8 minutes) that weaves a strategic narrative and showcases your storytelling skills. Remember, dry facts and figures alone won't suffice. Engage your audience with a captivating story that brings financial concepts to life. Speak clearly, maintain good pacing, and convey your points effectively through voice and tone.
Student participation during the seminar is important to finish this task and is a considerable factor in the final mark (seminar active participation, submission to Canvas).

IV. Assessment Criteria

Assessment Criteria | Weighting

1. Start with a brief overview of the bank, including its size, location, and target market. Identify the bank's main competitors and briefly compare their business models (using information from annual reports, Bloomberg, etc.).
• Market share charts: visualize market share for the chosen bank and its competitors to identify the key competitor and market value (GP function).
30%

2. Focus on key metrics that reflect the bank's financial position, such as total assets, liabilities, equity, loans, deposits, etc., and identify connections between these metrics.
• Comparative analysis: conduct this analysis in comparison to a selected key competitor.
• Data visualization: utilize charts or graphs to effectively visualize trends in these key metrics over the chosen 5-year period. Emphasize any significant changes or areas of concern observed in the data.
• Bloomberg integration: leverage the built-in functions within the Bloomberg Terminal to extract and analyze the necessary financial data.
• Trend analysis and explanation: explain the reasons behind the observed trends, and link them to broader economic or industry factors, such as Brexit, Covid-19, interest rate changes, and technological advancements.
• Apply course knowledge: integrate knowledge gained from seminars, lectures, and lab sessions into the analysis. This may involve applying economic theories, financial concepts, or analytical frameworks learned during the course.
30%

3. Analyze the bank's revenue, expenses, and profitability over time. Identify key drivers of revenue growth, such as net interest income, fee income, and trading income. Explain how changes in expenses, such as provisions for loan losses or administrative costs, have impacted profitability.
• Bloomberg integration: utilize the built-in functions within the Bloomberg Terminal to extract and analyze the necessary financial data.
• Apply course knowledge: integrate knowledge gained from seminars, lectures, and lab sessions into the analysis. This may involve applying economic theories, financial concepts, or analytical frameworks learned during the course.
• Comparative analysis: conduct this analysis in comparison to a selected competitor.
• Timeframe: the period of analysis is 5 years.
• Regulatory impact: analyze the potential impact of relevant regulations on the bank's future profitability.
30%

5. Presentation Skills
• Clarity of speech: can people hear you clearly and easily understand your pronunciation?
• Time management: did you stay within the allotted time limits for the presentation?
• Audience engagement: did you maintain sufficient eye contact with the audience throughout the presentation?
• Slide design and organization: are your slides visually appealing and easy to read? Do they present only the most relevant information in a clear and organized manner?
• Storytelling and persuasion: is your presentation engaging and easy to follow? Are you able to effectively convey your message and persuade the audience?
• Data source attribution: for all tables and figures, provide proper citations for all external data sources.
10%
N1551: Advanced Management Accounting
Individual coursework
Weight: 40% of total marks. Word count: 2,000 words excluding references and appendices.

Advanced Management Accounting considers planning, control and decision making at a strategic, company-wide level. Performance is measured and managed using financial and non-financial metrics. The purpose of this assignment is for students to demonstrate an ability to critically engage with academic literature that covers strategic performance management and to consider how the same is applied in contemporary practice. In addition, this assignment requires students to critically consider whether the same example from contemporary practice meets managerial and professional accountability as defined by Sinclair (1995).

Requirement: Write an essay which critically examines a recent, publicly available report detailing financial and non-financial performance measures referred to in lectures 1, 4 and 5. The report can be for a company or not-for-profit organisation. Particular attention should be paid to how the measurement approach fits in with the organisation's strategy and whether it succeeds in holding the organisation to account. Your essay must critically consider in detail the relevant literature on strategy, accountability and financial/non-financial performance measures. A principal aim of this work is to demonstrate your understanding of the concepts through discussion of the literature and by applying that understanding to discussion of real-world information. Wider reading will be rewarded, but the main focus should be on the module content.

Guidance notes:
1. Very broadly, the ideal essay approach would be to discuss the literature on strategy, performance measurement and accountability, then consider how this aligns with the report you have identified as an example of contemporary practice. ESG factors are not explicitly required but are likely to feature prominently in your work.
2.
You are advised to thoroughly read, draw on, and refer to relevant articles related to strategy, performance measurement and accountability. These have been discussed in Week 1 (Strategy & Management Accounting), Week 4 (Strategic Cost Accounting & the BSC) and Week 5 (Accountability 1). A selection of articles is available on Canvas. The relevant Drury chapters are Chapters 21, 22 and 23.
3. Your selected company report should be recent (less than five years old) and publicly available. Most annual reports of large companies discuss the company's performance measures in great detail, but your report does not necessarily have to come from an annual report; any public-facing document will suffice. A URL to the original source is required.
4. A key part of your essay should be the discussion of strategy using the concepts discussed in the lectures. You need to examine and explicitly articulate whether the metrics fit the company's strategy and why.
5. Ideally, your essay should refer to industry-specific characteristics, and you are encouraged to support some of your arguments with evidence from the organisation's competitors/peers.
6. Any report that demonstrates how the organisation uses a collection of key performance indicators (or metrics), some of which should be non-financial, is allowed.
7. It will not be sufficient to simply list the elements of the report and describe them without any personal value judgement. You will need to critically engage with the selected report, and discuss its advantages and disadvantages, acknowledging previously made arguments from the literature.
8. Your essay should ideally be structured as follows:
o Introduction
o Main body:
§ Introduce the report and organisation of your choice.
§ Drawing on literature discussing strategy theory, explain how the measures help the development and implementation of the strategy of the organisation.
§ Explain aspects of the industry sector of the company that have helped inform
the development of the metrics being used.
§ Drawing on the literature, explain how the metrics discussed meet managerial and professional accountability as defined by Sinclair (1995).
§ With clear use of strategic management accounting theory (e.g. the BSC), explain potential issues with the measures and discuss how these might be overcome.
o Conclusion
o A list of references you have used.

[NOTE: Generative AI tools can be used in an assistive role. You are permitted to use generative AI tools in your preparation for this work, but your submission must be your own words.] See: Using generative AI in your assessments : Writing and assessments : Skills Hub : University of Sussex.

Learning outcomes to be assessed
1. Demonstrate a knowledge of management accounting systems and advanced techniques in their broader context.
2. Demonstrate an ability to critically evaluate the effectiveness of management accounting techniques and systems.
3. Evaluate and apply contemporary theory and evidence to management practice.
4. Understand the importance of both ethical and methodological assumptions for framing accounting knowledge and practice.

Please see the next page for frequently asked questions about the coursework.

Frequently Asked Questions About the Coursework

Q. Can I use the same report as my friend or another fellow student?
A. No, we expect all students to come up with their own example. You should not use an example that you know is used by one of your fellow students.

Q. It says in the brief that we can use reports from other organisations in the same industry to illustrate some points. Do these have to be unique as well?
A. No, these do not have to be unique.

Q. What companies do you recommend I look for instead?
A. We would suggest looking at companies that are smaller but still publicly listed, for example those listed on the FTSE 250, AIM or NASDAQ. The company needs to manage operations, have customers, and employ staff; otherwise a BSC or similar would not be suitable.
UK companies are required to include a strategic report with their annual report, and this should probably be your first port of call.

Q. I've found a company and they have what looks like a BSC, but it is not actually called a BSC.
A. Yes, this is fine. There are different names for the BSC. Look out for terms such as 'dashboard', 'strategy map', 'strategy prism', 'strategic performance framework', 'KPIs', etc. As long as the company has a set of grouped non-financial performance indicators you should be fine.

Q. I think I found a good report but I'm still not sure.
A. Please feel free to check it with the course convenor.

Q. The BSC I have found has more than 4 perspectives, and they don't quite look like the ones that were discussed in class.
A. Yes, this is entirely expected, because companies tailor their BSC to their strategies and their challenges, which are potentially industry-specific. The thinking around different elements of the BSC has also evolved over time. These points should form the basis of your essay.

Q. Do I need to contact the company to obtain extra information?
A. No, this is not required or expected, and will not lead to extra marks. All information should be available to the general public.

Q. Is there a 10% leeway with respect to the word count?
A. Yes, this is School policy. However, please stay as close to the 2,000 words as you possibly can. Respecting the word limit provides evidence that you are able to succinctly convey your arguments without being overly wordy.

Q. What referencing style should I use?
A. Please use the Harvard style (Author, Date). The Sussex Skills Hub explains it in detail.

Q. Are tables included in the word count?
A. Yes, tables are included in the word count. Please present them in-line in your essay, not at the end.

Q. Are my cover page, references, and appendices included in the word count?
A. No, your cover page, your references and your appendices (if any) are all excluded from the word count.