Math 3544 Probability and Math Statistics
Mid-Term Project 2nd Report: Artistic Style Image Classification
Spring 2025 (at most 7 people per group)
[Instructions: Replace text in red with appropriate information and turn it in in class on the due date. Keep everything else as is. The write-up of the report and future reports should be typeset well.]

PROJECT GOAL AND SCOPE
Traditional image classification models like CNNs perform well on real-world photographs but struggle when applied to artworks. Paintings are inherently subjective and often lack the rigid structure of photographic data. They contain abstract elements, diverse brushwork, color irregularities, and symbolic components that standard classification models may misinterpret. The challenge lies in developing a model that can learn these abstract representations without overfitting to low-level patterns. We aim to develop a reliable and interpretable image classification system that automatically categorizes artworks into different artistic styles, including Abstract Expressionism, Analytical Cubism, Action Painting, and Art Nouveau Modern. The classification should remain robust under visual ambiguity, style overlap, and limited labeled data. We propose a comparative framework combining both convolution-based and attention-based deep learning methods. Specifically, we apply classical CNN architectures (ResNet50, EfficientNetB0) alongside the Vision Transformer (ViT), a self-attention-based approach. Models are trained and evaluated on a curated subset of the WikiArt dataset. Evaluation metrics include classification accuracy, macro F1-score, and the confusion matrix. Furthermore, we apply PCA to analyze learned features and Grad-CAM to visualize attention mechanisms.

BACKGROUND/LIT REVIEW
Image classification has been widely studied using deep neural networks. Krizhevsky et al. (2012) introduced AlexNet, which sparked the widespread use of CNNs.
CNNs rely on convolutional filters to extract spatial hierarchies of features, which has proven effective on structured datasets like ImageNet. ResNet introduced residual connections, reducing the vanishing gradient problem and allowing deeper networks. EfficientNet, proposed by Tan and Le (2019), scales width, depth, and resolution using a compound coefficient and achieves top-1 accuracy of 85.5% on ImageNet with significantly fewer parameters than ResNet. ViT, introduced by Dosovitskiy et al. (2020), replaces convolution with self-attention, treating an image as a sequence of patches. While ViT shows excellent results, with an 84.2% top-1 accuracy for ViT-L/16, it requires a large amount of training data and generalizes differently from CNNs. In the context of fine art classification, He et al. (2023) combined contextual embeddings and visual features, reporting F1-scores above 0.80 using hybrid attention-CNN architectures. This supports the hypothesis that self-attention mechanisms can be effective in tasks involving abstract, symbolic images. Our study builds upon these approaches to determine which architecture, convolutional or transformer, performs better for fine art classification under data constraints.

CASE STUDY
In this study, we draw upon Dosovitskiy et al. (2020) as a key methodological foundation due to the Vision Transformer's ability to model long-range dependencies in image data, which is particularly well-suited for capturing the abstract and stylistic nuances present in artwork. Their approach is not only state-of-the-art in image classification tasks but also aligns with the broader goals of this course in terms of exploring emerging architectures for real-world applications. The dataset used in our project is a carefully selected and preprocessed subset of the WikiArt collection, which includes 500 paintings evenly distributed across four artistic styles. All images were resized to 224x224 pixels and normalized.
Labels were encoded using one-hot encoding, and the data was split into training and validation sets. Preliminary visualizations using PCA confirmed a degree of separability among the styles, with clusters forming in the reduced-dimensional space. These visual groupings suggest that stylistic differences are detectable even at the feature level. Our primary goal is to evaluate the comparative effectiveness of three deep learning architectures, ResNet50, EfficientNetB0, and ViT, on this task. In addition to these deep models, we employ logistic regression and random forest classifiers as baselines using bottleneck features. All models are evaluated using accuracy and F1-score, with special emphasis on F1 due to the artistic domain's sensitivity to false negatives, which could lead to the misrepresentation of styles. Throughout the study, we use interpretability tools such as Grad-CAM to visualize which parts of each image the model attends to during classification, and we plot confusion matrices to better understand which classes are often confused. In the event of underperformance, we incorporate augmentation techniques and learning-rate adjustments, and consider ensemble strategies combining CNN and Transformer outputs. This methodologically diverse setup ensures that our project rigorously explores the advantages and limitations of both traditional convolutional and modern transformer-based vision models.

DATA
The dataset is a curated subset of the WikiArt collection containing 500 paintings evenly distributed across the four artistic styles listed above. All images were resized to 224x224 pixels and normalized, labels were one-hot encoded, and the data was split into training and validation sets.

TAKE-HOME DELIVERABLES
We fine-tuned a ResNet50 model pretrained on ImageNet using a subset of 500 images from the WikiArt dataset, covering four distinct artistic genres.
The model was trained for 10 epochs with a batch size of 8, and training performance was tracked across both training and validation sets.

Accuracy Trends
The training accuracy started at 80.86% in the first epoch and remained relatively stable, oscillating slightly but generally staying between 79% and 82%. The validation accuracy began at 80.0% and rose steadily to 81.0%, indicating solid generalization on unseen images. This consistency demonstrates the effectiveness of the pretrained ResNet50 model, even when applied to complex, highly subjective art images with stylistic variance.

Loss Behavior
Training loss increased modestly from 0.6061 to 0.6807, suggesting optimization instability or convergence saturation rather than classic overfitting (in which training loss falls while validation loss rises). The validation loss decreased progressively from 0.6113 to 0.5955, a positive signal of improved generalization. The decreasing validation loss and rising validation accuracy confirm that the model was learning meaningful patterns rather than simply memorizing training examples.

Training Dynamics
As visualized in the training and validation curves, the training loss and accuracy curves remained smooth, with no abrupt divergence. The validation accuracy plateaued early but remained high, and the validation loss continued to decline, showing healthy optimization behavior. No early stopping was triggered, and performance remained steady throughout the 10 epochs. These results suggest that ResNet50 performs reliably for artistic image classification, achieving over 81% validation accuracy on a limited dataset. Its pretrained convolutional filters appear to transfer well to the domain of paintings, capturing stylistic nuances despite their abstract and diverse nature.

FUTURE DELIVERABLES
1. Train and Evaluate EfficientNet and ViT Models
We will implement EfficientNetB0, known for its parameter efficiency, and the Vision Transformer (ViT), which captures long-range dependencies using self-attention.
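Since accuracy, the macro F1-score, and the confusion matrix are the comparison metrics used throughout this report, a from-scratch sketch may help fix the definitions. The labels below are illustrative, not the project's actual predictions; class indices 0-3 stand for the four styles.

```python
# Confusion matrix and macro F1-score for a 4-class problem, computed from scratch.

def confusion_matrix(y_true, y_pred, n_classes):
    # cm[t][p] counts examples of true class t predicted as class p
    cm = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        cm[t][p] += 1
    return cm

def macro_f1(cm):
    # Per-class F1 from the confusion matrix, then an unweighted average
    n = len(cm)
    f1s = []
    for c in range(n):
        tp = cm[c][c]
        fp = sum(cm[r][c] for r in range(n)) - tp
        fn = sum(cm[c][r] for r in range(n)) - tp
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / n

# Illustrative labels (8 validation images, 2 per class)
y_true = [0, 0, 1, 1, 2, 2, 3, 3]
y_pred = [0, 1, 1, 1, 2, 3, 3, 3]
cm = confusion_matrix(y_true, y_pred, 4)
print(macro_f1(cm))
```

Macro averaging treats all four styles equally regardless of class frequency, which is why it is emphasized over plain accuracy when false negatives on a rare style matter.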
The same dataset and preprocessing pipeline will be used to ensure a fair comparison across architectures. Metrics such as validation accuracy, F1-score, and the confusion matrix will be collected to compare model performance.
2. Side-by-Side Model Comparison
We will compare all three models (ResNet50, EfficientNetB0, ViT) on accuracy and F1-score, computational efficiency, and visual interpretability (e.g., Grad-CAM outputs).

REFERENCES
[1] Deng, J., Dong, W., Socher, R., Li, L. J., Li, K., & Fei-Fei, L. (2009). ImageNet: A large-scale hierarchical image database. IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[2] Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems (NeurIPS).
[3] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., & Houlsby, N. (2020). An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. International Conference on Learning Representations (ICLR).
[4] He, T., Zhang, L., & Wang, Y. (2023). Integrating Contextual Knowledge to Visual Features for Fine Art Classification. arXiv preprint.
Dataset: https://www.kaggle.com/datasets/steubk/wikiart/data
CSE5023: Recent Advances in Deep Learning
Assignment 1: Survey Paper on Foundation Models

1 Objective
The primary objective of this assignment is to delve into the realm of foundation models, specifically focusing on a direction that is intricately related to your own research. This task aims to foster a deeper understanding of the subject matter, encourage critical thinking, and stimulate original research ideas.

2 Requirements
2.1 Topic Selection
Choose a specific direction within the broader scope of foundation models that aligns closely with your research interests. This could be in the domain of AI for X, where X represents your particular research focus.

2.2 Content Structure
• Abstract: Provide a concise summary of the survey paper, outlining the main themes, findings, and contributions.
• Introduction: Introduce the topic, its relevance to your research, and the significance of foundation models in the chosen domain.
• Literature Review/Methods: Present a comprehensive review of existing literature related to the selected topic. Discuss various methods, techniques, and models that have been employed in previous studies.
• Discussion: Analyze the strengths and weaknesses of the current state of the art, identify gaps in the research, and discuss potential areas for improvement.
• Possible Further Directions: Propose novel research directions that could advance the field. These should be informed by your analysis and aligned with your research interests.
• Preliminary Experiments (Optional): If applicable, include results from any preliminary experiments or simulations that support your research directions.

2.3 Length and Formatting
• The survey paper must be a minimum of six pages long, excluding the references.
• Follow a standard academic formatting style, including proper citations and references.

2.4 Originality and Plagiarism Policy
• The survey must be entirely your own work.
Direct copying from any source, including ChatGPT or its variants, is strictly prohibited.
• While you may use ChatGPT as a tool for refining language, the final content must be original and reflect your own research thoughts.
• Any instance of direct copying from ChatGPT or other sources will be treated as plagiarism.
• The consequences of plagiarism will be determined according to the rules and regulations of the department and the university.

3 Evaluation Criteria
The assignment will be marked out of 80 marks, with the following breakdown:
• Relevance to your research direction: 10 marks.
• Depth of literature review and understanding of methods: 15 marks.
• Quality of discussion and analysis: 15 marks.
• Originality of proposed further directions: 15 marks.
• Clarity of writing and presentation: 25 marks.
Additionally, the report will be evaluated for originality, contributing to a separate score of 20 marks.

4 Submission Instructions
To ensure a smooth and organized submission process for Assignment 1, please adhere to the following detailed instructions.

4.1 Preparing Your Submission
• Compilation of Files:
  - Gather all the necessary files related to Assignment 1, which should include both the LaTeX source files (zipped) and the compiled PDF document.
  - Ensure that your LaTeX source files are well-organized and compile without errors.
• Creation of ZIP Archive:
  - Once you have all the files ready, create a ZIP archive to bundle them together.
  - This can typically be done by right-clicking on the folder containing your files and selecting the "Compress" or "Send to ZIP" option, depending on your operating system.

4.2 Naming the ZIP Archive
• Format:
  - It is crucial to name the ZIP file in a specific format to facilitate easy identification.
  - The file name should be structured as follows: studentnumber_assignment1.zip.
  - For instance, if your student number is 123456, the file name should be 123456_assignment1.zip.
• Any deviation from the specified format may result in delays or issues with the processing of your submission.

4.3 Submission Platform
Please submit the archive that includes all your files through Blackboard. The deadline for Assignment 1 (all parts and all tasks) is 25 May 2025 at 23:55 (Beijing Time).
5. Neural Networks and Machine Learning
ECO374H1 Department of Economics Summer 2025

Artificial Neural Networks
- Artificial neural networks (ANNs) are models that allow complex nonlinear relationships between the response variable and its predictors
- A neural network is composed of observed and unobserved random variables, called neurons (also called nodes), organized in layers
- The observed predictor variables form the "Input" layer, and the predictions form the "Output" layer
- Intermediate layers contain unobserved random variables (so-called "hidden neurons")

Special Case: Linear Regression Model
- The simplest ANN with no hidden layers is equivalent to a linear regression
- In the ANN notation, the formula for the fitted regression model is y = a + w1 x1 + w2 x2 + w3 x3 + w4 x4
- The parameters wk attached to the predictors xk are called weights, and the intercept a is called a bias

Nonlinear Neural Networks
- Once we add intermediate layer(s) with hidden neurons and activation functions, the ANN becomes nonlinear
- An example shown in the following figure is known as the feed-forward network (FFN)
- The weights wk,j are selected in the ANN framework using a machine learning algorithm that minimizes a loss function, such as the Mean Squared Error (MSE) or Sum of Squared Residuals (SSR)
- In the special case of linear regression, OLS provides an analytical solution to the learning problem that minimizes SSR
- In general ANNs the response variable is a nonlinear function of the predictors, and hence OLS is not applicable
- A neural network with many hidden layers is called a Deep Neural Network (DNN) and its training algorithm is called deep learning

Feed-Forward Network
- In an FFN each layer of nodes receives inputs from the previous layers
- The inputs to each node are combined using a weighted linear combination
- The result is then modified by a nonlinear "activation" function before being output
- The outputs of the nodes in one layer are inputs to the
next layer
- In the figure above, the inputs (blue dots) into hidden neuron j are combined linearly as z_j = a_j + Σ_{k=1}^{K} w_{k,j} x_k (1)
- At each hidden neuron (green dot), a nonlinear activation function s(z_j) is applied, and the model prediction is then obtained as y = a + Σ_{j=1}^{J} w_j s(z_j) (2)

Activation Function
- The activation function s(z_j) adds flexibility and complexity to the model
- Without the activation function the model would be limited to a linear combination of predictors (multiple regression)
- Popular activation functions are the logistic (or sigmoid) function s(z) = 1 / (1 + e^(-z)) (3) and the tanh function

FFN Model
- Consider the general case of K predictors and J hidden nodes in one hidden layer, with the functional form of z_j from (1) and the logistic activation function (3)
- The FFN model can then be expressed as y_i = a + Σ_{j=1}^{J} w_j s(a_j + Σ_{k=1}^{K} w_{k,j} x_{i,k})
- Note that the model is nonlinear in x_{i,k} due to the presence of the nonlinear activation function

ANN Model Formulation
- For building an ANN model, we need to pre-specify in advance:
  - The number of hidden layers
  - The number of nodes in each hidden layer
  - The functional form of the activation function
- The parameters a, a1, ..., aJ and w1, ..., wJ and w11, ...
, wKJ are "learned" from the data

Training Neural Networks
- Training a network on data involves searching for the set of weights that best enable the network to model the patterns in the data
- Training (or learning) presents the network with data to modify the weights
- The goal of a learning algorithm is typically to minimize a loss function that quantifies the lack of fit of the network to the data

Learning
- Supervised learning (our focus)
  - We supply the ANN with inputs and outputs, as in the examples above
  - The weights are modified to reduce the difference between the predicted and actual outputs using a loss function
  - Examples: NN (auto)regression; face, speech, or handwriting recognition; spam detection
- Unsupervised learning
  - We supply the ANN with inputs only
  - The ANN works only on the input values so that similar inputs create similar outputs
  - Examples: K-means clustering, dimensionality reduction

Forward Propagation in Supervised Learning of FFNs
- The process of forward propagation in FFNs involves:
  - Computing z_j, as in (1), at every hidden neuron j
  - Applying the activation function s(z_j) at each j, as in (3)
  - Constructing a linear combination of the s(z_j) to obtain the predicted output
- Once the predicted output is obtained at the output layer, we compute the loss or "error" (predicted output minus the actual output)

Backpropagation in Supervised Learning of FFNs
- The goal of backpropagation is to adjust the weights in each layer to minimize the overall error (loss) at the output layer
- One iteration of forward propagation and backpropagation over the training data is called an epoch
- Typically many epochs (often tens of thousands) are required to train a neural network well
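The forward-propagation steps above can be sketched in plain Python for one hidden layer with logistic activation. All weights and inputs below are illustrative, not learned.

```python
import math

# Forward pass for a feed-forward network, matching the slide notation:
#   z_j  = a_j + sum_k w[k][j] * x[k]      (hidden pre-activation, eq. (1))
#   s(z) = 1 / (1 + exp(-z))               (logistic activation, eq. (3))
#   yhat = a + sum_j w_out[j] * s(z_j)     (output, eq. (2))

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def ffn_predict(x, hidden_bias, hidden_w, out_bias, out_w):
    """x: K inputs; hidden_w: K x J weight matrix; out_w: J output weights."""
    K, J = len(x), len(hidden_bias)
    z = [hidden_bias[j] + sum(hidden_w[k][j] * x[k] for k in range(K))
         for j in range(J)]
    s = [logistic(zj) for zj in z]
    return out_bias + sum(out_w[j] * s[j] for j in range(J))

# Example: K = 2 predictors, J = 2 hidden nodes (illustrative weights)
y_hat = ffn_predict(x=[1.0, -0.5],
                    hidden_bias=[0.1, -0.2],
                    hidden_w=[[0.5, -0.3], [0.8, 0.4]],
                    out_bias=0.05,
                    out_w=[1.2, -0.7])
print(y_hat)
```

Training would then adjust all the weights by backpropagating the loss gradient; the sketch above covers only the forward step that the loss is computed from.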
4. Nonlinear Models
ECO374H1 Department of Economics Summer 2025

Linearity vs Nonlinearity
- Consider a stochastic process {Yt} and an information set I_{t-1} = {Y_{t-1}, Y_{t-2}, ..., X_{t-1}, X_{t-2}, ...}
- We say that {Yt} is linear in mean if the conditional mean of Yt is a linear function of I_{t-1}, that is E[Yt | I_{t-1}] = φ1 Y_{t-1} + φ2 Y_{t-2} + ... + β1 X_{t-1} + β2 X_{t-2} + ...
- Under this definition, the ARMA(p, q) models we have analyzed are linear models
- We say that {Yt} is nonlinear if the conditional mean of Yt is a nonlinear function of I_{t-1}
- By nonlinear we mean any functional form that is not linear

Threshold Nonlinearity
- As an example of a nonlinear model, suppose that {Yt} behaves differently for Y_{t-1} > 0 than for Y_{t-1} ≤ 0, with φ11 ≠ φ21
- Equivalently, Yt = φ11 Y_{t-1} I(Y_{t-1} > 0) + φ21 Y_{t-1} I(Y_{t-1} ≤ 0) + εt (1), where the indicator function I(A) = 1 if statement A is true and I(A) = 0 otherwise
- This model is called the self-exciting threshold autoregressive (SETAR) model with threshold value 0
- Here {Yt} is a mixture of two separate linear processes, which together constitute a nonlinear process
- For a simulation from the SETAR process, see code file 4a. SETAR Simulation

Nonlinear Dynamics in Data: U.S. Industrial Production
- U.S. industrial production (year-to-year changes) exhibits threshold nonlinearity between recessions (shaded) and expansions

Nonlinear Dynamics in Data: 3-month T-bills
- The U.S. 3-month Treasury bills rate can be viewed as a mixture of two processes: a random walk for low rates and a stationary process for high rates

Linearity vs Nonlinearity
- ACF and PACF only measure linear dependence in {Yt}
- ARMA-type models only exploit linear dependence in {Yt} for forecasting
- As such, they can be viewed as first-order linear approximations to nonlinear processes
- For detecting and modeling nonlinear dependence we mainly rely on statistical testing and forecasting performance measures

Nonlinear Models
- We will analyze the following types of nonlinear models:
  1. Threshold models
  2. Smooth transition models

Threshold Autoregressive Process (TAR)
- The Threshold Autoregressive Process (TAR) takes the general form Yt = Σ_{j=1}^{r} (φj0 + φj1 Y_{t-1} + ... + φjp Y_{t-p}) I(c_{j-1} < x_t ≤ c_j) + εt (2)
- The model contains r regimes, with a separate AR(p) process in each regime
- The threshold variable xt can be any variable in or outside of the model
- When xt is a lag of Yt, the TAR process is called self-exciting (SETAR)

SETAR
- In (1), we have introduced SETAR(1) with xt = Y_{t-1}, c1 = 0, and c2 = ∞
- Using the TAR notation of (2) with intercepts, the SETAR(1) model becomes Yt = (φ10 + φ11 Y_{t-1}) I(Y_{t-1} < c1) + (φ20 + φ21 Y_{t-1}) I(Y_{t-1} ≥ c1) + εt (3)
- Using D = I(Y_{t-1} ≥ c1) = 1 − I(Y_{t-1} < c1), rewrite model (3) as Yt = φ10 + φ11 Y_{t-1} + φ̄0 D + φ̄1 Y_{t-1} D + εt (4), where φ̄0 = φ20 − φ10 and φ̄1 = φ21 − φ11

SETAR
- Model (4) corresponds to a regression model
- Assuming εt is iid, we can use OLS to regress Yt on Y_{t-1}, D, and Y_{t-1} D
- We can test for linearity with H0: φ̄0 = 0 and φ̄1 = 0, which is equivalent to H0: φ20 = φ10 and φ21 = φ11
- For an application to 3-month U.S. Treasury bill rates, see code file 4b. SETAR Application

Smooth Transition
- The TAR model assumes that the shift from one regime to the next happens abruptly, due to the binary nature of the indicator function I(Y_{t-1} ≥ c)
- In many cases we would expect a smooth transition from one state to the next, e.g. in macroeconomic variables such as GNP
- The Smooth Transition Autoregressive Model (STAR) can be specified as Yt = φ0 + φ11 Y_{t-1} + φ12 Y_{t-2} + ... + φ1p Y_{t-p} + (φ20 + φ21 Y_{t-1} + φ22 Y_{t-2} + ... + φ2p Y_{t-p}) G(st, g, c) + εt, where G(st, g, c) is a transition function that is bounded and continuous in a transition variable st (typically lags of Yt)
- The parameter g captures the speed of transition and c is a threshold parameter
- A popular special case is the STAR(1) model Yt = φ0 + φ11 Y_{t-1} + (φ20 + φ21 Y_{t-1}) G(st, g, c) + εt
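A minimal simulation of the SETAR process in equation (1) can be written in a few lines of Python. The φ values and the sample size below are illustrative (the course version lives in code file 4a).

```python
import random

# Simulate the SETAR process of equation (1):
#   Y_t = phi11 * Y_{t-1} * I(Y_{t-1} > 0) + phi21 * Y_{t-1} * I(Y_{t-1} <= 0) + eps_t
# with eps_t ~ N(0, 1). Both |phi| < 1, so each regime is stationary.

def simulate_setar(n, phi11, phi21, seed=0):
    rng = random.Random(seed)
    y = [0.0]
    for _ in range(n - 1):
        prev = y[-1]
        phi = phi11 if prev > 0 else phi21  # regime switch at threshold 0
        y.append(phi * prev + rng.gauss(0.0, 1.0))
    return y

path = simulate_setar(500, phi11=0.9, phi21=0.2)
```

With phi11 = 0.9 and phi21 = 0.2 the series is persistent while above zero and mean-reverts quickly while below zero, which is the kind of asymmetry the industrial production example exhibits between expansions and recessions.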
MTH 223: Mathematical Risk Theory
Tutorial 3

Part I
1. For models involving general liability insurance, actuaries at the Insurance Services Office once considered a mixture of two Pareto distributions. They decided that five parameters were not necessary. The distribution they selected has c.d.f. Note that the shape parameters in the two Pareto distributions differ by 2. The second distribution places more probability on smaller values. This might be a model for frequent, small claims, while the first distribution covers large but infrequent claims. Determine the mean and the second moment of the above two-point mixture distribution.
2. Let X be a random variable with a Pareto distribution PAR(α, θ). Let Y = ln(1 + X/θ). Determine the name of the distribution of Y and identify its parameters by looking up the Distribution Table.
3. Suppose that X | Λ has a Weibull survival function, x > 0, and Λ has an exponential distribution with mean θ > 0. Demonstrate that the unconditional distribution of X is loglogistic with a pdf as follows
4. Suppose that X is a random variable with p.d.f. Determine the p.d.f. and c.d.f. of .
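The specific c.d.f. in Problem 1 did not survive extraction, but the moment computation follows the general two-point-mixture rule: raw moments of a mixture are the weighted raw moments of the components. Assuming, as the problem suggests, a weight $a$ on a Pareto$(\alpha, \theta)$ component and $1-a$ on a Pareto$(\alpha+2, \theta)$ component, the calculation would run:

```latex
% Raw moments of a two-point mixture:
E[X^k] = a\,E[X_1^k] + (1-a)\,E[X_2^k].
% Pareto(\alpha,\theta) raw moments (Loss Models distribution table):
%   E[X] = \theta/(\alpha-1)             for \alpha > 1,
%   E[X^2] = 2\theta^2/((\alpha-1)(\alpha-2))  for \alpha > 2.
E[X]   = a\,\frac{\theta}{\alpha-1} + (1-a)\,\frac{\theta}{\alpha+1},
\qquad
E[X^2] = a\,\frac{2\theta^2}{(\alpha-1)(\alpha-2)}
       + (1-a)\,\frac{2\theta^2}{(\alpha+1)\,\alpha}.
```

The second component's moments come from substituting $\alpha+2$ for $\alpha$ in the table entries, which is why its shape parameter appearing "2 larger" makes both moments finite for a wider range of $\alpha$.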
3. Autoregressive (AR) Model
ECO374H1 Department of Economics Summer 2025

Cycles
- ρt of Yt within A (or B or C or D) is positive (lag of 1 to 2 years)
- ρt of Yt between A and B (or C and D) is negative (lag of 4 to 5 years)
- ρt of Yt between A and D is positive (lag of 10 to 11 years)
- The ACF of the Unemployed persons data changes between positive and negative, which is typical for a cycle (see code file 3a. AR Motivation)

Model ACF
- We would like to fit to the data a time series model that can closely approximate the dynamic pattern in the data
- We will show that the Autoregressive (AR) process has such an ACF
- Hence, an AR model component will be suitable to fit to the data and for forecasting of the series
- Note the contrast with the ACF of the MA model discussed previously

AR Model
- An autoregressive model of order p, denoted AR(p), is given by Yt = φ1 Y_{t-1} + φ2 Y_{t-2} + ... + φp Y_{t-p} + εt, where {εt} is the white noise process
- We will start with AR(1) and then extend the analysis to AR(p)
- For each process, we will ask three questions: What does a time series of the given AR process look like? What does its ACF look like? What is the optimal forecast?

AR(1)
- For simulated data from the AR(1) process Yt = φ Y_{t-1} + εt, see code file 3b. AR1 Simulation (section 1. Simulated Data)
- The parameter φ is called the persistence parameter since it influences the "persistence" of the series
- The series with φ = 0.95 stays longer above or below the unconditional mean than the series with φ = 0.4
- The series with φ = 1 is extremely persistent; in fact, it is non-stationary
- AR(1) is stationary only for |φ| < 1
- For the ACF and PACF of the AR(1) process see code file 3b. AR1 Simulation (section 2. ACF and PACF)
- The same features as for positive φ also hold for negative φ, but with alternating signs (section 3. Negative φ)

Forecast for h = 1
- The optimal forecast of the AR(1) model is equal to the conditional expectation
- For the forecasting horizon h = 1, Y_{t+1} = φ Yt + ε_{t+1}
- Since Yt ∈ It, the forecast is ŷ_{t+1|t} = E[Y_{t+1} | It] = φ Yt

Forecast Error for h = 1
- The 1-period ahead forecast error is e_{t+1} = Y_{t+1} − ŷ_{t+1|t} = ε_{t+1}
- The forecast variance is Var(e_{t+1}) = σ²

Density Forecast for h = 1
- Assuming ε_{t+1} ~ N(0, σ²), the density forecast is Y_{t+1} | It ~ N(φ Yt, σ²)
- The 95% confidence interval is then φ Yt ± 1.96 σ, where 1.96 is the 95% critical value from the Normal distribution

Forecasts for h > 1
- The optimal forecast for h > 1 is ŷ_{t+h|t} = φ^h Yt
- Similarly, we can show that the forecast variance is σ² (1 + φ² + ... + φ^{2(h−1)})

Forecasts for h → ∞
- As h → ∞, the forecast ŷ_{t+h|t} = φ^h Yt converges to 0, which is the unconditional mean of {Yt}, and the forecast variance converges to σ² / (1 − φ²), which is the unconditional variance of {Yt}
- Hence, the AR(1) model is suitable for forecasts in the short to medium term
- Convergence of its forecasts to the unconditional moments still indicates "short" memory of the process, albeit relatively longer than for MA(1)
- Note that these results hold only for stationary AR(1) with |φ| < 1
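The forecasting results above are easy to verify numerically. A small plain-Python sketch follows; φ, σ, and the sample size are illustrative (compare with code file 3b).

```python
import random

# AR(1) simulation and multi-step forecasting:
#   Y_t = phi * Y_{t-1} + eps_t,  eps_t ~ N(0, sigma^2)
# h-step point forecast:  yhat_{t+h|t} = phi**h * Y_t
# h-step forecast variance: sigma^2 * (1 + phi^2 + ... + phi^(2(h-1)))
# As h grows, the forecast shrinks toward 0 (the unconditional mean) and the
# forecast variance rises toward sigma^2 / (1 - phi^2) (the unconditional variance).

phi, sigma = 0.8, 1.0
rng = random.Random(42)

# Simulate a sample path and keep the last observation Y_t
y = 0.0
for _ in range(200):
    y = phi * y + rng.gauss(0.0, sigma)

# Point forecasts and 95% interval half-widths for a few horizons
for h in (1, 5, 20):
    point = phi**h * y
    var = sigma**2 * sum(phi**(2 * i) for i in range(h))
    half = 1.96 * var**0.5
    print(h, point, (point - half, point + half))
```

At h = 20 the point forecast is already close to zero and the interval half-width is close to 1.96 times the unconditional standard deviation, illustrating why AR(1) forecasts are informative only at short to medium horizons.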
MGF1100: Managerial Communication
Assessment 4: Job search & online profile
Due Date: Monday 26 May 2025 by 9:30am
Weighting/Value: 40%
Details of Task: Students will be asked to search and apply for a job within their field of study and create an online profile on LinkedIn.
Assessment requirements: You must include the following sections in your assignment:
1. A title page which includes the title of the assessment, name and due date.
2. A copy of the chosen job (copy and paste the advertised job and the link to the position). This is a requirement of the assessment. In addition, a brief justification of your reasons for selecting this vacancy (maximum 100 words).
3. A cover letter applying for the chosen job (maximum one page).
4. An updated resume (maximum two pages).
5. Interview preparation
● Using Microsoft Copilot, generate five questions that could be asked in the interview for the specific job you have chosen. Include screenshots of the prompts you have entered in Copilot and its responses in an appendix. Explain how you structured your prompts and how you refined Copilot's responses to arrive at the final five questions.
● Prepare responses for the five questions. Justify, using support from relevant literature, your rationale for how you would answer these questions. In other words, do NOT merely answer these questions, but explain how you would go about planning a response to these questions in an interview. Use Copilot to refine your answers. You will need to include screenshots of your prompts and Copilot's responses in an appendix. Explain how you structured your prompts and how you refined Copilot's responses to arrive at your final answers to the interview questions (maximum 200 words per question; 1000 words in total).
6. The link to your updated LinkedIn profile.
Use the following link as a starting point: https://www.linkedin.com/help/linkedin/answer/112133/how-do-i-create-a-good-linkedin-profile-?lang=en
Other requirements:
● You are encouraged to use Copilot to generate and/or refine the cover letter and resume. Note that the contents of your cover letter and resume must be real. Hence, it is important for you to feed Copilot your own information. Please do not provide Copilot with your personal information (e.g. your address, phone number, email address). You must include the screenshots of the prompts and Copilot's responses in an appendix. Further information on how to do this will be discussed in tutorials.
● A minimum of 5 relevant references are to be used (section 5).
● Use APA 7 reference style throughout the report and reference list.
● Use Times New Roman or Arial 12-point font (you can decide what line spacing to use).
● All parts of the assignment must be presented in one file (Word file only), using page breaks to delineate the different sections plus the references and appendices. Apart from the copy of the job (copy and paste), all other sections must be written in Word format, and you must NOT use resume templates.
● Please ensure that you check your grammar and spelling before submission.
● Please check Moodle for the rubric and other relevant information.
● The final assessment must include the following sections (each section MUST start on a new page):
o Title page
o Screenshot of the WHOLE job, with link and justification
o Cover letter
o Resume
o Interview questions and answers and hyperlink to your LinkedIn profile
o References
o Appendices
Assignment Remit
Programme Title: BSc Economics
Module Title: LC Applied Economics and Statistics
Module Code: 08 29165
Assignment Title: Group project
Level: LC
Weighting: 30%
Hand Out Date: 20/03/2025
Deadline Date & Time: 08/05/2025 12pm
Feedback Post Date: 9 June 2025
Assignment Format: Report
Assignment Length: 1500 words
Submission Format: Online Team
Module Learning Outcomes: This assignment is designed to assess the following module learning outcomes. Your submission will be marked using the Grading Criteria given in the section below.
LO 1. demonstrate knowledge and understanding of basic statistical methods;
LO 2. apply economic knowledge to real world data;
LO 3. use Microsoft Excel to manipulate, analyse and display economic data in graphical form.
Assignment: Please find details in the attached AES Group Project 2025.docx.
Grading Criteria / Marking Rubric
Your submission will be graded according to the following criteria:
1. The validity of the research question.
2. Articulation of the empirical/theoretical model.
3. Quality of data collection.
4. Quality of data analysis.
5. Interpretation of the results.
See the marking rubric at the end of the remit for more information on how your work will be marked and graded.
Ethical Use of Generative AI (GenAI)
You are permitted to use GenAI to support your submission for this assessment. You may use it for the following activities:
• Researching and refining your ideas
• Information retrieval or background research
• Drafting an outline to organise or summarise your thoughts
• Refining research questions
• Checking spelling and grammar
Applying GenAI tools should be done with human oversight and control. You should carefully review the results before using them, as AI can generate authoritative-sounding output that can be incorrect, incomplete, uncritical, or biased. You may not submit any work generated by an AI tool as your own.
Where you include any material generated by an AI tool, it should be properly declared just like any other reference material. Alongside your assignment you should also provide a commentary in the Cover Sheet detailing how GenAI has been used to develop your final submission. If you have not used GenAI tools, you should clearly state so. Plagiarism, including that which results from using GenAI, is a form of academic misconduct that will be dealt with under the University’s Code of Practice on Academic Integrity. https://intranet.birmingham.ac.uk/as/registry/policy/conduct/plagiarism/index.aspx University guidance on ethical use of GenAI can be found here: https://intranet.birmingham.ac.uk/as/libraryservices/asc/student-guidance-gai.aspx
COMP5048 Assignment 2 COMP5048 Visual Analytics 2025 Assignment 2: Group Assignment Deadlines: ● Presentation: Week 11, May 14 WED 11:59pm (Oral Presentation: Week 11-13, 6-9pm) ● Final Report/Individual Report: Week 13, May 29 THU 11:59pm Choose one data set and produce good visualisations to support analytic tasks of the data.
1. Youtube Trending Videos (https://www.kaggle.com/datasets/canerkonuk/youtube-trending-videos-global)
2. IMDB Movies (https://www.kaggle.com/datasets/asaniczka/tmdb-movies-dataset-2023-930k-movies)
3. Amazon Kindle Books (https://www.kaggle.com/datasets/asaniczka/amazon-kindle-books-dataset-2023-130k-books)
4. THE World University Rankings (https://www.kaggle.com/datasets/r1chardson/the-world-university-rankings-2011-2023)
5. World Weather Repository (https://www.kaggle.com/datasets/nelgiriyewithana/global-weather-repository)
6. Australian Fatal Road Accident (https://www.kaggle.com/datasets/deepcontractor/australian-fatal-car-accident-data-19892021)
7. Historical Olympic Dataset (https://www.kaggle.com/datasets/muhammadehsan02/126-years-of-historical-olympic-dataset)
8. Spotify High & Low Popularity Tracks (https://www.kaggle.com/datasets/solomonameh/spotify-music-dataset)
9. ATP Professionals Matches (https://www.kaggle.com/datasets/gmadevs/atp-matches-dataset)
10. DBLP Citations (https://www.kaggle.com/datasets/agungpambudi/research-citation-network-5m-papers)
Group Work Instructions: For the selected data set: 1. Design ○ Tasks: Define significant and meaningful tasks based on various aspects of the data: ■ Simple tasks: Overview, simple statistics, e.g., ranking. ■ Middle-level tasks: Identify clusters, correlation, similarity. ■ Complex tasks: Identify relationships, temporal dynamics, comparison. 
○ Data processing: Extract subsets from the selected data for each data type: ■ High-dimensional data ■ Graph data ■ Dynamic data ○ Analysis: Analyse these data to support Tasks ○ Visualisation: Visualise each data type to support Visual Analysis 2. Implementation: You can: ○ Use any existing tools ○ Design/implement new algorithms/methods ○ Design/implement a new visual analytic system ○ You must acknowledge all your sources 3. Evaluation: Evaluate how effectively your visualisations support Tasks: ○ Visual analysis, Storytelling, Pros/Cons 4. Demo/Animation of your group’s system/visualisation as a movie 5. For Presentation and Final Report: ○ Each group should extract at least one subset per data type ○ Each group should have at least one collaborative work (for data processing, analysis, visualisation, implementation, demo/animation) ○ Each student should create at least one visualisation ○ Each visualisation should be significantly different from others 6. For Individual Report: ○ Each student should extract at least one different subset per data type ○ Each student should create at least one visualisation per data type ○ The subset/task/visualisation should be significantly different from others in the same group 7. Report Writing: use the correct terminology consistent with the lecture notes Submission Instructions: 1. Group Presentation (10 marks): Canvas -> Assignment 2 Presentation ● Only one submission per group ● Submit presentation slides in PDF format ● Submit any animation/demo movie in one mp4 Min 11 – Max 15 slides (for 7-8 min presentation): must use the following format/titles: 1. Data set 2. Tasks 3. Data Processing 4. Analysis 5. Visualisation 6. Implementation 7. Evaluation: 1 slide for group work + 3 slides (visualisations of each group member) 8. 
Planning: plan for weeks 12-13 Marking Rubric: 10 marks ● Quality of Design: task, data processing, analysis, visualisation (3 marks) ● Quality of Implementation (2 marks) ● Quality of Results: Visual analysis, Storytelling (2 marks) ● Quality of Oral Presentation (1.5 marks) ● Quality of System Demo/Animation (1.5 marks) ■ Oral presentation (Week 11-13, 6-9pm): We will assign 20 groups per week. ■ Note: We will use the PDF slide/video submitted at Week 11 (No update allowed). 2. Final Group Report (30 marks): Canvas → Assignment 2 Group Report ● Only one submission per group ● Submit group report + cover page (declare individual contribution with signature) as one PDF ● Submit any animation/demo movie in one mp4 ● Submit source codes in a zip file Min 20 pages in the following format/titles: 1. Introduction 1.1. Data set 1.2. Summary of Contribution 2. Design 2.1. Tasks 2.2. Data Processing 2.3. Analysis 2.4. Visualisation 3. Implementation 4. Evaluation 4.1. Results (for each visualisation) 4.1.1. Visualisation 4.1.2. Visual Analysis, Storytelling 4.1.3. Pros/Cons/Comparison 4.2. Discussion: Summary, Limitation 5. References 6. Appendix (not included in page limit): 6.1. Weekly Meeting Minutes: attendance, discussion, plan (0.5 page per week: week 7-12) 6.2. Code Marking rubric: 30 marks ● Quality of Design: tasks, data processing, analysis and visualisation (6 marks) ● Quality of Implementation (6 marks) ● Quality of Results: visual analysis, storytelling, discussion (9 marks) ● Quality of Writing (5 marks) ● Quality of System Demo/Animation (4 marks) 3. Individual Report (20 marks): Canvas → Assignment 2 Individual Report ● One submission per student ● Submit your individual report with individual cover page as one PDF ● Submit any animation/demo movie in one mp4 ● Submit source codes in a zip file Min 10 pages in the following format: 1. Introduction 1.1. Data Set 1.2. Summary of Contribution 2. Design 2.1. Tasks 2.2. Data Processing 2.3. Analysis 2.4. 
Visualisation 3. Implementation 4. Evaluation 4.1. Results (for each visualisation) 4.1.1. Visualisation 4.1.2. Visual Analysis, Storytelling 4.1.3. Pros/Cons 4.2. Discussion: Summary, Limitation 5. References 6. Appendix (not included in page limit): 6.1. Individual reflection: progress and plan (3 lines per week: week 7-12) 6.2. Code Marking rubric: 20 marks ● Quality of Design: tasks, data processing, analysis and visualisation (4 marks) ● Quality of Implementation (3 marks) ● Quality of Results: visual analysis, storytelling, discussion (6 marks) ● Quality of Writing (4 marks) ● Quality of System Demo/Animation (3 marks)
Coursework Assignment Brief Academic Year 2024-25 Module Title: Commercial Quantification and Cost Module Code: BNV5107 Assessment Title: Quantification and Cost Assessment Type: Measurement Weighting: 100 % School: School of Engineering and the Built Environment Module Co-ordinator: ANGELA KILBY Hand in details: For all work submitted to moodle, the submission time is 3.00pm. If the assessment includes a presentation or other part not submitted to moodle, please see the assessment section of the module moodle page for details. Return of Feedback date and format 20 working days from date of submission (see Moodle for details). Support available for students required to submit a re-assessment: Timetabled support sessions will be arranged for the period immediately preceding the hand-in date. NOTE: At the first assessment attempt, the full range of marks is available. At the re-assessment attempt the mark is capped and the maximum mark that can be achieved is 40%. Assessment Summary Groundwork measure IMPORTANT STATEMENTS Undergraduate Regulations Your studies will be governed by the BCU Academic Regulations on Assessment, Progression and Awards. Copies of regulations can be found at https://www.bcu.ac.uk/student-info/student-contract For courses accredited by professional bodies such as the IET (Institution of Engineering and Technology) there are some derogations from the standard regulations, and these are detailed in the academic regulations. Cheating and Plagiarism Both cheating and plagiarism are totally unacceptable, and the University maintains a strict policy against them. It is YOUR responsibility to be aware of this policy and to act accordingly. Please refer to the Academic Registry Guidance at https://icity.bcu.ac.uk/Academic-Services/Information-for-Students/Assessment/Avoiding-Allegations-of-Cheating The basic principles are: · Don’t pass off anyone else’s work as your own, including work from “essay banks”. 
This is plagiarism and is viewed extremely seriously by the University. · Don’t submit a piece of work in whole or in part that has already been submitted for assessment elsewhere. This is called duplication and, like plagiarism, is viewed extremely seriously by the University. · Always acknowledge all of the sources that you have used in your coursework assignment or project. · If you are using the exact words of another person, always put them in quotation marks. · Check that you know whether the coursework is to be produced individually or whether you can work with others. · If you are doing group work, be sure about what you are supposed to do on your own. · Never make up or falsify data to prove your point. · Never allow others to copy your work. · Never lend disks, memory sticks or copies of your coursework to any other student in the University; this may lead to you being accused of collusion. By submitting coursework, either physically or electronically, you are confirming that it is your own work (or, in the case of a group submission, that it is the result of joint work undertaken by members of the group that you represent) and that you have read and understand the University’s guidance on plagiarism and cheating. You should be aware that coursework may be submitted to an electronic detection system to help ascertain if any plagiarised material is present. You may check your own work prior to submission using Turnitin at the Formative Moodle Site. If you have queries about what constitutes plagiarism, please speak to your module tutor or the Centre for Academic Success. Electronic Submission of Work It is your responsibility to ensure that work submitted in electronic format can be opened on a faculty computer and to check that any electronic submissions have been successfully uploaded. If it cannot be opened it will not be marked. 
Any required file formats will be specified in the assignment brief and failure to comply with these submission requirements will result in work not being marked. You must retain a copy of all electronic work you have submitted and re-submit if requested. Learning Outcomes to be Assessed: 1 Apply software to measure, bill and price different work/works sections by the appropriate standard method and appropriate specification. 2 Produce a working process that is methodical, logical, accurate and sequential using appropriate annotations and workings to enable others to understand the work that has been done. 3 Analyse, assess and implement a strategy to manage the impact of inaccurate design, specification and missing information on the project cost. Assessment Details: TASK 100% Title: Groundwork tender package Style: Priced tender package, scope of work including activity schedule for payment Rationale: As an undergraduate QS you will be expected to continually practice the skill of measurement and cost. ------------------------------------------------------------------------------------------------------------------------------------ Description: Task 1: Kier have just been awarded the contract to procure and build a new multi-million-pound scheme to ensure fish have an easier route to swim along the River Severn. Unlocking the Severn, the group behind the £19.7 million project, says it is one of the largest river restorations of its kind ever attempted in Europe. It will see 158 miles of the river reopened to fish, by creating routes around physical barriers, namely weirs that currently prevent migration to critical spawning grounds. The aim of the project is to secure the long-term future of many of the UK’s declining and protected fish species. State-of-the-art fish passes will be installed on four navigation weirs on the River Severn. As the subcontractor JJF Limited you have secured the Groundworks package on one of the four fish passes. 
Your fish pass is located in Lincomb. Site mobilisation is due July 2025. Kier have programmed your works over a 10-week period. Assessment Details: Excel – Measure the excavation to the fish pass, and in the same document produce a priced Bill of Quantities and activity schedule for the works as a Groundworks subcontractor. Please see document titled ‘Supplementary Information Fish Pass’ for details, drawing, notes and guidance. · Your measurement approach – annotate the measure and link to the drawing (30%) · Priced Bill of Quantities applying NRM2 descriptions (40%) · Activity Schedule for payment (20%) · Operational Schedule (10%) Additional information: The nature of this module and assessment lends itself to in-class assistance each week. Some essential reading: Ashworth, A. (2004) Cost Studies of Buildings, 4th ed., Pearson Prentice Hall: London Cartlidge, D (2012) Quantity Surveyor's Pocket Book, 2nd Edition, Oxon, Routledge Cartlidge, D (2013) Estimator’s Pocket Book, Oxon, Routledge Cartlidge, D. (2006) New aspects of quantity surveying practice, Butterworth-Heinemann. Murray, M., Langford, D., (2004) Architects Handbook of Construction Project Management, RIBA Publishing: London Morton, R. and Jagger, D. 
(1995) Design and the Economics of Building, E & FN Spon, London, UK Ostrowski, S (2013) Measurement Using the New Rules of Measurement, West Sussex, John Wiley & Sons Ltd RICS (2012) New Rules of Measurement 2 (NRM2) Detailed Measurement for building works 1st Edition, RICS, London Willis C J, Willis A, Trench W, Lee S (2014) Willis’s Elements of Quantity Surveying, West Sussex, John Wiley & Sons Ltd For advice on writing style, referencing and academic skills, please make use of the Centre for Academic Success: Centre for Academic Success - student support | Birmingham City University (bcu.ac.uk) Workload: Overall your coursework for this module is equivalent to 3000 words Overall this module will take 30 hours of your own time to complete the task set +10% is allowed for each Coursework Transferable skills: Measurement, logical sequencing of measurement approach, understanding of construction technology, appreciation of how design information is recorded to assist in the creation of quantities, awareness of different approaches to measurement, insight into how measurement and package management influences subcontractor payments and IT skills. Marking Criteria: Task Assessment Criteria → 1. Apply software to measure, bill and price different work/works sections by the appropriate standard method and appropriate specification. 2. Produce a working process that is methodical, logical, accurate and sequential using appropriate annotations and workings to enable others to understand the work that has been done. 3. Analyse, assess and implement a strategy to manage the impact of inaccurate design, specification and missing information on the project cost. 
Grading Criteria
0 – 29% F: Failed to submit, or incomplete submission of significant areas. Demonstrates no understanding of the task set.
30 – 39% E: Generally: incomplete task, incorrect/unprofessional pricing document, poor or no description of work, no specification, incorrect unit of measurement, poor approach to measure, and no annotation.
40 – 49% D: Generally: incomplete task, incorrect/unprofessional pricing document/scope of works, missing information, poor use of NRM2 work descriptions, incorrect units, incorrect approach to measure, and some missing annotation.
50 – 59% C: Generally: some aspects of the task are incomplete, incorrect/unprofessional pricing document, information, poor NRM2 descriptions, incorrect units, incorrect approach to the measure, and some missing annotation.
60 – 69% B: Generally: tasks complete and accurate, good NRM2 descriptions used; pricing document, units or approach to measure incorrect; missing annotation.
70 – 79% A: Generally: tasks complete, accurate, good/excellent description of the work, clarity of the information presented, pricing document/scope of work, measure, annotation, units.
80 – 89% A+: Tasks complete, accurate, excellent descriptions used, pricing document/activity schedule, measure, annotation, units, clear assumptions relating to measure, and annotation linked to drawing.
90 – 100% A*: Excellent. Applied experience or demonstrating the relevant application of knowledge. Excellent presentation. Thought-out activity schedule and operational strategy, clear presentation of complex data, clear management of commercial strategy, and clear presentation and statement to the Client, good client management.
CMPM 121 - Game Development Patterns Quarter: Spring 2025 COURSE INFORMATION In this course, we will discuss how to build and organize large software projects in a way that will feel friendly and familiar to designers and programmers who have never seen them before. One aspect of doing this is the use of familiar tropes, or design patterns, which we will talk about in this class. Another is to avoid anti-patterns, which we will also be covering. To provide you with hands-on experience in working on a (somewhat) large-scale project, you will be adding different "modules" to a game throughout the quarter. The first 3 of these modules will be done in teams of two, while the last (and largest) will be done in teams of 4. LEARNING OUTCOMES By the end of this class students should be able to: 1. Describe what software design patterns are and why they are useful 2. Define and apply the covered design patterns in an implementation 3. Analyze given software design problems and identify suitable design patterns 4. Apply software engineering practices when developing software 5. Identify code structure problems in their own and other people's code 6. Apply refactoring techniques to resolve code structure problems PREREQUISITES/COREQUISITES Course CMPM 120 (Game Development Experience) is a prerequisite. Game Development Patterns builds upon the game programming knowledge students develop in CMPM 120, and expects entering students to have substantial expertise writing game software for a 2D framework in a high-level language. Experience using Unity is a plus, but not required. REQUIRED MATERIALS, TEXTBOOKS AND TECHNOLOGY This class has no set textbook, but it makes extensive use of readings available on the web. The readings include blogs, videos of game play, conference talks, and primary research articles. 
Two textbooks that may sometimes appear as a reference are:
· Robert Nystrom: Game Programming Patterns (freely available online (https://gameprogrammingpatterns.com/))
· Martin Fowler: Refactoring: Improving the Design of Existing Code (website (https://refactoring.com))
COMMUNICATION Our primary communication platform for the class will be Discord. However, as Discord is owned by a private entity not affiliated with the university, please keep all confidential communication (grades, DRC accommodations, medical absences, etc.) to email: [email protected] (mailto:[email protected]) ASSIGNMENTS & ASSESSMENT GRADING POLICY We are here to learn, not to get or give a grade. Grading is a necessity of the systems in which we operate, and I will try to make it work for your learning, not the other way around. The course is structured so that you can earn up to 100 points:
· Assignment 0 (Unity tutorial): 5 points
· Assignments 1-3: 3*15 points = 45 points
· Assignment 4a (getting started): 5 points
· Assignment 4: 25 points
· Reading quizzes: 20 points
The points will then be converted to letter grades according to the standard grading scale:
A+ 97-100    B+ 87-89.99   C+ 77-79.99
A  93-96.99  B  83-86.99   C  70-76.99
A- 90-92.99  B- 80-82.99   D  60-69.99
F  < 60
CHNS1601 UNDERSTANDING CONTEMPORARY CHINA Essay (40%) Write an argumentative essay of 2,000 words in English on one of the following questions. Please make sure that you read the section ‘essay’ in the unit outline, the marking criteria and essay guidelines before you write and submit your essay.
1. China’s encounter with Western powers
2. Economic reform
3. Environmental protection
4. Gender inequality (in the countryside, in education, in government, at the workplace, or at home)
5. Housing
6. Hukou (household registration)
7. Political reform
8. Population control
9. Rural reform in the 1980s
10. Social stratification
11. Urbanization
12. Village elections
EPCD11006 Numerical Algorithms for High Performance Computing Released: Monday 24th March 2025 Due: Monday 31st March 2025 by 11:59am (GMT)
In lectures we saw that an important way in which linear algebra libraries such as LAPACK and ScaLAPACK obtain good performance is by dividing matrices up into blocks when executing algorithms such as LU factorisation. This coursework explores this topic further. You are asked to conduct a practical investigation that requires you to apply knowledge gained from lectures and practicals, supplemented with limited consultation of external resources such as the official (Sca)LAPACK documentation. Marks are available for the correctness and quality of your code and scripts, the explanations you give in response to the questions, and the quality of the plots you are asked to make. Indications of marks achievable for each part below constitute a total for all aspects of your submission related to that part. Please ensure that you provide the following in your submission (using sensible file names throughout):
• Your code, which should compile and run on Cirrus as outlined below
• Your written answers to the questions below
• The plots you are asked to generate
• The raw data you used to generate these plots
• Any job scripts you used to run experiments
Written answers and plots should be given in one document, while code, data and job scripts should be provided in a separate zip or tar archive. In addition to course materials you may want to consult (and reference if they directly inform your answer) the following external resources:
• The User Guide for Netlib’s reference LAPACK and ScaLAPACK implementations, on which MKL was based: https://www.netlib.org/lapack/lug and https://netlib.org/scalapack/slug
• Source code documentation for Netlib LAPACK routines such as SGETRF, available from https://www.netlib.org/lapack/explore-html
• For additional background, Chapter 10 of https://link.springer.com/chapter/10.1007/3-540-36574-5_10 (accessible through Springer Institutional login via the University of Edinburgh).
1. LAPACK
You have been provided with a simple LU factorisation program in both C (lufact-lapack.c) and Fortran (lufact-lapack.f90), similar to the code you worked with in exercises in the course. You can choose to work in either of these two languages. The provided code generates a random matrix A and right-hand side b using the matgen function, then performs an LU factorisation using the LAPACK sgetrf function before solving the system for b with the sgetrs function. The right-hand side b is chosen in matgen such that the solution x should be a column vector of ones: x = (1, . . . , 1)^T. The code uses single precision throughout, and you will find that several matrix and vector helper functions are included. To compile the provided code on Cirrus, first load the Intel compilers and MKL modules:
module load intel-20.4/compilers
module load intel-20.4/cmkl
Then to compile and link against LAPACK and BLAS provided by MKL, include the -mkl compiler option, i.e. for the two languages respectively:
ifort -mkl -o lufact-lapack lufact-lapack.f90
icc -mkl -o lufact-lapack lufact-lapack.c
(a) One of the main ways LAPACK achieves good serial performance on modern processors with a multilevel cache hierarchy (L1, L2, L3) is by dividing the full matrix A into blocks, i.e. submatrices, when executing an algorithm such as LU factorisation. How and why does the block-based implementation of an algorithm like LU factorisation in LAPACK typically yield higher serial performance compared to its implementation in LINPACK? Your answer should include which aspects of theoretical algorithmic performance matter for performance given real-life hardware bottlenecks, and why. 
[6 marks]
(b) Block size cannot be specified explicitly when calling LAPACK routines; instead it is decided automatically based on heuristics encoded in the LAPACK routine ILAENV. Modify the provided code to interrogate ILAENV and determine the block size chosen by SGETRF when it is called by your program. Summarise your findings regarding how the block size changes going from a matrix A of size 1x1 up to size 5000x5000. Why do you think the block size changes the way it does? [6 marks]
(c) Insert calls to omp_get_wtime() before and after the call to SGETRF and use these to determine the time taken for LU factorisation (you will need to compile with OpenMP support). Run experiments on Cirrus compute nodes to determine how this time changes as the matrix size increases across the range described above. Remember to load the same modules as during compilation, follow standard practice for running serial jobs on Cirrus as described in the Cirrus user guide, and ensure you follow good benchmarking practice, including:
• Request exclusive node access (#SBATCH --exclusive) and #SBATCH --cpus-per-task=1 in your job script(s)
• Set export OMP_NUM_THREADS=1 to isolate results from any potential effect of multithreading within MKL
• Use srun --cpu-bind=cores to launch your executable
Plot the time taken for SGETRF against the linear dimension N of the matrix A. Can you discern any effect of the changes in block size? [6 marks]
ScaLAPACK
You are also provided with Fortran code (lufact-scalapack.f90) that uses ScaLAPACK to run SGETRF in parallel using MPI and a block-cyclic processor grid decomposition as shown in lectures. There should be no need to make any changes to the code itself and you do not need to be familiar with Fortran to complete this part of the coursework. The code contains several new routines and variables to allow the ScaLAPACK library to work but is otherwise basically the same as the serial version. 
Small changes have been made to make the array sizes dynamic and there are a few small changes to the matgen routine. Before compiling, make sure the following modules are loaded:
module load mpt
module load intel-20.4/compilers
module load intel-20.4/cmkl
A Makefile is provided, so to compile simply type make. The executable takes the following input arguments:
./lufact matsize nprocrow nproccol blocksize
where matsize sets the size of the global matrix A (i.e. gives A the dimensions matsize × matsize), and the processor grid is set to have nprocrow rows of blocks and nproccol columns of blocks. Finally, blocksize sets the size, measured in number of elements, of the side of the blocks along both row and column dimensions, i.e. the blocks are square with dimensions blocksize × blocksize (apart from any blocks at the boundaries of the matrix that may be cut off if they do not fit exactly). You should take care to ensure that nprocrow × nproccol = nprocs (the total number of processors you run on). As before, the code is designed to give a solution of x_i = 1. Rather than print out the entire x array, the code prints out the value of |x − 1|, where 1 is a vector with each element equal to 1.
(d) Submit the code to Cirrus using a batch script that launches lufact on 16 processes and with a 4x4 processor grid. Remember to load the same modules as during compilation, follow standard practice for MPI-parallel jobs on Cirrus as described in the Cirrus user guide, and ensure you follow good benchmarking practice as outlined above. Try running the code with matsize=6400 using different block sizes (e.g. 8, 16, 32, 64, 128, 256). What is the optimal block size for this particular problem? Can you give likely reasons for the trend in performance when using different block sizes? [7 marks]
ELEC6252W1 SEMESTER 2 EXAMINATIONS 2022 - 2023 FUTURE WIRELESS TECHNIQUES
Section A
Question A1.
(a) In a cooperative system shown in Figure 1, when the source node S, relay node R and destination node D are all half-duplex nodes, the achievable spectral efficiency is (1) when amplify-and-forward cooperation is employed. In (1), γ is the signal-to-noise ratio (SNR) measured at the destination node D, E[·] represents the expectation with respect to the involved channels, and the factor 1/2 is because of the shortcoming that two time-slots are required to deliver one symbol from source node S to destination D. (i) Suggest an alternative cooperative system to improve the spectral efficiency of the system by avoiding the above-mentioned shortcoming; (ii) Describe in detail the operations of signal transmission in your suggested system. [5 marks]
(b) Assume that two base-stations (BSs) can conduct cooperation based on data exchange only. State three types of BS cooperative processing that the BSs may operate. [5 marks]
(c) Consider a Non-Orthogonal Multiple-Access (NOMA) downlink, where a BS broadcasts x1 and x2, which satisfy E[x_k^2] = 1 for k = 1, 2, to users 1 and 2 using power P1 and P2, respectively. At some time, the signals received respectively by users 1 and 2 can be expressed as (1) (2) where h1 and h2 represent the channel gains from the BS to users 1 and 2, respectively, and n1 and n2 are Gaussian noise distributed with zero mean and a variance of σ^2.
• Assume that |h1|^2 < |h2|^2, and correspondingly the power assigned by the BS to users 1 and 2 satisfies P1 > P2. Derive the sum rate achieved by this NOMA downlink.
• Describe the detection (decoding) procedures carried out, respectively, by users 1 and 2 for achieving the above sum rate. [5 marks]
(d) Provide two application examples to show the benefit of employing full-duplex instead of half-duplex. You may use drawings to support your explanation. 
[5 marks]
(e) Consider a multiple-input multiple-output (MIMO) system employing M transmit and N receive antennas. Draw and annotate the MIMO system model, write the received signal equation and explain the different terms used. [5 marks]
(f) Explain the concept of massive MIMO and comment on the motivation for using massive MIMO from a channel capacity perspective and also from a transmission and detection perspective. [5 marks]
(g) Discuss the concepts of pilot contamination and antenna correlation and analyse their effect on the performance of massive MIMO. [5 marks]
(h) Explain the reasons for using beamforming for communications at millimetre wave frequencies and also the reasons for the need to use hybrid beamforming for millimetre wave communications. [5 marks]
Section B
Question B1.
(a) There is a sparse-spread code-division multiple-access (CDMA) system, which has the input-output relationships of (4). Draw the factor graph of this sparse-spread CDMA system for operating the message-passing algorithm, in order to detect the data symbols x1, x2, . . . , x8. [5 marks]
(b) Fig. 2 shows a two-hop communication link, where d1 and d2 represent the distances, h1 and h2 represent the fast fading gains, and P1 and P2 represent the transmit power, of the first and second hops, respectively. Assume that signals transmitted over either hop experience propagation path-loss with a path-loss exponent Q, and noise added at relay R and destination D obeys the Gaussian distribution with zero mean and variance σ^2. Furthermore, relay R is assumed to be operated in half-duplex mode, and it also has no buffer for storing the data received from node S. Assuming that the decode-and-forward (DF) relaying scheme is employed by relay R, derive an expression for the spectral efficiency achieved by this two-hop link. [5 marks]
(c) Fig. 
3 illustrates a network having two pairs of distributed nodes, (S1, D1) and (S2, D2), where two destination nodes D1 and D2 are close to each other. In this network, node S1 needs to send a symbol x1 to D1, while node S2 needs to send a symbol x2 to D2. Assume that nodes S1 and S2 can cooperate with each other by exchanging their data to be sent to their destinations, respectively, and that the channels h11, h12 are only known to D1, while the channels h21, h22 are only known to D2. (i) Based on Alamouti’s space-time code, design a cooperative transmission scheme for S1 and S2 to send x1 and x2, respectively, to D1 and D2. Explain in detail the transmission steps. [4 marks] (ii) Assuming the maximal ratio combining (MRC) assisted decoding scheme, derive the expressions for the decision variable obtained by D1 or D2. [4 marks] (d) Figure 4 shows a three-hop communication link for node S to send information to node D with the help of two relay nodes R1 and R2. As shown in the figure, signals sent by node S can be received by relay R1 with the signal-to-noise ratio (SNR) of γ01 and by relay R2 with the SNR of γ02; signals sent by relay R1 can be received by relay R2 with the SNR of γ12, and by node D with the SNR of γ13; signals sent by relay R2 can be received by node D with the SNR of γ23. Assume that all nodes are operated in half-duplex, and that both the relay nodes R1 and R2 use the amplify-and-forward (AF) relaying protocol. Furthermore, assume that relay node R2 uses the maximal ratio combining (MRC) scheme to combine the signals received from nodes S and R1, and that node D also uses the MRC scheme to combine the signals received from nodes R1 and R2. Based on the above settings and assumptions, (i) provide a formula for the SNR achieved by node D for detecting a symbol sent by node S; [4 marks] (ii) provide a formula for the spectral-efficiency achieved by this three-hop communication link.
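One way to assemble the end-to-end SNR asked for in (d)(i) is via the two-hop AF cascade expression γ = γa γb / (γa + γb + 1), adding branch SNRs under MRC. The sketch below follows that construction; the function names are placeholders and the exact combining order is an assumption based on the question's wording.

```python
def af_cascade(ga, gb):
    """End-to-end SNR of a two-hop amplify-and-forward cascade."""
    return ga * gb / (ga + gb + 1)

def three_hop_snr(g01, g02, g12, g13, g23):
    """End-to-end SNR at node D: R2 MRC-combines its direct S->R2 branch with
    the AF branch S->R1->R2, forwards the result (AF) to D, and D MRC-combines
    that with the AF branch S->R1->D."""
    g_r2 = g02 + af_cascade(g01, g12)   # MRC at relay R2
    g_via_r2 = af_cascade(g_r2, g23)    # AF hop R2 -> D
    g_via_r1 = af_cascade(g01, g13)     # AF branch S -> R1 -> D
    return g_via_r1 + g_via_r2          # MRC at destination D

gD = three_hop_snr(10.0, 10.0, 10.0, 10.0, 10.0)
```

For (d)(ii), with three time-slots needed per symbol, the spectral efficiency would then be (1/3) log2(1 + γD).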
[2 marks] (e) In MultiCell Cooperation/Processing (MCCP), two Base-Stations (BSs) may cooperate based on exchanging both Channel State Information (CSI) and Data (CSID-MCCP mode), exchanging CSI only (CSI-MCCP mode) or exchanging data only (D-MCCP mode). For each of the three MCCP modes, provide an example to explain the principle of the corresponding BS cooperative processing. [6 marks] Question B2. (a) Fig. 5 is a cooperative network, which uses a direct-link (S → D) and a relay-link (S → R → D) to send information from source node S to destination node D. The distance from node S to node R is d1, that from node R to node D is d2, and that from node S to node D is d. Transmitted signals experience both the propagation pathloss with a pathloss exponent of α, and the small-scale fading with the fading gains shown in the figure. Assume that the transmit power of node S is P1 and that of relay R is P2, and all nodes are operated in half-duplex mode. Noise power is σ^2. Furthermore, assume that hSR is known to node R, and hD, hRD are known to node D. Based on the above settings/assumptions and assuming amplify-and-forward (AF) relaying at node R, derive an expression for the signal-to-noise ratio (SNR) achieved by node D for detecting the symbol x sent by node S. [6 marks] (b) Consider a non-orthogonal multiple-access (NOMA) system, where two users send their information to a base-station (BS). The signal received by the BS can be expressed in the form of y = √P1 h1 x1 + √P2 h2 x2 + n, where x1 and x2 are the information sent respectively by users 1 and 2, which satisfy E[|xk|^2] = 1 for k = 1, 2, P1 and P2 represent the transmit power of users 1 and 2, while h1 and h2 represent the channel gains, respectively, from users 1 and 2 to the BS. Finally, n is Gaussian noise distributed with zero mean and a variance of N0. (i) Assuming that |h1|^2 P1 ≥ |h2|^2 P2, derive the sum rate achieved by users 1 and 2. (ii) Describe the BS’s detection (decoding) procedure for achieving the above sum rate.
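For B2(a), the destination can MRC-combine the direct branch with the AF relay branch, whose cascade SNR takes the standard form γSR γRD / (γSR + γRD + 1). A numerical sketch under that model, with per-branch SNRs of the form P |h|^2 d^(-α) / σ^2 (the function name and parameter layout are placeholders):

```python
def af_mrc_snr(P1, P2, hSD, hSR, hRD, d, d1, d2, alpha, sigma2):
    """SNR at node D after MRC of the direct link and the AF relay link."""
    g_sd = P1 * abs(hSD) ** 2 * d ** (-alpha) / sigma2    # direct S -> D branch
    g_sr = P1 * abs(hSR) ** 2 * d1 ** (-alpha) / sigma2   # S -> R hop
    g_rd = P2 * abs(hRD) ** 2 * d2 ** (-alpha) / sigma2   # R -> D hop
    return g_sd + g_sr * g_rd / (g_sr + g_rd + 1)         # MRC combining

gamma = af_mrc_snr(1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 2, 1.0)
```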
[6 marks] (c) Assume that a single-antenna BS broadcasts x1, x2, . . . , xK, satisfying E[|xk|^2] = 1, via Gaussian channels to users 1, 2, . . . , K using power P1, P2, . . . , PK, respectively. The channel gains from the BS to users 1, 2, . . . , K are h1, h2, . . . , hK, respectively. (i) Assume that the transmit power of the BS satisfies P1 ≥ P2 ≥ . . . ≥ PK, which models that user 1 is the user furthest from the BS, then user 2, and finally, user K is the one closest to the BS. Describe the optimum detection scheme of user k, k = 1, 2, . . . , K, to achieve the sum rate of the NOMA system. [5 marks] (ii) In addition to the assumption in (c)(i), further assume that the noise variance is N0. Derive an expression for the sum rate achieved by the K users. [3 marks] (d) The biggest challenge to implement full-duplex in practice is the self-interference cancellation (SIC), which may be implemented in the propagation domain, analog-circuit domain and digital domain. (i) State two SIC techniques operated in the propagation domain, and discuss respectively their operational principles, advantages and the challenges they may face in practice. [5 marks] (ii) State two SIC techniques operated in the analog-circuit domain, and discuss their operational principles, advantages and the challenges they may face in practice. [5 marks] Section C Question C1. (a) Consider a single-user millimetre wave (mmWave) multiple input multiple output (MIMO) system that employs hybrid analog-digital beamforming, where the transmitter is equipped with Nt antennas and the receiver with Nr antennas. The transmitter is assumed to have N_RF^(t) radio frequency (RF) chains, while the receiver employs N_RF^(r) RF chains, where the number of RF chains is assumed to satisfy N_RF^(t) ≤ Nt and N_RF^(r) ≤ Nr. The transmitter and receiver communicate via Ns data streams, where Ns ≤ min(N_RF^(t), N_RF^(r)).
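The K-user downlink NOMA detection in (c) generalises the two-user case: user k cancels the stronger (higher-power) signals via SIC and treats the weaker ones as noise. A minimal sketch under that assumption (function name and test values are placeholders):

```python
import math

def noma_k_user_rates(h, P, N0):
    """Downlink K-user NOMA rates with SIC: users ordered so that
    P[0] >= P[1] >= ... (user 0 farthest from the BS). User k cancels
    x_0..x_{k-1} and treats the lower-power x_{k+1}.. as noise."""
    rates = []
    for k, (hk, Pk) in enumerate(zip(h, P)):
        gk = abs(hk) ** 2
        interference = gk * sum(P[k + 1:])  # residual lower-power signals
        rates.append(math.log2(1 + Pk * gk / (interference + N0)))
    return rates

rates = noma_k_user_rates(h=[0.5, 1.0], P=[4.0, 1.0], N0=0.1)
```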
Draw the block diagrams of the sub-array connected hybrid beamforming architectures and briefly explain the processing stages. [10 marks] (b) Consider a multiple-input multiple-output (MIMO) system, where a base station (BS) equipped with NT = 4 antennas is communicating with a user equipment having NR = 4 antennas. The BS has NRF = 2 radio frequency (RF) chains. (i) Design a transmission scheme that would result in throughput of 6 bits per channel use. You should decide on the modulation scheme used and the processing carried out at the transmitter. (ii) Write the mathematical representation of the transmitted signal and the received signal, highlighting the dimensions of any vectors or matrices used. (iii) Design a detection scheme to decode your received signal. [20 marks] Question C2. (a) (i) Explain the concept of preprocessing aided spatial modulation. (ii) Write the mathematical representation of the transmitted signal and the received signal, highlighting the dimensions of any vectors or matrices used. (iii) Explain how the signal can be detected at the receiver side. [14 marks] (b) Consider a single-user millimetre wave (mmWave) multiple input multiple output (MIMO) system that employs hybrid analog-digital beamforming, where the transmitter is equipped with Nt antennas and the receiver with Nr antennas. The transmitter is assumed to have N_RF^(t) radio frequency (RF) chains, while the receiver employs N_RF^(r) RF chains, where the number of RF chains is assumed to satisfy N_RF^(t) ≤ Nt and N_RF^(r) ≤ Nr. The transmitter and receiver communicate via Ns data streams, where Ns ≤ min(N_RF^(t), N_RF^(r)). (i) The mmWave channel matrix H(t) ∈ C^(Nr × Nt) at time instant t is given by: Explain your understanding of the mmWave channel model and what the equation above represents. (ii) Draw the block diagrams of the fully-connected hybrid beamforming architectures and briefly explain the processing stages. [16 marks]
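The hybrid transmit/receive chain described in these questions can be dimension-checked numerically. This is a sketch assuming a fully-connected architecture with unit-modulus (phase-only) analog stages and random placeholder values for the channel and baseband matrices; every size and variable name below is hypothetical.

```python
import numpy as np

# Dimension sketch of the hybrid chain y = WBB^H WRF^H (H FRF FBB s), noise omitted.
Nt, Nr, Nt_rf, Nr_rf, Ns = 16, 8, 4, 4, 2
rng = np.random.default_rng(0)
H = rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))
FRF = np.exp(1j * rng.uniform(0, 2 * np.pi, (Nt, Nt_rf)))   # analog precoder: phase-only
FBB = rng.standard_normal((Nt_rf, Ns)) + 1j * rng.standard_normal((Nt_rf, Ns))
WRF = np.exp(1j * rng.uniform(0, 2 * np.pi, (Nr, Nr_rf)))   # analog combiner: phase-only
WBB = rng.standard_normal((Nr_rf, Ns)) + 1j * rng.standard_normal((Nr_rf, Ns))
s = rng.standard_normal((Ns, 1)) + 1j * rng.standard_normal((Ns, 1))
y = WBB.conj().T @ (WRF.conj().T @ (H @ FRF @ FBB @ s))
print(y.shape)  # Ns output streams
```

The unit-modulus constraint on FRF and WRF is what distinguishes the analog (phase-shifter) stage from the unconstrained digital baseband stage.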
BMED 4501 Biophotonics (Semester 2 – Year 2024 – 2025) Homework 2 (Full mark: 90) (Due: May, 21st) Retinal Imaging and OCT Angiography in Eye-Disease Diagnosis Retinal imaging plays a central role in modern ophthalmology, allowing detailed visualization of the fundus, or the back part of the eye, for diagnosis and monitoring of retinal diseases (see Fig. 1(A) for an overview of ocular anatomy). One of the most common and vision-threatening retinal pathologies is age-related macular degeneration (AMD). AMD exhibits both structural and vascular changes in the macula, which can be effectively assessed using various retinal imaging modalities. Common symptoms of AMD include blurred central vision and metamorphopsia (i.e. distorted vision). Fig. 1 (A) Anatomy of human eye. (B-C) Images taken by fundus camera: (B) healthy retina (C) age-related macular degeneration. RPE: retinal pigment epithelium The current gold standard for general retinal imaging is fundus photography, which provides a wide-field, magnified view of the retina, optic disc, choroid, and retinal blood vessels. It is widely used for screening of retinal pathologies. Figures 1(B)–(C) show representative fundus images of both healthy and AMD-affected eyes, highlighting key clinical features such as drusen, pigmentary changes, and retinal pigment epithelium (RPE) irregularities. However, fundus photography provides only en-face 2D structural information and lacks the ability to image depth or blood flow. To address these limitations, more advanced imaging techniques have been adopted: (1) Fluorescence angiography (FA) – Two extrinsic contrast agents (fluorescent dyes) have been adopted: sodium fluorescein and indocyanine green (ICG). Both fluorescein and ICG angiography involve dye injection into the systemic circulation (into veins of the forearm) prior to retinal imaging.
A few minutes after injection, the retinal and choroidal vasculatures, which are filled with the dye, can be visualized by fluorescence imaging (Fig. 2). FIG. 2. (A) ICG angiogram and (B) Fluorescein angiogram of the same field-of-view across the fundus. The vasculature is clearly visualized in both images. Note that the central bright region in (A) reveals choroidal neovascularization (CNV), i.e. the creation of new blood vessels in the choroid layer and an indication of AMD, which is absent in (B). (2) Fundus autofluorescence (FAF) – A noninvasive imaging modality that captures the natural autofluorescence of lipofuscin in the RPE. Lipofuscin is a pigment by-product observed in RPE cells. Normally, lipofuscin is constantly being produced by the RPE and choriocapillaris (capillaries near the choroid). However, the aging process and/or various retinal conditions can lead to lipofuscin accumulation. An excessive amount of lipofuscin interferes with normal cell function and thus results in cell death. Therefore, lipofuscin is a key indicator of RPE metabolism (health condition). Distinct patterns of fundus autofluorescence (or absence of autofluorescence) correlating with RPE death are most strongly seen in age-related macular degeneration (AMD) (Fig. 3). Fig. 3 Fundus autofluorescence images. (A) Healthy fundus. (B) Fundus with AMD. Note the banded pattern of increased autofluorescence at the junction. While typical OCT—which you have already studied in the class—provides high-resolution cross-sectional structural images of retinal layers, it cannot directly visualize blood flow or vascular abnormalities such as choroidal neovascularization (CNV). To overcome this limitation, Optical Coherence Tomography Angiography (OCTA) has emerged as a noninvasive, dye-free imaging technique that enables depth-resolved visualization of retinal and choroidal vasculature (Fig. 4).
In brief, OCTA achieves this by detecting motion contrast from moving erythrocytes (red blood cells) across repeated OCT B-scans at the same location (Fig. 5). This allows the construction of high-resolution vascular maps without the need for intravenous dyes. Fig. 4. An example of an active CNV lesion in retina imaged with OCT (left) and OCTA (right). The superficial (ILM to IPL) and deep retinal plexuses are depicted, along with the outer retina (OPL-BRM), which is normally avascular, and the choriocapillaris. ILM: internal limiting membrane; IPL: Inner Plexiform Layer; OPL: Outer Plexiform Layer; BRM: Bruch’s membrane (See Fig. 1 for detailed anatomy of the retina). Fig. 5. Generic workflow of OCTA image construction. In this homework, you will explore the principles of OCTA, compare it with other imaging modalities, and apply your knowledge of OCT to understand its role in AMD diagnosis. You will also evaluate the advantages and limitations of each modality in visualizing CNV and understand why structural OCT alone is insufficient for vascular imaging. Questions 1. (12%) Apart from wide-field fundus photography, a confocal scanning laser imaging approach has also been adopted in retinal imaging, termed confocal scanning laser ophthalmoscopy (cSLO) (Fig. 6). It is capable of producing high-contrast retinal images by raster scanning a laser spot on the fundus and detecting fluorescence emission (signal) through a confocal pinhole. Fig. 6 General concept of confocal scanning laser ophthalmoscopy (cSLO). BS: beam-splitter; DM: Dichroic mirror; FOV: field-of-view. Caution: the beam profile shown in the image is only for illustration and should NOT be treated as a rigorous reference for calculation in the questions. A simplified design schematic of a cSLO system is shown in Fig. 7, designed for Fluorescence angiography (FA) as well as Fundus autofluorescence (FAF) imaging.
The generalized design includes separate optical pathways for illumination and collection through a common telescope (formed by 2 lenses L1 and L2). The two paths are separated by a beam-splitter. The telescope is configured such that the subject’s pupil plane L3 and the scanning mirror plane M1 form the pair of conjugate planes. For the sake of simplicity, only one mirror is shown in Fig. 7. (In practice, two mirrors are needed for 2D scanning). Fig. 7 Simplified system configuration of cSLO. Note that the drawing is not to scale. The mirror scanner M1 has a maximum steering angular range of θmirror = 10°, bounded by the blue and green lines, as shown in Fig. 7. Based on the system configuration shown in Fig. 7 and the ray-tracing technique, sketch how the two scanning beam paths (along the blue and green lines) are projected (focused) onto the retina. 2. (6%) Figure 8 shows the excitation and emission spectra of lipofuscin, fluorescein, and ICG. Hence, what are the emission colors of FAF images, fluorescein angiograms and ICG angiograms? Fig. 8 Fluorescence excitation (dashed curves) and emission (solid curves) of lipofuscin, fluorescein, and ICG. 3. (6%) In order to perform multimodal retinal imaging, i.e. to perform FAF imaging, fluorescein angiography and ICG angiography on the same platform, multiple lasers are required to excite the different fluorophores. In view of cost effectiveness, the number of lasers used in the system should be kept to a minimum. Based on this criterion and Fig. 8, choose the proper lasers (from Table 1) to be used in this multimodal retinal imaging system. 4. (10%) Draw a system schematic explaining how such a multimodal retinal cSLO system works (Hints: How many laser sources/photodetectors do we need? What are the specifications of the spectral filters used in the system so that it can perform multicolour imaging of FAF, fluorescein angiography and ICG?) 5.
(12%) It has been argued that ICG angiography is more appropriate than fluorescein angiography for imaging of the choroidal circulation below the RPE. Based on what you have learnt from Tissue Optics, explain whether you support this argument. (Hint: the RPE contains a high density of the pigment melanin; what are the excitation/emission wavelengths of ICG/fluorescein?). 6. (8%) Let’s consider a spectral-domain OCT system, which is to be integrated with the cSLO system we studied in Questions 1-4. If it is required to achieve an axial resolution no worse than 7 μm, choose the best light source from the list shown in Table 2 for this OCT system. Explain your choice. (*Assuming all four sources have a Gaussian spectral shape) 7. (8%) If it is required to achieve real-time 3D imaging (512(x) × 512(y) × 1024(z) voxels) at a speed of 1 frame per second (fps), i.e. 512 × 512 A-scans in 1 second, choose the best line-scan camera from the list shown in Table 3 and Fig. 9. Note that you should also make your choice based on the source you chose in question 6. Explain your choice. (**x and y are along the transverse directions whereas z is along the axial direction.) Figure 9: (left) Typical configuration of a line-scan camera. (right) Spectral responses of four different line-scan cameras for OCT. 8. (10%) To visualize blood flow in retinal and choroidal vasculatures, OCTA is needed. According to Fig. 5, OCTA detects the changes in the OCT intensity signal between repeated B-scans (up to 4 B-scans) acquired at the same location (pixel). One simple approach of evaluating such changes is to compute the variance of the intensity of an image pixel over N repeated B-scans: σ^2 = (1/N) Σ (i = 1 to N) (Ii − Ī)^2, where Ii is the intensity of the pixel in the ith B-scan and Ī is the average intensity of the pixel over N repeated B-scans. Let’s take the following simple example which displays the intensity values of the 6 pixels along an A-scan direction (Pixels 1 to 6) captured in 4 repeated consecutive B-scans.
Calculate the variances of all 6 pixels over N = 4 B-scans (*this array of 6 pixels is essentially the OCTA signal along the A-scan direction). 9. (8%) Explain why the approach of detecting the OCT intensity change (e.g., by calculating the variance of intensity as in Question 8) can yield a map of blood flow in the vasculature. (Hint: what variance value do we expect, by contrast, from a static tissue region?) 10. (10%) Choroidal neovascularization (CNV) is a hallmark of neovascular age-related macular degeneration (AMD) and is classified into three main subtypes based on anatomical location. Understanding these subtypes is essential for selecting the appropriate imaging modality and guiding treatment decisions. • Type 1 CNV arises from the choroid and grows beneath the retinal pigment epithelium (RPE), often presenting as pigment epithelial detachments (PEDs). • Type 2 CNV extends above the RPE into the subretinal space and is usually associated with prominent leakage on FA, corresponding to "classic" CNV. • Type 3 CNV, also known as retinal angiomatous proliferation (RAP), originates within the retina and can progress to form retinal-choroidal anastomoses, often requiring multimodal imaging for accurate detection. A 75-year-old patient presents with blurred central vision and metamorphopsia in one eye. Fundus examination reveals drusen and subtle pigmentary changes. An ophthalmologist suspects choroidal neovascularization (CNV) secondary to AMD. She has access to the following imaging modalities: • Fluorescein angiography (FA) • Indocyanine green angiography (ICGA) • Spectral-domain optical coherence tomography (OCT) • Optical coherence tomography angiography (OCTA) Which of the following statements correctly match the CNV subtype with the most appropriate or helpful imaging modality for its detection and characterization? Select all that apply. A.
Type 1 CNV (occult, beneath the RPE) is best detected using indocyanine green angiography (ICGA) or OCTA due to its choroidal origin and lack of dye leakage on FA. B. Type 2 CNV (classic, above the RPE) is readily identified using fluorescein angiography (FA), which shows early leakage and well-defined lesion boundaries. C. Type 3 CNV (retinal angiomatous proliferation) is best visualized using OCT alone, as other angiographic modalities provide limited additional information. D. OCTA is capable of detecting both Type 1 and Type 3 CNV by identifying flow patterns in the outer retina and choriocapillaris layers, even in the absence of leakage. E. Fluorescein angiography is the most reliable modality for detecting Type 1 CNV due to its ability to show occult leakage from sub-RPE vessels. (Hints: think about the tissue properties of the RPE, and light-tissue interaction in these retinal and choroidal layers.) **A challenge to think about (not counted toward the Homework 2 grade): How to integrate OCT, OCTA, FAF, fluorescein and ICG angiography in a unified system. (Hint: you could think of a schematic diagram design of the entire system to explain the basic operation principles.)
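The variance-based OCTA signal of Question 8 can be computed directly. The intensity values below are hypothetical, since the original pixel table is not reproduced here; rows are the N = 4 repeated B-scans and columns are pixels 1 to 6.

```python
import numpy as np

# OCTA motion contrast via inter-B-scan intensity variance (Question 8).
# Hypothetical example data: rows = 4 repeated B-scans, columns = 6 pixels.
I = np.array([
    [10, 10, 52, 48, 11, 30],
    [10, 11, 20, 60, 10, 31],
    [11, 10, 61, 22, 10, 30],
    [10, 11, 25, 55, 11, 29],
], dtype=float)
I_bar = I.mean(axis=0)                 # average intensity per pixel over N B-scans
var = ((I - I_bar) ** 2).mean(axis=0)  # sigma^2 per pixel: the OCTA signal
# Near-zero variance -> static tissue; large variance -> flow (moving RBCs).
```

In this toy example pixels 3 and 4 fluctuate strongly between B-scans and so would map to flow, while the nearly constant pixels map to static tissue, which is the point of Question 9.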
ANTH0003: Introduction to Social Anthropology Book Review Guidelines 1. You are required to write a 1,500-word book review essay on an ethnographic monograph in the field of social anthropology. We have provided a list of titles which can be found under the Assessment tab on Moodle. N.b. A monograph is simply an ethnographic study of a particular society, culture, or related subject matter (although some may be more progressive and experimental in terms of focus), in the form of a single book, and usually (but not always) by one author. Theoretical books, textbooks, or edited volumes by multiple authors do not qualify for the task. 2. This may seem obvious, but choose a book that you find interesting! Perhaps it relates to your own intellectual interests or life experience, or maybe just appeals to you in a more general sense. This does not mean you need to agree with it on all fronts. On the contrary, if you have some disagreements with the author or their arguments, it will make it easier for you to engage critically with the book. 3. Read the whole book, from cover to cover! Spend ample time with it, reading it, thinking about it, and ruminating on it … 4. In essence, an academic book review should be a quality piece of writing that engages thoughtfully and critically with a selected monograph. A good book review will do a number of related things: - Provide a critically-engaged synopsis of the main subject matter, key arguments, structure, and order of exposition of the book; - Situate the material presented in the book (e.g., ethnography, theory, methods) relative to some of the existing literature on these subjects, themes, or people; - Critically evaluate the methodology, ethnography, and analyses presented therein; - Critically evaluate the contributions made by the author to broader anthropological themes and debates – e.g., what gaps does it fill; how does it advance anthropological understandings; does it succeed in its aims? 5. Be creative!
There is no one rigid template for how to write a review. The book review exercise provides you with an excellent opportunity to be original and imaginative in how you present your reading of and engagement with your chosen text. For instance, you could bring in some personal reflections or experiences which have framed your appraisal of the book. Alternatively, you could link the book to the news or current affairs, to emphasise the text’s relevance or impact outside of academia. 6. I would advise you to read some academic book reviews to get an idea of the style and form expected from the task. For a guide as to how professional anthropologists write book reviews, you could consult the review section of any major journal such as Journal of the Royal Anthropological Institute, Current Anthropology, or American Ethnologist. For further reading, some of the most readable and stylised book reviews can be found in the London Review of Books and the New York Review of Books. 7. You are expected to follow the writing, referencing, and formatting guidelines provided by the UCL Anthropology Department, which can be found in the UG Student Handbook (p.36-37) and on the ANTH: Academic Skills and Anthropology Student Hub Moodle pages. 8. General guidance for academic writing and style can be found in the ‘Writing Resources’ document in the Assessment tab on the course Moodle site. 9. The deadline is 1pm on 15 February (Thursday of the mid-term Reading Week). Make sure to submit your book review in good time before the deadline. Good luck! Finally, here are some examples of reviews of relevant books by prominent scholars which you might use as a guide to how professional anthropologists tend to approach reviewing books. Bear in mind, we don’t expect you to write like Bruno Latour (please don’t!) – but these are all interesting reviews nonetheless: 1. Bruno Latour’s review of Anna Tsing’s The Mushroom at the End of the World (2015). 2.
Thom van Dooren’s review of Donna Haraway’s Staying With the Trouble (2016). 3. Chris Tilley’s review of Tim Ingold’s Lines: A Brief History (2007).
ELEC6252W1 SEMESTER 2 EXAMINATIONS 2023 - 2024 FUTURE WIRELESS TECHNIQUES Section A Question A1. (a) Assume an extended Wyner’s system model, as shown in Fig. 1, where adjacent base-stations (BSs) conduct cooperation under ideal data exchange. FIGURE 1: Extended Wyner’s system model for multicell SDMA systems. (i) Assume optimum and minimum mean-square error (MMSE) multiuser detection (MUD). For each of these two MUDs, suggest a multicell cooperation/processing (MCCP) scheme and describe in detail its operations. (ii) Analyse the pros and cons of the proposed MCCP schemes. [5 marks] (b) Fig. 2 illustrates a network having two pairs of distributed nodes, (S1 , D1 ) and (S2 , D2 ). In this network, node S1 needs to send a symbol x1 to D1 , while node S2 needs to send a symbol x2 to D2. Assume that S1 and S2 can exchange their data symbols. Propose a cooperative transmission scheme for S1 and S2 to send x1 and x2 , respectively, to D1 and D2. State in detail the transmission steps, and explain the benefit obtained from the proposed cooperation scheme. [5 marks] FIGURE 2: (c) There is a sparse-spread code-division multiple-access (CDMA) system, which has the input-output relationships of (1) Draw the factor graph of this sparse-spread CDMA system for operating the message-passing algorithm, in order to detect the data symbols x1 , x2 , . . . , x8. [5 marks] (d) Provide two application examples to explain the benefits and challenges of employing full-duplex instead of half-duplex. [5 marks] (e) Consider a multiple-input multiple-output (MIMO) system employing M transmit and N receive antennas. Draw and annotate the MIMO system model and write the received signal equation and explain the different terms used. [5 marks] (f) Explain the benefits and challenges of using multiple-input multiple-output (MIMO) systems. Also, explicitly highlight the different MIMO gains. [5 marks] (g) Figure 3 shows the oxygen, water vapour and rain attenuation versus frequency. 
Explain your observations on the figure and describe how this affects the transceiver design at millimetre wave frequencies. FIGURE 3: Attenuation curves of O2 , H2 O and rain at sea level. The term ρ refers to the density of H2 O in grams per cubic metre (g/m^3). [5 marks] (h) Explain the Vertical Bell Labs Space Time (V-BLAST) multiple-input multiple-output (MIMO) transmission process and elaborate with mathematical equations the transmission model, and then describe one V-BLAST detection technique using mathematical equations. Additionally, explain the main characteristics of your chosen detection technique as compared to the other detection methods. [5 marks] Section B Question B1. (a) Explain the operational principles of the amplify-and-forward (AF), decode-and-forward (DF) and compress-and-forward (CF) relaying protocols. [3 marks] (b) Consider that nodes S1 (having data x1) and S2 (having data x2) use a two-way relaying network, as shown in Fig. 4, to exchange x1 and x2 between S1 and S2. FIGURE 4: A two-way relaying network. Assume that all nodes are operated in half-duplex mode. Based on the network coding principles, design a two-way relaying scheme for S1 and S2 to exchange x1 and x2. Explain the operations in detail. [6 marks] (c) In MultiCell Cooperation/Processing (MCCP), two Base-Stations (BSs) may cooperate based on exchanging both Channel State Information (CSI) and Data (CSID-MCCP mode), exchanging CSI only (CSI-MCCP mode) or exchanging data only (D-MCCP mode). For each of the three MCCP modes, provide an example to explain the principle of the corresponding BS cooperative processing. [6 marks] (d) Fig. 5 is a cooperative network, which uses a direct-link (S → D) and a relay-link (S → R → D) to send information from source node S to destination node D. The distance from node S to node R is d1 , that from node R to node D is d2, and that from node S to node D is d.
Transmitted signals experience both the propagation pathloss with a pathloss exponent of α, and the small-scale fading with the fading gains FIGURE 5: A cooperative network with direct transmission. shown in the figure. Assume that the transmit power of node S is P1 and that of relay R is P2 , and all nodes are operated in half-duplex mode. Noise power is σ 2. Furthermore, assume that hSR is known to node R, and hD , hRD are known to node D. (i) By assuming the amplify-and-forward (AF) relaying at node R, derive the spectral-efficiency achieved by the cooperative network. [4 marks] (ii) By assuming the decode-and-forward (DF) relaying at node R, derive the spectral-efficiency achieved by the cooperative network. [3 marks] (e) Figure 6 represents a two-hop communication network, where node S (having one antenna) sends data to node D (having one antenna) with the aid of a relay node R, which employs L antennas for receiving and transmission. Assume that all nodes are operated in half-duplex mode, the transmit power of node S is P1 , the transmit power of the relay is P2, the distance between node S and the relay is d1 , and the distance between the relay and node D is d2. Assume that signals transmitted by node S and the relay experience the propagation path-loss with a path-loss exponent α, and the small-scale fading with the fading gains as shown in the figure. Furthermore, assume that the channel knowledge, i.e., {hij }, is only employed by the relay. FIGURE 6 (i) Consider a relay processing scheme in the principles of either AF or DF, describe in detail the operations carried out by the relay. [4 marks] (ii) Under the relay processing scheme considered in (e)(i), derive an expression for the spectral-efficiency achieved by the two-hop communication network. [4 marks] Question B2. (a) Assume a downlink multicarrier system, where the base-station (BS) of a cell uses M subcarriers to support K = 2M users randomly distributed in the cell. 
Based on the principle of non-orthogonal multiple-access (NOMA), design a transmission scheme for the BS to simultaneously transmit information to the 2M users. [5 marks] (b) Consider a NOMA downlink, where a BS broadcasts x1 and x2, which satisfy E[|xk|^2] = 1 for k = 1, 2, to users 1 and 2 using power P1 and P2, respectively. At some time, the signals received respectively by users 1 and 2 can be expressed as y1 = h1(√P1 x1 + √P2 x2) + n1, (1) y2 = h2(√P1 x1 + √P2 x2) + n2, (2) where h1 and h2 represent the channel gains from the BS to users 1 and 2, respectively, and n1 and n2 are Gaussian noise distributed with zero mean and a variance of σ^2. (i) Assume that |h1|^2 < |h2|^2, and correspondingly the power assigned by the BS to users 1 and 2 satisfies P1 > P2. Derive the sum rate achieved by this NOMA downlink. (ii) Describe the detection (decoding) procedures carried out, respectively, by users 1 and 2 for achieving the above sum rate. [5 marks] (c) Assume that users 1, 2, . . . , K simultaneously send x1, x2, . . . , xK, satisfying E[|xk|^2] = 1, to a BS (with one antenna) using power P1, P2, . . . , PK via Gaussian channels. The channel gains from users 1, 2, . . . , K to the BS are given by h1, h2, . . . , hK, respectively. (i) Assuming that |h1|^2 P1 ≥ |h2|^2 P2 ≥ . . . ≥ |hK|^2 PK, describe the optimum detection scheme of the BS to achieve the sum rate of the NOMA system. [4 marks] (ii) In addition to the assumption in (c)(i), further assume that the noise variance is σ^2. Derive an expression for the sum rate achieved by the K users. [4 marks] (d) Frequency-division duplex (FDD) and time-division duplex (TDD) are two well-known half-duplex schemes implemented in practical mobile communications systems. Describe the operational principles of FDD and TDD. Aid your description using illustrations whenever needed.
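The uplink K-user NOMA detection in B2(c) can be sketched numerically: the BS decodes the strongest received signal first, cancels it, and proceeds down the ordering. The function name and test values below are placeholders; a useful check is that the per-user SIC rates telescope to the multiple-access sum capacity log2(1 + Σk |hk|^2 Pk / σ^2).

```python
import math

def uplink_noma_rates(h, P, sigma2):
    """Uplink NOMA with SIC at the BS: users assumed ordered such that
    |h1|^2 P1 >= ... >= |hK|^2 PK. The BS decodes user 1 first, cancels it,
    and proceeds down the ordering, treating undecoded users as noise."""
    rx = [abs(hk) ** 2 * Pk for hk, Pk in zip(h, P)]  # received signal powers
    return [math.log2(1 + rx[k] / (sum(rx[k + 1:]) + sigma2))
            for k in range(len(rx))]

rates = uplink_noma_rates(h=[1.0, 0.8, 0.5], P=[2.0, 1.5, 1.0], sigma2=0.1)
# sum(rates) telescopes to log2(1 + total received power / noise power).
```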
[3 marks]

(e) The biggest challenge in implementing full-duplex in practice is self-interference cancellation (SIC), which may be implemented in the propagation domain, the analog-circuit domain and the digital domain.

(i) State two SIC techniques operated in the propagation domain, and discuss respectively their operational principles, advantages and the challenges they may face in practice. [6 marks]

(ii) State one SIC technique operated in the analog-circuit domain, and discuss its operational principles, advantages and the challenges it may face in practice. [3 marks]

Section C

Question C1.

(a) Consider a multiple-input multiple-output (MIMO) system employing M transmit and N receive antennas. Let x = [x1, x2, · · · , xM] denote the signal transmitted from the M antennas and y = [y1, y2, · · · , yN] denote the received signal vector. H represents the channel matrix of size N × M between the transmitter and receiver. When the receiver employs perfect channel knowledge, while the transmitter only knows the MIMO channel's distribution, the transmitted signal vector x is independent of the channel matrix H, and M is fixed with N → ∞, the ergodic MIMO capacity can be evaluated as:

(4)

Explain your understanding of the concept of capacity and elaborate on your observations on the derived capacity in Equation (4). [7 marks]

(b) Consider a single-user millimetre-wave (mmWave) multiple-input multiple-output (MIMO) system that employs hybrid analog-digital beamforming, where the transmitter is equipped with Nt antennas and the receiver with Nr antennas. The transmitter is assumed to have N_RF^(t) radio frequency (RF) chains, while the receiver employs N_RF^(r) RF chains, where the number of RF chains is assumed to satisfy N_RF^(t) ≤ Nt and N_RF^(r) ≤ Nr. The transmitter and receiver communicate via Ns data streams, where Ns ≤ min(N_RF^(t), N_RF^(r)).
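The large-N behaviour asked about in Question C1(a) can be explored by Monte-Carlo simulation. A sketch, assuming i.i.d. Rayleigh fading, receiver-only CSI and equal power split across the M transmit antennas; with M fixed and N growing, (1/N)·H^H·H tends to the identity, so the estimate approaches M·log2(1 + snr·N/M), showing the array gain from extra receive antennas:

```python
import numpy as np

def ergodic_capacity(M, N, snr, trials=2000, rng=None):
    """Monte-Carlo estimate of E[log2 det(I_M + (snr/M) H^H H)] for an
    N x M i.i.d. Rayleigh channel with CSI at the receiver only and
    equal power allocation across the M transmit antennas."""
    rng = np.random.default_rng(rng)
    total = 0.0
    for _ in range(trials):
        H = (rng.standard_normal((N, M))
             + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
        G = np.eye(M) + (snr / M) * H.conj().T @ H   # M x M form is cheaper
        total += np.log2(np.linalg.det(G).real)
    return total / trials

# Compare the estimate against the fixed-M, large-N asymptote
for N in (4, 16, 64):
    print(N, ergodic_capacity(M=2, N=N, snr=1.0, rng=0),
          2 * np.log2(1 + 1.0 * N / 2))
```

The determinant identity det(I_N + A·H·H^H) = det(I_M + A·H^H·H) is used so only an M × M determinant is evaluated per trial.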
Draw the block diagram of the fully-connected hybrid beamforming architecture and briefly explain the processing stages. [8 marks]

(c) Consider a multiple-input multiple-output (MIMO) system, where a base station (BS) equipped with NT = 4 antennas is communicating with a user equipment having NR = 4 antennas. The BS has NRF = 2 radio frequency (RF) chains.

(i) Design a transmission scheme that would result in a throughput of 9 bits per channel use. You should decide on the modulation scheme used and the processing carried out at the transmitter. [7 marks]

(ii) Write the mathematical representation of the transmitted signal and the received signal, highlighting the dimensions of any vectors or matrices used. [5 marks]

(iii) Design a detection scheme to decode your received signal. [3 marks]

Question C2.

(a) Consider a multiple-input multiple-output (MIMO) system, where a base station (BS) equipped with N = 8 antennas is communicating with one user equipment having 2 antennas. Also, consider a scenario where QPSK is used as the modulation scheme.

(i) Design a transmission scheme using the above system configuration for attaining a rate of 4 bits per channel use, while also attaining a diversity order of 2 using Alamouti's space-time block code. Explain your transmission scheme in detail and write the mathematical representation of the transmitted signal as well as the received signal. [8 marks]

(ii) Describe and explain in detail, with mathematical equations, one detection technique that can be employed to detect the received signal. [6 marks]

(b) Consider a single-user millimetre-wave (mmWave) multiple-input multiple-output (MIMO) system that employs hybrid analog-digital beamforming, where the transmitter is equipped with Nt antennas and the receiver with Nr antennas.
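The Alamouti code of Question C2(a) admits a compact numerical illustration. A sketch of the classic two-transmit-antenna encoder and its linear combining decoder for a single receive antenna, run noise-free so the symbols are recovered exactly; the QPSK symbols and channel gains are illustrative, and the 8-antenna configuration of the question is not modelled here:

```python
import numpy as np

def alamouti_encode(s1, s2):
    """Two symbols over two time slots from two antennas:
    slot 1 sends [s1, s2], slot 2 sends [-s2*, s1*]."""
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

def alamouti_decode(y1, y2, h1, h2):
    """Linear combining at one receive antenna; recovers s1 and s2 with
    diversity order 2 (channel assumed constant over both slots)."""
    s1_hat = np.conj(h1) * y1 + h2 * np.conj(y2)
    s2_hat = np.conj(h2) * y1 - h1 * np.conj(y2)
    gain = abs(h1)**2 + abs(h2)**2
    return s1_hat / gain, s2_hat / gain

# Noise-free check with QPSK symbols and illustrative channel gains
qpsk = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)
s1, s2 = qpsk[0], qpsk[3]
h1, h2 = 0.7 + 0.2j, -0.3 + 0.9j
X = alamouti_encode(s1, s2)
y1 = h1 * X[0, 0] + h2 * X[0, 1]   # received in slot 1
y2 = h1 * X[1, 0] + h2 * X[1, 1]   # received in slot 2
print(alamouti_decode(y1, y2, h1, h2))
```

Substituting y1 = h1·s1 + h2·s2 and y2 = -h1·s2* + h2·s1* into the combiner shows the cross terms cancel, leaving (|h1|² + |h2|²)·s1 and (|h1|² + |h2|²)·s2, which is where the diversity order of 2 comes from.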
The transmitter is assumed to have N_RF^(t) radio frequency (RF) chains, while the receiver employs N_RF^(r) RF chains, where the number of RF chains is assumed to satisfy N_RF^(t) ≤ Nt and N_RF^(r) ≤ Nr. The transmitter and receiver communicate via Ns data streams, where Ns ≤ min(N_RF^(t), N_RF^(r)).

(i) The mmWave channel matrix H(t) of size C^(Nr×Nt) at time instant t is given by:

Explain your understanding of the mmWave channel model and what the equation above represents. [8 marks]

(ii) Draw the block diagram of the sub-array-connected hybrid beamforming architecture and briefly explain the processing stages. [8 marks]
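The mmWave channel model referred to in (b)(i) is commonly taken to be a geometric (Saleh-Valenzuela style) sum over a few propagation paths; since the equation itself is not reproduced above, the following sketch is an assumption of that standard form, built from per-path complex gains and transmit/receive array response vectors of half-wavelength uniform linear arrays, with illustrative normalisation and angle distributions:

```python
import numpy as np

def ula_response(n_ant, angle):
    """Array response of a half-wavelength-spaced ULA at the given
    angle (radians), normalised to unit norm."""
    k = np.arange(n_ant)
    return np.exp(1j * np.pi * k * np.sin(angle)) / np.sqrt(n_ant)

def mmwave_channel(Nt, Nr, n_paths, rng=None):
    """Narrowband geometric mmWave channel: a sum of a few paths, each
    the outer product of receive and transmit array responses scaled
    by a complex path gain."""
    rng = np.random.default_rng(rng)
    gains = (rng.standard_normal(n_paths)
             + 1j * rng.standard_normal(n_paths)) / np.sqrt(2)
    aod = rng.uniform(-np.pi / 2, np.pi / 2, n_paths)  # departure angles
    aoa = rng.uniform(-np.pi / 2, np.pi / 2, n_paths)  # arrival angles
    H = np.zeros((Nr, Nt), dtype=complex)
    for g, th_t, th_r in zip(gains, aod, aoa):
        H += g * np.outer(ula_response(Nr, th_r),
                          ula_response(Nt, th_t).conj())
    return H * np.sqrt(Nt * Nr / n_paths)

H = mmwave_channel(Nt=16, Nr=8, n_paths=3, rng=1)
print(H.shape, np.linalg.matrix_rank(H))
```

Because the channel is a sum over only a handful of paths, its rank is at most the number of paths; this low-rank, angularly sparse structure is exactly what makes hybrid analog-digital beamforming with few RF chains effective at mmWave.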
22LLP 207 Research Methods Practical 1
SPSS 1 Basics - coding exercise
S Panayi, April 2025

Starting SPSS

I've prepared this lesson using SPSS version 23, the one available to use at Loughborough. Let us start learning SPSS by using it. Find the SPSS icon on your desktop or start menu and double-click it. If the "IBM SPSS Statistics" dialog window appears (see figure 1), tell SPSS you want to create a "New Dataset" and click OK. You can suppress the appearance of this window by ticking the box "Don't show this dialog in the future."

Figure 1: The "IBM SPSS Statistics" dialog window

The data editor window

You should now see the SPSS Data Editor window (see figure 2). This is where you enter the data you wish to analyse. Each column represents a variable and each row represents an individual respondent or observation (also called a case).

Figure 2: The data editor window

The Variable view: Coding the data

Click the little tab (lower left of the window) that says "Variable View"; see figure 2. In the Variable View window the rows are now the variables (see figure 3).

Figure 3: The variable view window

Now we are ready to start coding our data and deciding on their format. In the first column of the Variable View coding sheet (Column: Name) enter your variables' names. In the "Type" column you can select each variable's type (remember lecture 2). As you can see in figure 4, there are eight types of variables that can be coded in SPSS: 1) Numeric, 2) Comma, 3) Scientific notation, 4) Date, 5) Dollar, 6) Custom currency, 7) String, 8) Restricted Numeric. Most of the time you will be using numeric variables (i.e., numbers, e.g., 7, 0, 120).

Figure 4: Variable type dialog window

Let's start naming the variables. The name of each variable should contain no spaces, dots, or commas. It is always advisable to encode each respondent with a particular subject identification number.
How about naming your first variable "respondent", then? OK, so it's not creative, but at least we all know what it means, and when it comes to coding, interpretability is important! Type "respondent" in the first empty box under the "Name" column, and select Numeric from the options available under the column labelled "Type." You should now have a row of entries in the sheet which identifies the default coding information for the participant identification number. We don't need to change any of the other options, since the participant code is a simple numeric value representing little more than the order of entry of the participant data.

Figure 5: Coding respondents

The subject identifier was straightforward, as the data do not represent categorical information. Now let's make an entry for one more variable. For example, what if your next variable was the respondent's gender? When numbers are used to represent categories of events such as gender, it is useful to be able to associate meaningful labels with those categories. SPSS allows us to do that in the Variable View mode by specifying values of categorical data. Let's use a 0 to represent males and a 1 to represent females. This may seem tedious, but it helps us during data entry. The more variables you have to enter, the more you will appreciate a coding sheet. For example, to code participants' gender, start as you did for the participant ID number by clicking on the box at the head of the column you wish to label. This will bring up the Variable View window. Type the variable name "gender" in the first column, then move your cursor along to the column labelled "Values" and click on the box associated with the gender variable (shown in figure 6 below). This brings up the dialog box shown in figure 6. Now it is simply a matter of entering a value, and an associated label.
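The same coding idea (numeric codes plus value labels) can be mirrored outside SPSS. A minimal pandas sketch with hypothetical data matching the ten-respondent example used later in this lesson (three males, then seven females), where a plain dictionary plays the role of the Values dialog:

```python
import pandas as pd

# Numeric codes for gender, exactly as on the coding sheet:
# 0 = male, 1 = female (assumed example data, three males then seven females)
df = pd.DataFrame({
    "respondent": range(1, 11),
    "gender": [0, 0, 0, 1, 1, 1, 1, 1, 1, 1],
})

# The value-label dictionary is the analogue of SPSS's "Value Labels" box
labels = {0: "male", 1: "female"}
df["gender_label"] = df["gender"].map(labels)
print(df["gender_label"].value_counts())
```

Keeping the numeric codes and deriving the labels, rather than typing label strings directly, mirrors the SPSS approach: data entry stays fast and consistent, while output remains readable.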
If we decided to code males with a 0 and females with a 1, we enter that information into the Value Labels box, as shown below, pressing the "Add" button after each value-label pair has been entered, then press OK.

Figure 6: Defining category labels

Now return to the Data View using the tab at the foot (left) of the page, and then enter some subjects' data. For example, imagine that you had a total of ten respondents, the first three of whom were males and the rest females. The Data View window would look like figure 7.

Figure 7: The final data editor window

Saving the data

After coding all your variables and entering your data, you are ready to save the file on the hard drive of the computer on which you are working or the flash drive you brought to class with you. Click FILE on the command bar at the top of the window. From the drop-down menu that appears, select SAVE AS. The Save As dialog window appears (figure 8).

Figure 8: The save as dialog window

Navigate your way to the location where you wish to save the file. In the FILE NAME box, enter "coding exercise." Save as type should read SPSS Statistics (*.sav); if it does not, change it to that from the drop-down menu there. Click Save and the data file is saved to your medium. Having completed the first part of this lesson, you can now close SPSS; just click the X in the upper right-hand corner.

Exercise on data coding and data entry

In the following pages some illustrative closed questions from a highly structured questionnaire are provided.

a. Have a go and code them in SPSS.
b. After coding all the questions, enter your own responses into the SPSS data sheet.

1) Which types of pre-purchase information sources do you normally use during new car purchase (in this question you can choose more than one option)?
Friends/relatives/acquaintances
Brochures/pamphlets
Showrooms/car salesmen
Car magazines/newspapers
Car TV shows
Internet
Use of personal knowledge on cars
Other source, please specify……………………

2) Please allocate 100 points across the following eight composite car characteristics, so as to reflect the relative importance you place on each of them during new car purchase (allocate more points to more important composite characteristics).

a. Cost-related characteristics …………
b. Technical characteristics …………
c. Performance characteristics …………
d. Image-related characteristics …………
e. Quality-related characteristics …………
f. Interior characteristics …………
g. Driving-related characteristics …………
h. Equipment features …………
Total: 100

3) Rank the 12 manufacturing countries from the most preferred (1) to the least preferred (12), based on your preferences for the cars they produce, if you were about to buy a new car in the near future.

UK …… USA …… France …… Germany …… Japan …… Spain …… Italy …… Korea …… Romania …… Russia …… Sweden …… Czech Republic ……

4) The following set of questions measures the respondent's level of attachment to his/her currently owned car. Please rate your level of agreement with each of the following statements on a 5-point scale ranging from disagree (1) to agree (5).

i) Imagine for a moment someone making fun of your car. How much would you agree with the statement, "If someone ridiculed my car, I would feel irritated" 1 2 3 4 5

ii) How much do you agree with the statement, "My car reminds me of who I am" 1 2 3 4 5

iii) Picture yourself encountering someone who would like to get to know you. How much do you think you would agree with the statement, "If I were describing myself, my car would likely be something I mentioned" 1 2 3 4 5

iv) Suppose someone managed to destroy your car. Think about how you would feel.
How much do you agree with the statement, "If someone destroyed my car, I would feel a little bit personally attacked" 1 2 3 4 5

v) Imagine for a moment that you lost your car. Think of the feelings after such an event. How much do you agree with the statement, "If I lost my car, I would feel like I've lost a little bit of myself" 1 2 3 4 5

vi) How much do you agree with the statement, "I don't really have too many feelings about my car" 1 2 3 4 5

vii) Imagine for a moment someone admiring your car. How much would you agree with the statement, "If someone praised my car, I would feel somewhat praised myself" 1 2 3 4 5

viii) Think for a moment about whether or not people who know you might think of your car when they think of you. How much do you agree with the statement, "Probably people who know me might sometimes think of my car when they think of me" 1 2 3 4 5

ix) Imagine for a moment that you have lost your car. Think about going through your daily activities knowing that it is gone. How much do you agree with the statement, "If I didn't have my car, I would feel a little bit less like myself" 1 2 3 4 5

5) The following set of questions measures the respondent's level of involvement with cars. Please rate your level of agreement with each of the following statements on a 5-point scale ranging from disagree (1) to agree (5).
i) It is worth the extra cost to drive an attractive and attention-getting car 1 2 3 4 5

ii) I prefer to drive a car with a strong personality of its own 1 2 3 4 5

iii) I have sometimes imagined being a racing driver 1 2 3 4 5

iv) Cars offer me relaxation and fun when life's pressures build up 1 2 3 4 5

v) Sometimes I get too wrapped up in my car 1 2 3 4 5

vi) Cars are nothing more than appliances 1 2 3 4 5

vii) I generally feel a sentimental attachment to the cars I own 1 2 3 4 5

viii) Driving my car is one way I often use to relieve daily pressure 1 2 3 4 5

ix) I do not pay much attention to car advertisements in magazines or on TV 1 2 3 4 5

x) I get bored when other people talk to me about their cars 1 2 3 4 5

xi) I have little or no interest in car races 1 2 3 4 5

xii) Driving along an open stretch of road seems to "recharge" me in body, mind and spirit 1 2 3 4 5

xiii) It is natural that young people become interested in cars 1 2 3 4 5

xiv) When I'm with a friend, we often end up talking about cars 1 2 3 4 5

xv) I don't like to think of my car as being ordinary 1 2 3 4 5

xvi) Driving my car is one of the most satisfying and enjoyable things to do 1 2 3 4 5

xvii) I enjoy discussing cars with my friends 1 2 3 4 5
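Once such Likert responses are entered and coded, a total scale score is typically computed. A minimal pandas sketch with invented responses, assuming (hypothetically) that negatively-worded items such as vi), ix) and x) above would be reverse-coded on the 5-point scale before summing; the item names and the choice of reverse-coded items are illustrative, not prescribed by the questionnaire:

```python
import pandas as pd

# One hypothetical respondent's answers to five of the involvement items
responses = pd.DataFrame([
    {"item_01": 4, "item_06": 2, "item_09": 1, "item_10": 2, "item_17": 5},
])

# Assumed reverse-worded items: on a 5-point scale, recode x -> 6 - x
# so that 1 <-> 5 and 2 <-> 4, making all items point the same direction
reverse_items = ["item_06", "item_09", "item_10"]
for col in reverse_items:
    responses[col] = 6 - responses[col]

# Total involvement score = sum of the (now consistently directed) items
responses["involvement"] = responses.sum(axis=1)
print(responses["involvement"].iloc[0])
```

In SPSS the same recoding would be done via Transform > Recode into Different Variables before computing the sum, which is why getting the 1-to-5 coding right at data-entry time matters.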