MSc/MPhil Data Analysis Assessment
Attitudes to Immigration in Contemporary Britain
Due: 12 noon, Monday, Week 10 MT
Medium: PDF only
Submission: Inspera

A Professor has hired you to assist with the analysis of a UK social survey dataset on public attitudes toward immigration. The purpose of the project is twofold:
1. Identify which factors are associated with people’s views about immigration, and
2. Quantify the strength and direction of these relationships.

The Professor does not have time to carry out the analysis themselves and is relying on you to examine the data carefully, justify your analytical decisions, and summarise your findings clearly. You are expected to:
• Interrogate the dataset thoroughly, using appropriate descriptive and inferential methods.
• Explain the analytical choices you make (e.g., how you structure models, which variables you include).
• Present well-designed tables and figures that support your conclusions.
• Avoid raw, unedited R output. The Professor has a strong aversion to copy-and-paste console dumps; all results must be formatted clearly and thoughtfully.
• Produce a coherent narrative that explains what predicts attitudes toward immigration and how strongly.
• Draw conclusions grounded strictly in your analysis of the dataset. A literature review is not required and will not be assessed.

Your final report should be no more than 3000 words, excluding tables and figures. Headings, subheadings and figure/table captions do not count toward the word limit. A bibliography is not required; you should not include one.

You must include, in an appendix, all R code used to generate your results. The appendix is not included in the word count. This code should be commented clearly, so the purpose of each block is immediately obvious.

If you are unsure about how to treat a variable, structure an analysis, present a result, or choose a model, you must make your own decision.
The ability to do this independently is part of what is being assessed.

Assignment Dataset Allocation
For the purposes of this assessment, students are divided into two groups. Your allocated dataset depends on your birth month:
• If you were born in January–June, you must work with dataset_1.RDS
• If you were born in July–December, you must work with dataset_2.RDS
• This allocation is fixed and you must use the dataset assigned to you.
The file you have been assigned already contains only your allocated cases; the subset variable is for internal use only and you can ignore it.

Codebook (same for both datasets)
serial: 5-digit numeric ID for each respondent.
subset: 1 = Dataset 1; 2 = Dataset 2
age: Age in whole years (18–99).
female: 1 = Female; 0 = Male
urban: 1 = Urban area; 0 = Rural area
london: 1 = Lives in London; 0 = Does not live in London
bornUK: 1 = Born in the UK; 0 = Born outside the UK
graduate: 1 = Degree-level qualification; 0 = No degree
renter: 1 = Rents their home; 0 = Owns their home (including mortgage)
contact: 1 = Has meaningful contact with immigrants; 0 = No meaningful contact
occ_class: Occupation group: 1 = manager_prof; 2 = intermediate; 3 = working_class
hh_inc: Gross household income (£ per year). Top-coded at £200,000.
imm_att5: Attitude toward immigration (1–5 scale): 1 = Very bad for Britain; 2 = Quite bad; 3 = Neither good nor bad; 4 = Quite good; 5 = Very good for Britain. (Higher values = more favourable)
zodiac: Birth sign (categorical): 1 = Aries, 2 = Taurus, 3 = Gemini, 4 = Cancer, 5 = Leo, 6 = Virgo, 7 = Libra, 8 = Scorpio, 9 = Sagittarius, 10 = Capricorn, 11 = Aquarius, 12 = Pisces

AI / LLM USE POLICY FOR THIS ASSIGNMENT
You are permitted to use LLMs (ChatGPT, Claude, Copilot, etc.) under the following conditions:

1.
You may use an LLM to help you with:
• debugging your R code (e.g., “Why am I getting this error?”)
• reminding you of R syntax (e.g., “How do I make a scatterplot?”)
• general conceptual understanding (e.g., “What does R-squared mean?”)
• explaining an output that you have already generated (e.g., “Here is my regression table — what is a slope coefficient?”)
These uses are acceptable as long as the analysis is your own, based on the dataset provided.

2. You may NOT use an LLM for:
• running or interpreting analyses directly from the assignment instructions without looking at your own data
• writing your report for you
• generating numerical results (means, SDs, p-values, correlations, regression coefficients, etc.)
• inventing interpretations that do not match your actual output
• selecting variables or describing patterns without referring to the actual dataset
If your report contains interpretations or claims that do NOT match your submitted R output, this will be treated as academic misconduct.

3. All numerical results must come from your own R analysis of the provided dataset.

4. All figures and tables must be produced by your own script. Screenshots or AI-generated plots are not acceptable.

5. You are responsible for the accuracy of everything you submit. If you use an LLM to help explain something, you must still check:
• the meaning is correct,
• the interpretation fits your actual numbers,
• the description matches your actual plot,
• nothing contradicts your analysis.

6. Your submitted R script must run and reproduce all numbers in your report. If your text and your script do not match, that will be taken as evidence that you did not do the analysis yourself.

In short: AI tools may help you understand and debug your work. They may NOT replace your statistical analysis, your interpretation, or your judgment. Your report must reflect your own engagement with the dataset and the course material.
The Nature of Human Morality: Arguing for the Inherent Goodness of Human Nature

I. Introduction
A. Introduce the central claim that human nature is fundamentally good
B. Explain why the discussion of innate goodness remains relevant today
C. Thesis Statement: Human beings are born with natural tendencies toward kindness, empathy, and cooperation, and what we call “evil” largely arises from harmful environments, social pressures, or the absence of proper moral development.

II. Philosophical Foundations: Rousseau and the Tradition of Innate Goodness
A. Rousseau’s concept of the “natural man”
1. Humans are born gentle and harmless
2. Society corrupts natural goodness
B. Confucian thought (Mencius)
1. “Human nature is originally good”
2. Goodness is inherent; wrongdoing arises from external forces
C. Buddhist and Daoist perspectives
1. Human beings possess an originally pure heart
2. Evil is a deviation caused by desires and external disturbances

III. Psychological Evidence: Goodness as an Innate Tendency
A. Empathy in infants
1. Newborns cry when hearing other babies cry
2. Studies show infants prefer “helpers” over “harmers”
B. Development of prosocial behavior
1. Young children naturally comfort and share
2. Evolutionary psychology: cooperation increases group survival
C. Aggression is learned rather than innate
1. Social learning theory: violence is modeled, not inborn
2. Trauma and lack of affection contribute to antisocial behavior

IV. Sociological Evidence: Harmful Environments Produce “Evil”
A. Structural inequalities and violent environments
1. Crime correlates strongly with poverty and social instability
2. Harmful behavior originates from social deprivation
B. The importance of socialization
1. Positive upbringing cultivates moral behavior
2. Toxic environments distort natural goodness
C. Cultural pressures
1. Social norms can suppress or misguide innate kindness
2. Systems of oppression produce conflict, not human nature itself

V. Counterarguments and Responses
A. Address claims that aggression is biologically innate
B. Respond to Hobbes’ “state of nature” argument
C. Emphasize that changes in environment significantly alter outcomes

VI. Conclusion
A. Human beings possess an inherent inclination toward goodness
B. Harmful behavior emerges primarily due to external pressures and social structures
C. Strengthening education, emotional support, and equitable systems can allow human goodness to flourish
Angewandte Datenanalyse (Applied Data Analysis)
Exercise Portfolio: Part 2
Winter Semester 2025/26

General Information
Passing an exercise portfolio is a prerequisite for the award of the ECTS credits. The portfolio has 4 parts and constitutes the only examination for the module. The examination regulations stipulate that all parts of the portfolio are assessed together as a single examination performance; you will therefore receive your grade at the end of the semester. This is Part 2 of the portfolio.

Processing period for Part 2: 01.12.2025–15.12.2025 (23:59)

Information on Completing Part 2
The tasks are to be completed independently. For Part 2 of the portfolio you submit a do-file and a PDF document. Please use the prepared do-file and follow the notes it contains (see Moodle: Uebungsmappe_02.do). Every task must receive a clearly traceable and unambiguous answer, comprising both the correct (and complete) Stata command and an interpretation of the results. In the do-file the interpretation may be in keyword form. In the PDF, additionally answer the tasks (incl. interpretation where applicable) in fully formulated, complete German sentences.

Please upload the completed Part 2 by 15.12.2025 (23:59) to Moodle under “Übungsmappe - Teil 2” as one PDF file plus a separately attached do-file (as a .do file). Name the PDF and the do-file “Uebungsmappe_02” plus your surname (e.g. “Uebungsmappe_02_Jost”).

Important Information on the Do-File
• Use the template do-file (see Moodle); work in it and submit it.
• Rename your do-file to “Uebungsmappe_02” plus your surname, e.g. “Uebungsmappe_02_Jost”.
• Write your full name in the corresponding line of the do-file, e.g. // Nachname, Vorname: Jost, Lena
• Adjust the path so that it leads to the folder containing your dataset.
• For each subtask there is a prepared section in the do-file, e.g.:
*********************************
*** Aufgabe 1:
*********************************
• Write Stata commands under: *** Befehle
• Write interpretations under: *** Interpretation
• Make absolutely sure that the do-file “runs”, i.e. that no error message appears when the do-file is executed.
• Please delete faulty Stata commands and/or interpretations from the do-file before submission.
• The entire do-file and the PDF are assessed, so be sure to answer all questions neatly and comprehensibly!

Important Information on the PDF Document
• Name your PDF document likewise “Uebungsmappe_02” plus your surname, e.g. “Uebungsmappe_02_Jost”.
• The PDF document should contain, clearly laid out, an answer to every task in complete German sentences.
• Do NOT copy your entire do-file, including all the asterisks, into the document!
• Please observe the guidance sheet of the Auspurg and Brüderl teaching units on the general requirements for exercise/seminar papers: https://www.ls4.soziologie.uni-muenchen.de/studium_lehre/schriftliche_arbeiten. There you will also find notes on formalities (page format, typesetting, structure, citation style, bibliography).
• The last page of your PDF must contain, in all parts of the portfolio, the (signed) declaration of independent work, which you can find on the website of the Institute of Sociology (Studium und Lehre → Prüfungen → Hausarbeit).

Tasks of Part 2 of the Portfolio
To complete the tasks you must use the ALLBUS 2016 dataset, which you can download from Gesis. You need the file “ZA5251_v1-1-0.dta.zip Stata (Datensatz) 657.76 KB”. In this Part 2 of the portfolio you examine the relationship between growing up in East Germany and attitudes toward the surveillance of citizens. An appropriate preparation of the variables is the first step.

Aufgabe 1: Variable Preparation and Univariate Description
In this second part of the portfolio you are substantively interested only in people who experienced part of their youth before German reunification. Therefore restrict the sample to respondents who were at least 36 and at most 85 years old at the time of the survey.
From the variable J011_1, generate a new dichotomous variable vid_stark indicating whether respondents have a strong opinion on video surveillance in public spaces. The variable vid_stark should take the value 1 if respondents are “auf jeden Fall” (definitely) in favor of or “auf keinen Fall” (definitely not) in favor of video surveillance. It should take the value 0 if respondents are only “eher” (rather) for or against it, or do not know. Label the new variable and its values with labels of your own choosing. Check that the recoding has worked correctly.
From the variable dg03, generate a dummy variable jugendost indicating whether the respondent spent their youth in East Germany (=1) or in West Germany (=0). Label the variable “Jugend in Ostdeutschland” (youth in East Germany) and its values accordingly as “ja” (yes) and “nein” (no). Check that the recoding has worked correctly.
Describe the variable vid_stark substantively and informatively using one (!) suitable number.

Aufgabe 2: Cross-Tabulation
You suspect: “People who spent their youth in East Germany are more likely to have a strong opinion on video surveillance in public spaces than people who spent their youth in West Germany.” Test the hypothesis by using the prepared variables from Aufgabe 1 and producing a cross-tabulation in the conventional form of presentation. Interpret the frequency distribution in the cross-tabulation substantively with respect to the research hypothesis. In doing so, also address the p-value of the chi² test and Cramér’s V.

Aufgabe 3: Index Construction
You are now interested in respondents’ attitudes toward “law & order” measures. Discuss the construction of a simple additive index from the variables J013_1 J013_2 J014_1 J014_2 J014_3. Proceed as follows:
• Check whether the items first need to be prepared. If necessary, prepare them appropriately and justify your approach.
• Check the usability of the individual variables using suitable measures and, if necessary, select only suitable variables. Assess the reliability of the index using a suitable measure.
• Then generate the index so that it has the value range 1 “pro L&O” to 4 “contra L&O”. The index receives the label “'Law & Order'-Einstellung (Index)”. The values 1 and 4 should also be labelled accordingly.
• Finally, describe the distribution of the attitudes graphically with a boxplot. Using Stata commands, add the title “Verteilung der L&O-Einstellung” to the boxplot and make sure that the labels for the values 1 and 4 are displayed. (You do not need to describe the graph verbally.)

Aufgabe 4: Comparison of Means
Finally, the following research hypothesis is to be examined: “People who grew up in East Germany are, on average, more opposed to 'law & order' measures than people who grew up in West Germany.” Carry out a suitable test for the comparison of means in Stata and interpret both the statistical significance and the direction and strength of the relationship using suitable measures. Relate your answer to the hypothesis being tested.
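Aufgabe 2 asks for an interpretation of both the chi² p-value and Cramér's V. As a language-neutral illustration of what those two numbers measure (a sketch only; the counts below are invented for demonstration and are NOT taken from ALLBUS 2016, and the graded work must of course be done in Stata), both statistics can be computed by hand for a 2×2 table:

```python
import math

# Illustration only: invented 2x2 counts (rows: youth in East/West Germany,
# columns: strong opinion yes/no). These are NOT the ALLBUS 2016 figures.
table = [[120, 180],   # East: strong opinion, no strong opinion
         [140, 360]]   # West: strong opinion, no strong opinion

n = sum(sum(row) for row in table)
row_tot = [sum(row) for row in table]
col_tot = [sum(col) for col in zip(*table)]

# Pearson chi-square: sum over cells of (observed - expected)^2 / expected,
# where expected = row total * column total / n under independence.
chi2 = sum(
    (table[i][j] - row_tot[i] * col_tot[j] / n) ** 2 / (row_tot[i] * col_tot[j] / n)
    for i in range(2) for j in range(2)
)

# A 2x2 table has 1 degree of freedom, so the chi-square p-value has the
# closed form erfc(sqrt(chi2 / 2)), and Cramer's V reduces to sqrt(chi2 / n).
p_value = math.erfc(math.sqrt(chi2 / 2))
cramers_v = math.sqrt(chi2 / n)

print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}, Cramer's V = {cramers_v:.3f}")
```

With these invented counts the association is statistically significant but weak (V well below 0.3), which illustrates the two-part reading the task expects: the p-value says whether an association exists at all, Cramér's V says how strong it is.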
ME 370 Fall 2025
Due (12/18/2025)

Instructions:
1- You are allowed to work in groups of up to four students.
2- Groups can be formed from students in different sections. Please ensure you list each student’s section in the report.
3- Only one report and one m-file per group should be submitted.
4- List all members' names in the report and the m-file.
5- Include a statement in your report clearly describing the contribution of each member. (-10 points if not included)
6- The report should be typed in Word—NO HANDWRITTEN REPORTS ALLOWED.
7- Include your plots in the report.
8- Make sure your code is working before uploading it to Canvas. IF CODE DOES NOT RUN, YOU'LL AUTOMATICALLY LOSE 50% OF THE PROBLEM'S GRADE.
9- Documents submitted past the deadline will not be accepted.
10- Do not include your MATLAB code in the report.

NOTE: For questions about MATLAB, please contact your class TA. For any other inquiries, feel free to ask me.

Problem 1:
The shear building structure is a mechanical system with an infinite number of DOF, but it can be modeled as an equivalent spring-mass system, thereby creating a lumped mass system. This is commonly done to facilitate analysis, since in some engineering applications the parameters of interest are the frequencies and vibration modes. The minimum number of coordinates necessary to describe the motion of the lumped masses and rigid bodies defines the number of degrees of freedom of the system. The system can be modeled as unidimensional due to its vibration characteristics (i.e., the horizontal vibration is more representative than the others). A three-story shear building is studied and modeled as a 3-DOF spring-mass system. Figure 1 shows a schematic of the shear building structure.

Figure 1. a) Experimental model of a three-story shear building; and b) Equivalent spring-mass model.

a. Use Newton’s second law of motion to derive the system’s EOM. (Show FBDs and all derivations to receive full credit) (10 points)

b.
Assume: m1 = m2 = m3 = 5,000 kg; k1 = k2 = k3 = k = 2 kN/m

Using modal analysis, calculate the steady-state responses for (note: both forces are applied to the first floor):
1) F(t) = 300 cos 20t N (10 points)
2) F(t) = 300δ(t) N (10 points)
• For each case, clearly show your steps in your report and MATLAB code (your code should be organized similarly to “Example_431.m”).
• Include your MATLAB code with your submission (as a separate m-file). If your code doesn’t work, you will automatically lose 50%.
• Your code should plot both modal and physical solutions.

c. List the system’s natural frequencies in order and draw conceptually the three mode shapes of the equivalent lumped mass system. (5 points)

d. In reality, the material will exhibit some damping, preventing unbounded resonance responses. Assume a small damping ratio of ξ = 0.02 for all three modes. Redo the steady-state analysis, including damping, for the same input forces from part b. (15 points)

Problem 2:
With the development of social economy and construction technology, tall buildings are increasingly built in large cities, which are sensitive to earthquake and wind excitations. A tuned mass damper (TMD) is one of the most traditional vibration control devices, usually consisting of mass, stiffness, and damping elements. A pendulum TMD (PTMD) is a kind of horizontal TMD usually used to protect a tall building against horizontal vibration, where the pendulum provides the stiffness element of the TMD. The natural frequency of a PTMD is a single value corresponding to the pendulum's length. You are helping design vibration control for a 3-story building in downtown Philadelphia (PTMDs are usually used for high buildings, but we will make an exception in this project). Your goal is to prevent excessive sway during storms and small seismic events. Using the shear-building model and a pendulum TMD mounted on the top floor, your team must tune the PTMD to ensure the upper floor motion stays below safe thresholds.

Figure 2.
a) A simplified building model including a pendulum swinging from the top lumped mass as a tuned mass damper for motion reduction. b) The equivalent spring-mass-pendulum system used to tune the tuned mass damper.

a. Derive the system’s EOM using the Euler-Lagrange method. Clearly show the system's kinetic and potential energies. Show all your derivations to receive full credit. (15 points)

b. Choose the values for m4 and L that will keep the oscillations of the third floor around 7 cm peak-to-peak in response to the impulse force from Problem 1 hitting the first floor. Keep in mind that 0
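For Problem 1c, the natural frequencies and mode shapes come from the undamped eigenvalue problem Kφ = ω²Mφ. The required work must be submitted as a MATLAB m-file, but as a hedged numerical cross-check (assuming the standard shear-building stiffness matrix for three equal masses in a chain with equal storey stiffnesses and a free top floor), the same computation looks like this in Python:

```python
import numpy as np

# Cross-check sketch only; the graded solution must be a MATLAB m-file.
# Assumes the standard 3-DOF shear-building matrices for equal floor masses
# and equal storey stiffnesses (floor 3 is the free top floor).
m = 5000.0   # kg, each floor mass (given in part b)
k = 2000.0   # N/m, each storey stiffness (2 kN/m, given in part b)

M = m * np.eye(3)
K = k * np.array([[ 2.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  1.0]])

# K phi = w^2 M phi; since M = m * I, a symmetric eigensolve of K/m suffices.
w2, Phi = np.linalg.eigh(K / m)
wn = np.sqrt(w2)   # natural frequencies in rad/s, in ascending order

print("natural frequencies (rad/s):", np.round(wn, 3))
# The columns of Phi are the corresponding mode shapes (up to scaling).
```

Under these assumptions the three frequencies come out near 0.28, 0.79, and 1.14 rad/s, all far below the 20 rad/s forcing frequency in part b.1 — worth noticing before interpreting the steady-state response.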
COURSEWORK SUBMISSION COVER SHEET
Module Title: Skills for the Professional Accountants
Module Code: ACC101
Assignment Title: Financial Performance & ESG Analysis
Submission Deadline: 23:59, 12 December 2025

By uploading or submitting this coursework submission cover sheet, I certify the following:
• I have read and understood the definitions of collusion, copying, plagiarism, dishonest use of data, and academic offences involving Artificial Intelligence (AI) as outlined in the Academic Integrity Policy of Xi’an Jiaotong-Liverpool University.
• This work is my own, original work produced specifically for this assignment. It does not misrepresent the work of another person or institution, nor does it present work generated by AI as my own. Additionally, it is a submission that has not been previously published or submitted to another module.
• This work is not the product of unauthorized collaboration between myself and others.
• This work is free of embellished or fabricated data.
I understand that collusion, copying, plagiarism, dishonest use of data, academic offences involving AI, and submitting procured work are serious academic misconduct. By uploading or submitting this cover sheet, I acknowledge that I am subject to disciplinary action if I am found to have committed such acts.

Signature 1: Ruohan Wei    Date:
Signature 2:               Date:
Signature 3:               Date:

For Academic Office use: Date Received / Working Days Late / Penalty

1. Executive Summary
An analysis of Kweichow Moutai and Wuliangye's annual reports and ESG disclosures from 2022 to 2024 reveals distinct differences in financial performance, market positioning, and environmental stewardship.
While Moutai maintains high gross margins, strong brand equity, and stable revenue growth through its upstream supply chain ecosystem, its market valuation is approximately 10 times Wuliangye's, reflecting substantially higher operational costs. Conversely, Wuliangye demonstrates faster revenue expansion and stronger liquidity, though its Return on Assets (ROA) and Return on Equity (ROE) metrics lag behind Moutai's, and its ESG reporting lacks the same long-term perspective. Both stocks exhibit long-term investment potential, but given Moutai's superior profitability and industry leadership, we recommend prioritizing a heavier allocation to Moutai. Investors should weigh the costs and benefits carefully and align decisions with their personal risk tolerance and capital allocation objectives.

2. Introduction
In 2024, the liquor industry experienced a contraction in overall scale, with an average inventory turnover period of 900 days and a supply-demand imbalance in the market. However, the sector demonstrated a "volume decline with profit growth" trend. Total profit reached 250.87 billion yuan, up 7.8% year on year (KPMG, 2025). The total output of liquor companies in 2024 was 3,145 million liters, continuing the trend of output contraction. (Figure 1: Liquor Industry Production)

Moutai and Wuliangye achieved year-on-year growth in both total operating revenue and total profit, showcasing strong profitability. This success stems from their deep-rooted brand advantages and channel control. Industry leaders have achieved steady growth by optimizing product portfolios, enhancing brand value, and strengthening channel management, further amplifying the Matthew effect. Both companies released ESG reports for 2022-2024, demonstrating strong commitment to sustainability by integrating green production, social responsibility, and corporate governance into their core strategies. However, Wuliangye's ESG report lacks key data, notably omitting its 2024 carbon emissions.
This gap has resulted in lower transparency and international recognition for Wuliangye's ESG report compared to Moutai's. This report aims to provide the public with a more comprehensive understanding of both companies by analyzing and comparing their development, thereby offering more credible and comprehensive investment advice. Below, we analyze their development and differences through a multidimensional comparison of financial performance, market positioning, and sustainable development strategies.

3. Financial Analysis
The following financial ratios are derived from the 2022-2024 annual reports of Kweichow Moutai and Wuliangye. (Figure 2: Financial Ratio)

3.1. Profitability
The comparison reveals that Moutai consistently outperforms in key metrics including net profit margin, gross margin, and return on equity, demonstrating superior profitability and cost control. We will now examine the reasons behind Wuliangye's lower profitability compared to Moutai. Wuliangye's under-performance in ROA and ROE primarily stems from its lower proportion of premium liquor products compared to Moutai, resulting in a lower net profit margin. Moreover, Moutai's stronger brand premium and scarcity give it greater pricing power, further widening the profitability gap. (Figure 3: ROE Comparison) (Figure 4: ROA Comparison)

In 2024, the median gross margin of 19 Chinese liquor companies stood at 73.16% (Xinlang, 2025). Moutai led the pack with a 91.82% gross margin, while Wuliangye's 79.68% margin, though above the median, fell significantly short of Moutai's. To capture market share in lower-tier regions, Wuliangye has ramped up promotions for mid-to-low-end products, driving up their market share and dragging down overall margins. The company also implemented price cuts for premium products in 2024, reducing prices by about 8% compared to 2022 without corresponding cost reductions. In contrast, Moutai maintained its premium positioning with steady price increases.
The ex-factory price of its Feitian Moutai rose by approximately 5% in 2024 compared to 2022, demonstrating strong retail pricing power. Combined with improved production capacity utilization and supply chain optimization, Moutai further solidified its high-margin advantage.

3.2. Liquidity
Moutai's current and quick ratios have consistently remained above 3.4, demonstrating strong short-term debt repayment capacity with a steady upward trend in recent years. While Wuliangye's ratios hover around 3.4, enough to cover its short-term debt, they still lag behind Moutai's. The company's substantial promotional investments and channel inventory pressures have partially consumed cash flow, reducing capital utilization efficiency. (Figure 5: Current Ratio Comparison) (Figure 6: Quick Ratio Comparison)

However, we observed a significant decline in Wuliangye's current and quick ratios in 2024. This was primarily due to the company's first-ever dividend policy implemented in Q3 2024, with the nearly 100 million yuan dividend payment not being made until January 2025. To facilitate better year-on-year comparisons, we adjusted the accounting entries for Wuliangye and calculated the current ratio at 3.80 and the quick ratio at 3.35. While these figures show a slight decrease compared to 2023, the overall ratios remain relatively stable.

In terms of cash ratio, Wuliangye's figure is twice Moutai's, demonstrating its continued liquidity advantage. With cash reserves large relative to its corporate scale, the company maintains strong short-term debt repayment capacity. Notably, Wuliangye maintains a high cash ratio even after implementing its dividend policy, demonstrating the foresight and flexibility of its financial management. (Figure 7: Cash Ratio Comparison)

3.3. Stability
Both companies maintain an investment coverage ratio of 1, demonstrating strong debt repayment capacity and low financial risks.
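The liquidity measures compared in Section 3.2 follow the standard textbook definitions. As a quick worked illustration (a sketch only: the balance-sheet figures below are invented for demonstration and are not taken from either company's annual report):

```python
# Invented example figures, in billions of yuan; NOT Moutai's or Wuliangye's data.
current_assets       = 190.0
inventory            = 46.0
cash_and_equivalents = 120.0
current_liabilities  = 50.0

# Standard liquidity ratio definitions:
current_ratio = current_assets / current_liabilities
quick_ratio   = (current_assets - inventory) / current_liabilities   # excludes slow-moving stock
cash_ratio    = cash_and_equivalents / current_liabilities           # strictest test

print(f"current = {current_ratio:.2f}, quick = {quick_ratio:.2f}, cash = {cash_ratio:.2f}")
```

For a liquor maker the gap between the current and quick ratios is especially informative: aging spirit inventory is large and slow to convert to cash, so the quick and cash ratios give the more conservative view of short-term solvency.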
This indicates that they can reliably fulfill their debt obligations through stable cash flows, regardless of economic fluctuations or intensifying industry competition. While Moutai also possesses robust liquidity, Wuliangye demonstrates a slight edge in balancing capital utilization efficiency with short-term solvency. This characteristic provides greater flexibility in navigating market uncertainties, further strengthening investor confidence.

However, Wuliangye's gearing ratio lags behind Moutai's, with a 10-percentage-point gap. This indicates a relatively high debt-to-equity ratio in its capital structure, though still within the industry's reasonable range. As previously noted, Wuliangye's dividend policy underwent a regulatory change. To ensure comparability, the adjusted gearing ratio for 2024 stands at 0.23. While this represents a slight increase from previous figures, it remains within normal parameters, and the modest rise does not compromise its financial stability.

3.4. Investment Ratio
Overall, Moutai's return on investment significantly outperforms Wuliangye's, with its EPS approximately eight times Wuliangye's for three consecutive years, demonstrating stronger capital appreciation capabilities. However, Wuliangye's EPS has also shown steady annual growth, averaging about 9.5% per year, reflecting strong profit growth potential and its own merits. (Figure 8: Gearing Ratio Comparison)

In terms of P/E ratio, Wuliangye's current valuation slightly exceeds the industry average, reflecting market expectations for its future growth. However, it still lags behind Moutai, indicating investors' relatively conservative valuation premium. Nevertheless, as its brand momentum gradually unfolds and channel reforms deepen, coupled with the enduring structural growth logic of premium liquor, Wuliangye continues to optimize its product portfolio and enhance operational efficiency.
These efforts are steadily narrowing the valuation gap with industry leaders, thereby continuously strengthening its medium-to-long-term investment appeal. Moreover, their investment costs differ significantly: Wuliangye's stock price is around 110 yuan, far more affordable than Moutai's 1,400 yuan, thus offering better value for money in portfolio allocation.

4. ESG / Non-Financial Analysis
ESG performance is becoming a new strategic direction for leading Chinese liquor companies as they cope with domestic stock competition and develop incremental international markets.

4.1. Environmental Elements
According to Moutai's three-year ESG report, the company has achieved a 100% waste recycling rate for three consecutive years, with both total waste density and wastewater discharge intensity declining annually. Its carbon emission intensity has consistently remained below the industry average, dropping 11.3% in 2024 compared to 2022. By 2024, Moutai had achieved 100% green power coverage in its industrial parks, with renewable energy accounting for over 7% of total energy consumption (Moutai, 2024). Additionally, the company was awarded national and provincial "Green Factory" titles in 2024 (Moutai, 2025). However, the accounting and disclosure of Scope 3 emissions are still at an early, exploratory stage for Moutai, as for the industry.

In contrast, Wuliangye's three-year ESG report demonstrates a consistent annual decline in carbon emission intensity, with a cumulative reduction of 9.8% from 2022 to 2024. The company maintains a stable resource utilization rate of over 95% for its brewing waste. Renewable energy now accounts for 5.2% of total energy consumption, and it plans to establish a zero-carbon brewing demonstration zone by 2025. However, some disclosures lack continuity, key environmental indicators remain unquantified, and green production measures are vaguely described. The 2024 report notably omitted core metrics such as energy and water consumption per unit.
Additionally, waste treatment capacity has failed to keep pace with production growth, resulting in a continuous increase in waste disposal rates (Wuliangye, 2024).

4.2. Social Elements
From 2022 to 2024, Kweichow Moutai Group continued to advance public welfare projects such as "Moutai China Pillar," assisting over 120,000 underprivileged students, with annual public welfare investment maintaining an average annual growth rate of over 15%. The group has consistently focused on rural revitalization, investing more than 480 million yuan over three years, covering impoverished counties in over 20 provinces. The employee compensation and welfare system has been continuously improved, with per capita annual income maintaining stable growth and employee turnover remaining below 1% for three consecutive years (Moutai, 2022-2024). In terms of sustainable supply chain development, Moutai has continuously optimized supplier access and evaluation mechanisms, incorporating environmental, social, and governance performance into the evaluation system and promoting upstream and downstream enterprises to jointly practice sustainable development concepts.

Wuliangye has been actively implementing social welfare initiatives. From 2022 to 2024, the company invested a total of 360 million yuan in rural revitalization and educational assistance programs. Its "Wuliangye Dream Support Program" has helped over 50,000 underprivileged students. Employee income grew at a compound annual growth rate of 7.8% over three years. By 2024, safety training coverage exceeded 98.6%, and the retention rate of core talent has remained above 95% for three consecutive years (Wuliangye, 2022-2024). In terms of supply chain sustainability, ESG standards are being progressively integrated into procurement decisions, with enhanced social responsibility reviews of suppliers focusing on labor rights, workplace safety, and community impact.

4.3.
Governance Elements: Moutai won the EFQM Global Award in 2024, becoming the first Chinese liquor company to receive this honor (Moutai, 2024). Its three years of ESG reports detail the implementation of governance structures, risk control mechanisms, and anti-corruption policies. In 2024, Moutai adopted a "double materiality" ESG framework for the first time, identifying 16 key issues and upgrading "compliance internal control" to "compliance management", integrated across the whole business process. The board of directors holds an average of 13-14 meetings annually, and the standardization of its decision-making continues to improve. The main controversies concern environmental footprint and resource consumption, but the company is reducing energy consumption and emissions in production through technological upgrades and circular utilization models. Wuliangye has been nominated for the 2025 EFQM Global Award (Qiang Tang, 2025). Its ESG report highlights continuous improvements in board structure and governance mechanisms. The board established strategic, audit, compensation, and nomination committees, along with a newly created Environmental and Social Responsibility Committee, to deepen the integration of governance and sustainable development. Wuliangye also formed a dedicated ESG task force to coordinate environmental protection, social responsibility, and corporate governance issues. Meanwhile, the company has enhanced transparency by regularly disclosing key data on environmental performance, supply chain management, and charitable investments. While the main controversy centers on the increased environmental burden of capacity expansion, the company is actively addressing these challenges through green factory construction, promotion of clean production, and energy efficiency improvements.
Overall, Moutai and Wuliangye have emerged as industry leaders in corporate governance, ESG system development, and technological innovation strategies. However, Wuliangye still has room for improvement in data transparency, the timeliness of its governance mechanisms, and long-term sustainability, and should further enhance the comprehensiveness and timeliness of its ESG disclosures.

5. Comparative Discussion

The analysis above shows that Moutai has significantly outperformed the industry in financial terms, rightfully earning its position as the undisputed leader. Its MSCI ESG rating has been upgraded to BBB, enhancing its long-term value. Notably, in 2025, Moutai became the only company in the liquor industry to enter the Fortune China ESG Impact List (Moutai, 2025). Wuliangye nonetheless remains one of the key leading enterprises in the liquor industry, continuously strengthening its brand value, market share, and high-end product portfolio. Although its financial performance lags Moutai's, Wuliangye maintains a steady growth trend, with gross and net profit margins remaining at industry-leading levels. In 2025, it further advanced its digital transformation and green supply chain upgrades, achieving significant improvements in ESG ratings and narrowing the overall gap with Moutai in sustainable development. Both companies' financial and ESG reports highlight a strong commitment to sustainable development. However, investors should remain vigilant: the liquor industry faces multiple challenges, including evolving consumption patterns, tightening environmental regulations, and growing international market pressures. Younger generations show limited appetite for traditional high-proof liquor, with only 9% regularly consuming it, while over 60% prefer low-alcohol alternatives such as wine and whiskey (Ailai, 2025).
This shift is compelling baijiu producers to accelerate product innovation and diversify their offerings, driving a transition toward lighter, trendier, and healthier options. Wuliangye has already ventured into the youth market by launching new products such as fruit-flavored baijiu and wellness-focused varieties. Our investment recommendation is that both companies offer long-term value, but investors should closely monitor their transformation progress and ESG sustainability. Moutai remains the top choice, leveraging its brand strength and comprehensive sustainable development strategy, while Wuliangye emerges as a flexible allocation option thanks to its more attractive valuation and improving reform outcomes. Investors should focus on both companies' performance in youth-oriented initiatives, global expansion, and green production. However, given the ceiling on domestic market share, Moutai's shares are relatively expensive to acquire, so investors should size their allocations prudently according to their circumstances. They should also evaluate market conditions and policy directions in light of their own investment context, diversifying portfolios to balance risk. In the short term, caution is advised regarding performance fluctuations caused by the weak recovery in consumption, while over the medium to long term attention should be directed to the sustainability of corporate innovation capabilities and ESG practices.

6. Conclusion

In summary, both Moutai and Wuliangye stand as industry leaders in China's baijiu sector, each with distinct strengths in brand equity, market expansion, and premiumization strategies. Leveraging its scarcity and strong brand influence, Moutai continues to solidify its premium positioning, while Wuliangye enhances operational efficiency through product portfolio optimization and digital transformation. Both companies are advancing green production and sustainable governance under the ESG framework.
However, given the sluggish recovery in consumption, they must further strengthen channel resilience and reach younger demographics. Future competition will hinge on the synergistic advancement of technological innovation efficiency and internationalization efforts. It is worth noting that this report has certain limitations. Firstly, the analysis is based on publicly available data from 2022-2024, which may not fully reflect the latest strategic adjustments and technological breakthroughs in the industry. Secondly, from a macroeconomic perspective, young consumers' preference for liquor continues to decline, creating uncertainty about long-term market demand. Investors should conduct thorough evaluations of the sector, focusing on companies' ability to adapt to shifting consumption patterns, and exercise caution when making investments.

Word Count: 2477 words

References:
KPMG (2024) 2025 Mid-Term Research Report on the Chinese Baijiu Market. Available at: https://assets.kpmg.com/content/dam/kpmg/cn/pdf/zh/2025/06/mid-term-research-report-on-the-chinese-baijiu-market-2025.pdf (Accessed: 18 June 2025).
Xinlang (2025) Baijiu annual reports show gross profit margins declining at 12 liquor enterprises. Available at: https://finance.sina.com.cn/stock/observe/2025-05-09/doc-inevxzxu7530335.shtml (Accessed: 9 May 2025).
Lingbao (2025) Analysis of the Reasons for the Continuous Decline of Wuliangye's Gross Profit Margin: Product Structure and Cost Pressure. Available at: https://www.gilin.com.cn/essence1111327.html (Accessed: 12 November 2025).
Ailai (2025) Prospects of the China Baijiu Industry under Changing Population Structure and Consumption Preferences. Available at: https://xueqiu.com/7080538581/365363608 (Accessed: 10 December 2025).
Lingbao (2025) Sustainability Analysis of Kweichow Moutai's 91% High Gross Margin in Mid-September 2025: Brand Premium and Cost Control.
Available at: https://www.gilin.com.cn/essence0911543.html (Accessed: 12 September 2025).
Moutai (2025) Good News! Moutai Ecological Agriculture Company Wins the Title of Provincial "Green Factory". Available at: https://www.moutaichina.com/mtjt/2025-01/02/article_2025010216495392382.html (Accessed: 2 January 2025).
Moutai (2025) Kweichow Moutai has been listed in the 2025 Fortune China ESG Impact List. Available at: https://www.moutai.com.cn/mtjt/2025-05/17/article_2025051710112238098.html (Accessed: 17 May 2025).
Moutai (2024) Kweichow Moutai wins EFQM Global Award (Seven Diamonds) and Outstanding Achievement Award for "Inspiring Culture". Available at: https://www.moutai.com.cn/mtjt/2024-06/06/article_2024060612092056856.html (Accessed: 6 June 2024).
Qiang Tang (2025) Cultivating New Growth Momentum: Wuliangye Leads the Construction of an Industrial Ecosystem. Available at: https://finance.eastmoney.com/a/202512083584662731.html (Accessed: 8 December 2025).
Wuliangye (2022) Wuliangye 2022 Annual Report. Available at: https://file.finance.sina.com.cn/211.154.219.97:9494/MRGG/CNSESZ_STOCK/2023/2023-4/2023-04-29/9172978.PDF (Accessed: 29 April 2023).
Wuliangye (2023) Wuliangye 2023 Annual Report. Available at: https://file.finance.sina.com.cn/211.154.219.97:9494/MRGG/CNSESZ_STOCK/2024/2024-4/2024-04-29/10141769.PDF (Accessed: 29 April 2024).
Wuliangye (2024) Wuliangye 2024 Annual Report. Available at: https://pdf.dfcfw.com/pdf/H2_AN202504251662335463_1.pdf (Accessed: 26 April 2025).
Moutai (2022) Moutai 2022 Annual Report. Available at: https://file.finance.sina.com.cn/211.154.219.97:9494/MRGG/CNSESH_STOCK/2023/2023-3/2023-03-31/8941228.PDF (Accessed: 31 March 2023).
Moutai (2023) Moutai 2023 Annual Report. Available at: https://file.finance.sina.com.cn/211.154.219.97:9494/MRGG/CNSESH_STOCK/2024/2024-4/2024-04-03/9941077.PDF (Accessed: 3 April 2024).
Moutai (2024) Moutai 2024 Annual Report.
Available at: https://file.finance.sina.com.cn/211.154.219.97:9494/MRGG/CNSESH_STOCK/2025/2025-4/2025-04-03/10845840.PDF (Accessed: 3 April 2025).
Wuliangye (2022) Wuliangye 2022 ESG Report. Available at: https://file.finance.sina.com.cn/211.154.219.97:9494/MRGG/CNSESZ_STOCK/2023/2023-4/2023-04-29/9173083.PDF (Accessed: 29 April 2023).
Wuliangye (2023) Wuliangye 2023 ESG Report. Available at: https://file.finance.sina.com.cn/211.154.219.97:9494/MRGG/CNSESZ_STOCK/2024/2024-4/2024-04-29/10141790.PDF (Accessed: 29 April 2024).
Wuliangye (2024) Wuliangye 2024 ESG Report. Available at: https://file.finance.sina.com.cn/211.154.219.97:9494/MRGG/CNSESZ_STOCK/2025/2025-4/2025-04-26/11001225.PDF (Accessed: 26 April 2025).
Moutai (2022) Moutai 2022 ESG Report. Available at: https://file.finance.sina.com.cn/211.154.219.97:9494/MRGG/CNSESH_STOCK/2023/2023-3/2023-03-31/8941216.PDF (Accessed: 31 March 2023).
Moutai (2023) Moutai 2023 ESG Report. Available at: https://file.finance.sina.com.cn/211.154.219.97:9494/MRGG/CNSESH_STOCK/2024/2024-4/2024-04-03/9941075.PDF (Accessed: 3 April 2024).
Moutai (2024) Moutai 2024 ESG Report. Available at: https://file.finance.sina.com.cn/211.154.219.97:9494/MRGG/CNSESH_STOCK/2025/2025-4/2025-04-03/10845845.PDF (Accessed: 3 April 2025).

Contribution:
Group Member Name | Student ID | Main Contributions (choose from categories above or add brief description) | Percentage Contribution (%)
Ruohan.Wei | 2470768 | Collect data, write the report, make PPT and record video | 45%
Qi.Sun | 2472977 | Collect data, write the report, make PPT and record video |
Bohan.Liu | | Collect data, write the report, make PPT and record video |
Total | | | 100%

Declaration: We confirm that the above table accurately reflects the contributions of each group member. All members have reviewed and agreed on the allocation of percentages.
Name & Signature Date
ENGF0003: Mathematical Modelling and Analysis I
Integrated Engineering Programme (IEP)
How do Fake News Spread in Social Media? ENGF0003 Project (30%)

Information
Coursework Release Date: 17 November 2025, 14:00
Submission Deadline: 16 January 2026, 14:00
Estimated Coursework Return: 13 February 2026

Project Guidance
Read this document thoroughly before starting your work. Use the Moodle forum to ask questions about this coursework or to check answers given to your colleagues. Note that no questions posted after 9 January 2026 will be answered.

How is this project different to school homework?
In school you have been trained to solve mathematical problems that have a single correct solution and a small number of correct ways to solve them. However, we want to train you to deal with real-life engineering problems and to analyse them critically. In this module, most of the problems you will encounter in your coursework and project have multiple correct solutions, which depend on the method you choose and on how you explain your solution and justify the decisions you make.

Grade Breakdown
Your grade in this coursework can be a maximum of 100 points, awarded as follows.
§ 10 points awarded for Presentation and Communication, including adherence to page limits, the Formatting Checklist below, and criteria described in the Marking Criteria section at the end of this document. We expect that your presentation and communication will have improved since the ENGF0003 Coursework.
§ 90 points awarded for your work in Tasks 1 and 2 in this document, with criteria described in the Marking Criteria section at the end of this document. Task 1 is in the style of traditional mathematics questions and will support you in preparing for Task 2, which is an open-ended design task.

Academic Integrity
Academic integrity means being transparent about your work.
For the ENGF0003 project all the following rules apply:
§ Do not share and do not copy project solutions, figures, tables or MATLAB code from your peers.
§ Reference books, articles, or teaching resources that you used on the last page of your document. Read about how to reference someone else's work here, and how to avoid plagiarism here.
§ Do not publish assessment materials in external online forums or "homework help" websites such as Chegg, Course Hero, etc.
§ Do not paste assessment material into Generative Artificial Intelligence (GenAI) tools such as Co-Pilot, ChatGPT, Gemini, Claude, DeepSeek, etc. You are not allowed to use GenAI tools to completely or partially write the coursework for you. Read more about using GenAI here.
§ You are allowed to use GenAI in an assistive capacity, such as helping you write code and proofreading your document, but you must run, check, and explain results by yourself.
§ Remember that GenAI is not completely reliable and can produce incorrect answers.
§ We routinely detect high similarity in AI-generated work across large cohorts.
Note: There are serious consequences for breaching UCL's assessment regulations.

Formatting Checklist
Formatting and presentation make up 10% of your grade in this project. This project is marked anonymously, so do not write your name or student number anywhere in your submission. Our online systems will automatically link your submission to your Moodle ID after the project is marked. You are required to type your answers in Microsoft Word or in LaTeX. Submit a single pdf file named ENGF0003Project.

Table 1. Formatting guidance for the ENGF0003 Project.
What:
☐ Use Heading styles.
☐ Apply page numbering.
☐ Use a sans serif font, e.g. Arial, Calibri, Helvetica (not Times Roman).
☐ Use a font size of 12 points.
☐ Use bullet points to break down long paragraphs.
☐ Number figures and tables throughout the document.
☐ Insert captions for figures and tables.
☐ Use the Word Equation Editor for inserting equations.
How:
· To apply a style to selected text: Home tab, Styles group, choose one of the inbuilt Heading styles, e.g. Heading 1, Heading 2.
· To insert page numbering: Insert tab, Page Number.
· To number a figure or table, right-click it and select Insert Caption…
· To insert a caption for a figure or table, fill in the "Caption" field in Insert Caption…
· To insert an equation in Word, use the Insert tab then click on Equation.

Background
Social media platforms such as Instagram, WhatsApp, and TikTok have revolutionised human communication and connection. From the emergence of memes – adaptable and humorous image templates that capitalise on and make fun of social norms – to emoji communication, social media has shaped much of the look and feel of popular culture and discourse over the last decade. However, social media platforms have more recently started to be used as a source of news and information. According to research findings from the Reuters Institute, more than half of people in the United States (US) get news from platforms such as Facebook and X (formerly Twitter). In the UK, an Ofcom news survey found that the number of adults consuming news from traditional sources such as TV is falling year-on-year. This survey also found generational differences in news consumption, with 88% of respondents between 16 and 24 years old reporting that they get their news from social media.

Figure 1. An AI-generated picture of the late Pope Francis wearing a puffer jacket. Many people now know this image as "Balenciaga Pope", in reference to its similarities with the style of the Spanish luxury fashion house.

Despite being designed to connect humans globally and facilitate communication, social media has also enabled the spread of "fake news": false stories or narratives about a person, social group, or event.
For example, in March 2023 an AI-generated photo of the late Pope Francis wearing a puffer jacket went viral on social media (Figure 1). While many perceived the image to be a funny meme, others genuinely assumed it was a real photo. Likewise, during the COVID-19 pandemic much misinformation was spread on social media. One example was a conspiracy theory linking 5G internet technology to the spread of coronavirus. TikTok videos from very popular accounts sharing the conspiracy were shared and viewed thousands of times, and the rumour was even endorsed by a number of celebrities.

Problem
Fake news is a complex phenomenon because people do not necessarily need to believe it to share it. In 2024 the UK House of Commons Library published a briefing discussing factors contributing to the spread of fake news as diverse as alignment with pre-existing beliefs and values, emotional responses, trust in the source of the fake news, or even repeated exposure to it. Social media amplifies the spread of fake news through fast publication of content, "likes and comments", and peer-to-peer sharing, making it fundamentally different from the flow of information on traditional media such as newspapers, radio, and television, where there are robust fact-checking systems to prevent and rectify inaccurate information. Given that social media algorithms track user engagement and reward highly viral content, in this project you will study mathematical models originally developed to model the spread of contagious diseases to understand the spread of fake news in digital networks.

The SEDIS model
The SEDIS model, standing for susceptible (S), exposed (E), doubter (D) and infected (I), models individuals in a network in which fake news has been spread. This model assumes conservation of the number of individuals in the network, such that S + E + D + I = 1, with each variable expressed as a fraction of the total population.
The SEDIS model is given by the following system of ordinary differential equations:

    dS/dt = -βS + εE + δD + γI    (1a)
    dE/dt = βS - (ε + θ + σ)E     (1b)
    dD/dt = θE - (δ + ω)D         (1c)
    dI/dt = σE + ωD - γI          (1d)

where S, E, D and I respectively represent the fraction of individuals in the total population that are susceptible, exposed, doubtful or infected by fake news. The constants β, θ, σ, ε, δ, γ and ω are the transmission rates between states in units of [1/time]. Table 2 expands on the meaning of the model parameters.

Table 2. Parameter interpretations for the SEDIS model in equations 1a - 1d.
Parameter | Interpretation
S | Fraction of individuals susceptible to fake news
E | Fraction of individuals exposed to fake news
D | Fraction of individuals doubtful of fake news
I | Fraction of individuals that believe fake news (infected)
β | Transition rate from the susceptible to the exposed state
θ | Transition rate from exposed to doubter
σ | Transition rate from exposed to infected
ε | Transition rate from exposed to susceptible
δ | Transition rate from doubtful to susceptible
γ | Transition rate from infected to susceptible
ω | Transition rate from doubtful to infected

Figure 2 illustrates the model given in equations 1a – 1d. The model states that as fake news spreads, the majority of the population is initially susceptible to believing it. Susceptible individuals become exposed to fake news at rate β, and once an individual has been exposed there are three possibilities:
§ completely disregarding the fake news and returning to the susceptible state at rate ε,
§ becoming doubtful of the fake news at rate θ,
§ believing the fake news and becoming infected at rate σ.
Individuals that are doubtful might not believe the fake news and return to being susceptible at rate δ, or believe it and become infected at rate ω. Infected individuals might eventually find out the truth and go back to being susceptible at rate γ.
Figure 2. Network schematic of the SEDIS model. Solid arrows represent forward steps in infection and dashed arrows represent recovery back to a susceptible state.
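For illustration, a system with the transition structure described above can be integrated numerically. The sketch below is editorial, not part of the coursework materials (the coursework itself requires MATLAB): it is written in Python with placeholder rate names beta (susceptible to exposed), eps (exposed to susceptible), theta (exposed to doubter), sigma (exposed to infected), delta (doubter to susceptible), omega (doubter to infected) and gamma (infected to susceptible), and a hand-rolled fourth-order Runge-Kutta step so that no external libraries are needed.

```python
# Editorial sketch of a linear SEDIS-type system.
# The rate names are illustrative placeholders, not the brief's notation.

def sedis_rhs(state, beta, eps, theta, sigma, delta, omega, gamma):
    """Right-hand side of the SEDIS ODEs for state (S, E, D, I)."""
    S, E, D, I = state
    dS = -beta * S + eps * E + delta * D + gamma * I
    dE = beta * S - (eps + theta + sigma) * E
    dD = theta * E - (delta + omega) * D
    dI = sigma * E + omega * D - gamma * I
    return (dS, dE, dD, dI)

def rk4_step(state, dt, rates):
    """One classical fourth-order Runge-Kutta step."""
    def nudge(u, v, h):
        return tuple(ui + h * vi for ui, vi in zip(u, v))
    k1 = sedis_rhs(state, *rates)
    k2 = sedis_rhs(nudge(state, k1, dt / 2), *rates)
    k3 = sedis_rhs(nudge(state, k2, dt / 2), *rates)
    k4 = sedis_rhs(nudge(state, k3, dt), *rates)
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def simulate(state0, rates, dt=0.01, steps=2000):
    """Integrate forward in time and return the full trajectory."""
    traj = [state0]
    for _ in range(steps):
        traj.append(rk4_step(traj[-1], dt, rates))
    return traj
```

Because every outflow from one state is an inflow to another, the sum S + E + D + I is conserved by the dynamics, which is a useful sanity check on any numerical solution.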
Task 1: Exploring the SEDIS model [30 marks] [5 pages maximum]
Recommended reading resource: HELM 6: Matrices, pages 25-28.
A. [10 marks] Express the system of equations 1a – 1d in matrix form and verify that the sum of any column in the matrix of coefficients is zero. Use linear algebra to demonstrate a mathematical interpretation for this fact and identify at least one assumption of this model that makes it unrealistic for real-life social networks.
B. [10 marks] Find non-trivial expressions for the points where dS/dt = dE/dt = dD/dt = dI/dt = 0 simultaneously. Provide an interpretation for what is happening in the network at this steady-state point.
C. [10 marks] Show that equations 1a – 1d have strictly positive solutions for all t > 0 and derive expressions for the lower bound of these solutions.

Task 2: The effect of doubters [60 marks] [6 pages maximum]
Your task is to understand how the parameters and initial conditions associated with the doubter state influence the evolution of the infected state in time. You will need to reflect on the mathematical results obtained in Task 1, as well as use MATLAB to solve the system of equations 1a – 1d algebraically and numerically for a wide range of conditions and parameters. Once you have analysed your numerical results, discuss what they imply about the importance of doubters in a social network and explicitly connect your findings to your mathematical work in Task 1. Your work will be evaluated based on the criteria set out in A and B below. We suggest you dedicate up to 2 pages to Task 2.A and up to 4 pages to Task 2.B.
A. [30 marks] The clarity and rationale behind the design of a numerical study in MATLAB to address this problem. This will involve clearly describing your plan of analysis, including an outline of parameter ranges and initial conditions, the methods and MATLAB functions you will use, and a justification for these choices. Please find specific guidance below: i.
Present a table outlining the range of parameters and initial conditions you chose to study in your model. Write up to two paragraphs discussing your rationale for choosing them.
ii. Create a flowchart/schematic to represent your methodology. Include the mathematical and computational methods you will use and describe how they are connected to one another.
iii. Describe your computational implementation in a short paragraph. Your goal is to write transparently, so that someone reading your project could replicate it and check for themselves whether the results are correct.
B. [30 marks] The quality of your results, as well as the way that you describe them in words and discuss them. This will involve displaying your results effectively and concisely in visual and numerical forms and providing concise and accurate written interpretations. Please find some guidance below:
i. Include one figure contrasting the solutions of the system across different initial conditions but with fixed transmission rates. Explain how the initial conditions affect the long-term behaviour of the solution.
ii. Include one figure displaying how the infected fraction I(t) varies according to the rates associated with the doubter state, for two sets of initial conditions explored in part B.i. Discuss your results and describe the role of the doubter state in the model.
iii. Conclude your report with an appraisal, in 2 to 3 paragraphs, of the validity and limitations of this model. Use this appraisal to design and present the schematic of a model that would be more realistic. Summarise your proposed model in three bullet points.
In-Depth Analysis of YOLOv8 PPE Intelligent Detection System: Operation Process and Operational Logic

I. System Architecture Overview
This is a comprehensive Personal Protective Equipment (PPE) intelligent detection system built on the YOLOv8 deep learning model, using Python's Flask framework to construct a web application that provides real-time video stream processing, object detection, data persistence, and visualization. The core architecture consists of four primary modules: video acquisition and processing, object detection, data management, and web service interfaces. The system demonstrates enterprise-level design considerations, supporting multiple video sources including physical cameras, virtual cameras, video files, RTSP network streams, and HTTP video streams. This flexibility allows the system to be deployed in various environments, from edge devices to cloud servers.

II. System Initialization Process
2.1 Application Startup and Environment Configuration
When the system starts, it first performs environment initialization, as shown in app.py:

    load_dotenv()
    os.makedirs('screenshots', exist_ok=True)
    app = Flask(__name__)

This code loads environment variable configuration, creates a temporary screenshot storage directory, and initializes the Flask application instance. The system also defines a Samba mount point, SAMBA_MOUNT_POINT = '/mnt/samba', for network file sharing, reflecting enterprise deployment considerations. Creating the screenshots directory with exist_ok=True ensures the system doesn't fail if the directory already exists, a defensive programming practice. This temporary storage serves as a buffer before screenshots are processed and their metadata is saved to the database.
2.2 Multi-Source Camera Configuration Mechanism
The system implements a flexible multi-source camera support mechanism defined in app.py:

    CAMERA_SOURCES = {
        'default': 0,                  # Default camera
        'virtual': 10,                 # Virtual camera (v4l2loopback)
        'file': 'test_video.mp4',      # Video file
        'usb': 1,                      # USB camera
        'rtsp': 'rtsp://username:password@ip:port/stream',
        'http': 'http://ip:port/video'
    }

This design gives the system high extensibility, supporting physical cameras, virtual cameras, video files, RTSP network streams, and HTTP video streams. The system attempts initialization in a priority-based order:

    camera_attempts = [
        ('default', CAMERA_SOURCES['default']),
        ('virtual', CAMERA_SOURCES['virtual']),
        ('file', CAMERA_SOURCES['file']),
    ]

This degradation strategy ensures that when physical cameras are unavailable, the system can continue running through virtual cameras or test video files. This approach is crucial for development, testing, and deployment in containerized environments where direct hardware access may be limited.

2.3 Camera Initialization and Parameter Configuration
The system tries each camera source in turn:

    for source_name, source_value in camera_attempts:
        try:
            if source_name == 'file':
                if not os.path.exists(source_value):
                    print(f"Video file {source_value} not found, skipping...")
                    continue
                camera = cv2.VideoCapture(source_value)
            else:
                camera = cv2.VideoCapture(source_value)
            if camera.isOpened():
                camera.set(cv2.CAP_PROP_BUFFERSIZE, 1)
                width = int(camera.get(cv2.CAP_PROP_FRAME_WIDTH))
                height = int(camera.get(cv2.CAP_PROP_FRAME_HEIGHT))
                fps = camera.get(cv2.CAP_PROP_FPS)

This code demonstrates rigorous error handling: it first verifies that the video file exists, then attempts to open the camera, and upon success configures the buffer size to 1 to reduce latency, while retrieving the video stream's resolution and frame rate parameters.
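The attempt loop above can be distilled into a small reusable helper. This is a hedged sketch rather than the project's actual code: the opener callable is injected so the priority logic can be exercised without OpenCV, whereas in the real system it would simply wrap cv2.VideoCapture; the function name open_first_available is an editorial invention.

```python
def open_first_available(attempts, opener):
    """Try each (name, source) pair in priority order.

    `opener` takes a source and returns an object with an `isOpened()`
    method (cv2.VideoCapture in the real system). Returns the
    (name, capture) pair for the first source that opens, or
    (None, None) if every attempt fails.
    """
    for name, source in attempts:
        try:
            cap = opener(source)
        except Exception:
            continue  # treat opener errors like a source that failed to open
        if cap is not None and cap.isOpened():
            return name, cap
    return None, None
```

A fake capture object makes the fallback order easy to verify: simulating a dead physical camera but a working virtual one should select the 'virtual' entry.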
The buffer size setting camera.set(cv2.CAP_PROP_BUFFERSIZE, 1) is a critical configuration for optimizing real-time performance, minimizing the delay between frame capture and processing.

2.4 YOLO Model Loading
The system loads the lightweight YOLOv8 nano model:

    try:
        model = YOLO('yolov8n.pt')
        print("YOLO model loaded successfully")
    except:
        print("Warning: YOLO model not found, using mock detection")
        model = None

This fault-tolerant design allows the system to run in demo mode when the model file is missing, preventing the entire application from crashing due to a model loading failure. The choice of the nano model (yolov8n.pt) balances detection accuracy with computational efficiency, making it suitable for real-time applications on resource-constrained devices.

III. Video Stream Processing Core Logic
3.1 Generator Pattern for Frame Streaming
The system employs Python's generator pattern to implement streaming video transmission, which is the standard practice for Flask video streaming:

    def generate_frames():
        global last_screenshot_time
        while True:
            if camera_available and camera:
                success, frame = camera.read()
                if not success:
                    frame = create_demo_frame()
            else:
                frame = create_demo_frame()

This infinite loop continuously reads camera frames or generates demo frames. The global last_screenshot_time tracks the screenshot timing interval. When camera reading fails, the system automatically switches to demo mode, ensuring the user interface always has video output. This seamless fallback mechanism is essential for maintaining system availability.
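The read-or-fallback pattern can be isolated from the camera and Flask specifics. The sketch below is illustrative (frame_stream, read_frame and demo_frame are hypothetical names, not from app.py); it mirrors the structure of generate_frames with an injected reader so the fallback behaviour is easy to test.

```python
def frame_stream(read_frame, demo_frame, limit=None):
    """Yield frames indefinitely (or at most `limit` frames),
    substituting a demo frame whenever the reader fails.

    `read_frame` returns (success, frame), mirroring the interface of
    cv2.VideoCapture.read(); `demo_frame` is a zero-argument fallback.
    """
    produced = 0
    while limit is None or produced < limit:
        ok, frame = read_frame()
        yield frame if ok else demo_frame()
        produced += 1
```

Passing `limit=None` reproduces the original infinite loop; a finite limit exists purely so the generator can be exercised in a test.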
3.2 Object Detection Execution
The detection logic incorporates multiple optimization parameters:

    if model:
        results = model.predict(frame, conf=0.6, iou=0.8, imgsz=640,
                                half=True, max_det=10, stream_buffer=True,
                                agnostic_nms=True, vid_stride=12)

Each parameter serves a specific purpose:
· conf=0.6: Confidence threshold of 60%, filtering low-confidence detections
· iou=0.8: Intersection over Union threshold for non-maximum suppression
· imgsz=640: Input image size, balancing speed and accuracy
· half=True: Enables half-precision inference, improving GPU performance
· max_det=10: Maximum of 10 detections, preventing overload
· stream_buffer=True: Enables stream buffering optimization
· agnostic_nms=True: Class-agnostic non-maximum suppression
· vid_stride=12: Video frame stride, processing every 12th frame, significantly reducing computational burden

The vid_stride=12 parameter is particularly important for real-time performance. By processing only every 12th frame at 30 fps, the system effectively analyzes approximately 2.5 frames per second, which is sufficient for PPE compliance monitoring while dramatically reducing computational requirements.

3.3 Intelligent Screenshot Triggering Mechanism
The system implements time-interval-based screenshot capture:

    if results and results[0].boxes:
        current_time = time.time()
        if current_time - last_screenshot_time >= screenshot_interval:
            screenshot_thread = threading.Thread(target=take_screenshot,
                                                 args=(results,))
            screenshot_thread.start()
            last_screenshot_time = current_time

This code triggers a screenshot only when objects are detected and the time interval (5 seconds) has elapsed, using a separate thread (threading.Thread) to execute the screenshot operation and avoid blocking the main video stream. This asynchronous processing design is crucial for high-performance real-time systems.
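The time-gated trigger can be captured in a small class. This is an editorial sketch (IntervalGate is an invented name, not part of the system): the clock is injectable so the five-second gating logic can be tested deterministically, without real delays; in the real system the clock would simply be time.time.

```python
class IntervalGate:
    """Allow an action at most once per `interval` seconds, mirroring
    the time-gated screenshot trigger described above."""

    def __init__(self, interval, clock):
        self.interval = interval
        self.clock = clock          # zero-argument callable returning seconds
        self.last = float('-inf')   # so the very first check always passes

    def ready(self):
        """Return True (and record the time) if the interval has elapsed."""
        now = self.clock()
        if now - self.last >= self.interval:
            self.last = now
            return True
        return False
```

The caller would wrap the screenshot dispatch in `if gate.ready(): ...`, keeping the rate-limiting state out of the streaming loop itself.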
The threading approach ensures that potentially slow I/O operations (file writing, database insertion) don't interrupt continuous video stream processing.

3.4 Frame Encoding and Transmission
Detected frames are JPEG-encoded and transmitted in multipart format:

    ret, buffer = cv2.imencode('.jpg', detected_frame,
                               [int(cv2.IMWRITE_JPEG_QUALITY), 90])
    frame_bytes = buffer.tobytes()
    yield (b'--frame\r\n'
           b'Content-Type: image/jpeg\r\n\r\n' + frame_bytes + b'\r\n')

The JPEG quality is set to 90, balancing image quality against transmission bandwidth. The yield keyword turns the function into a generator, implementing the HTTP multipart/x-mixed-replace protocol for streaming transmission. This protocol is specifically designed for server-push scenarios, where the server continuously sends updated content to the client without requiring new HTTP requests.

IV. Screenshot and Data Persistence Flow
4.1 Screenshot Metadata Construction
The take_screenshot function implements the complete screenshot workflow:

    def take_screenshot(results):
        hostname = socket.gethostname()
        current_time = datetime.datetime.now().strftime("%Y%m%d%H%M%S")
        screenshot_fileLoc = f'screenshots/{hostname}_{current_time}.jpg'
        fileName = screenshot_fileLoc[len('screenshots/'):-len('.jpg')]

The file name includes the hostname and a timestamp, ensuring uniqueness and traceability. This naming convention is particularly important in distributed deployments where multiple detection nodes might operate simultaneously. Including the hostname prevents file name collisions when screenshots from different machines are aggregated in a central storage system.
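The naming scheme can be sketched as a pair of helpers. These are hypothetical functions (screenshot_path and base_name are not in the original code) that reproduce the hostname-plus-timestamp convention and the slicing used to recover the bare file name; the hostname and clock are injectable so the format can be checked deterministically.

```python
import datetime
import socket

def screenshot_path(hostname=None, now=None):
    """Build a path of the form screenshots/<hostname>_<YYYYMMDDHHMMSS>.jpg,
    mirroring the naming scheme described above."""
    hostname = hostname or socket.gethostname()
    now = now or datetime.datetime.now()
    stamp = now.strftime("%Y%m%d%H%M%S")
    return f'screenshots/{hostname}_{stamp}.jpg'

def base_name(path):
    """Recover the bare file name (hostname_timestamp) the way the
    original code slices it: strip the directory prefix and the
    '.jpg' suffix."""
    return path[len('screenshots/'):-len('.jpg')]
```

With a fixed clock the round trip is easy to inspect: building a path for a given host and timestamp and then slicing it back should yield the hostname_timestamp stem.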
4.2 Missing PPE Item Identification

The system identifies undetected PPE items through set operations:

    completeArr = [0, 1, 2, 3, 5, 7]  # person, bicycle, car, motorcycle, airplane, bus
    if results and results[0].boxes:
        classArray = results[0].boxes.cls.numpy().copy()
        notFoundArr = np.setdiff1d(np.array(completeArr), np.array(classArray)).tolist()
        print("NOTFound" + str(notFoundArr))

np.setdiff1d calculates the set difference, identifying item categories that were not detected. This design employs reverse thinking: instead of recording what was detected, it records what is missing, which is crucial for safety monitoring. In PPE compliance systems, knowing what protective equipment is absent is more actionable than knowing what is present.

The completeArr array defines the expected objects in the scene. In a production PPE system, this would be customized to include actual PPE items such as helmets, safety vests, goggles, and gloves rather than the demo objects (person, bicycle, car, etc.) used in this implementation.

4.3 Database Writing

Each missing PPE item generates a database record:

    for value in notFoundArr:
        db.upload_metadata(fileName, 'screenshots', hostname, datetime.datetime.now(), int(value + 1))

The db.upload_metadata function is implemented in db.py:

    def upload_metadata(filename, filepath, hostname, datetime_obj, detectedobject):
        object_names = {
            1: 'person', 2: 'bicycle', 3: 'car', 4: 'motorcycle', 5: 'airplane',
            6: 'bus', 7: 'train', 8: 'truck', 9: 'boat', 10: 'traffic light'
        }
        object_name = object_names.get(detectedobject, f'object_{detectedobject}')
        cursor.execute('''
            INSERT INTO undetected_items (filename, filepath, hostname, dateandtime, detectedobject, object_name)
            VALUES (?, ?, ?, ?, ?, ?)
        ''', (filename, filepath, hostname, datetime_obj, detectedobject, object_name))

This function maps numerical category IDs to readable names and uses parameterized queries to prevent SQL injection attacks.
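The injection-resistance of the parameterized form can be demonstrated in isolation. The following sketch uses an in-memory database and a simplified two-column schema (an assumption for brevity, not the real table): the hostile string is stored as plain data rather than executed as SQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE undetected_items (filename TEXT, object_name TEXT)")

# a classic injection payload; with ? placeholders it is treated as data
hostile = "x'); DROP TABLE undetected_items; --"
cur.execute("INSERT INTO undetected_items (filename, object_name) VALUES (?, ?)",
            (hostile, "helmet"))
conn.commit()

# the table still exists and contains the payload verbatim
rows = cur.execute("SELECT filename, object_name FROM undetected_items").fetchall()
```

Had the values been spliced into the SQL string directly, the same input could have terminated the statement and dropped the table.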
The use of parameterized queries is a fundamental security practice that separates SQL code from data, preventing malicious input from altering the query structure.

4.4 Temporary File Cleanup

After screenshots are saved to the database, local files are immediately deleted:

    try:
        os.remove(screenshot_fileLoc)
        empty_temp()
    except FileNotFoundError:
        print(f"File '{screenshot_fileLoc}' not found. Skipping removal.")

The empty_temp() function cleans the entire temporary directory:

    def empty_temp():
        folder_name = "screenshots"
        folder_path = os.path.join(os.path.dirname(__file__), folder_name)
        if os.path.exists(folder_path):
            file_list = os.listdir(folder_path)
            for file_name in file_list:
                file_path = os.path.join(folder_path, file_name)
                if os.path.isfile(file_path):
                    os.remove(file_path)

This design prevents disk space exhaustion, which is essential for long-running monitoring systems. In production environments, this cleanup mechanism would typically be coordinated with network storage systems (like the Samba mount referenced in the code) where processed screenshots could be archived before local deletion.

V. Web Service Interface Design

5.1 Home Route

The home page supports both GET and POST requests:

    @app.route('/', methods=['GET', 'POST'])
    def index():
        if request.method == 'GET':
            return render_template('index.html')
        elif request.method == 'POST':
            data = request.json
            print(data)
            return render_template('index.html', data=data), 200

This route both provides the web interface and receives client data, implementing frontend-backend interaction. The dual-method approach allows the same endpoint to serve the initial page load (GET) and process user-submitted data (POST), simplifying the API structure.

5.2 Video Feed Endpoint

    @app.route('/video_feed')
    def video_feed():
        return Response(generate_frames(), mimetype='multipart/x-mixed-replace; boundary=frame')

This is the system's core endpoint, returning a multipart response.
Browsers continuously receive new frames through an <img> tag, achieving real-time video display. The multipart/x-mixed-replace MIME type is specifically designed for streaming scenarios, where each part replaces the previous one in the browser's rendering.

5.3 Detection Record Queries

The system provides two query endpoints:

    @app.route('/updates')
    def logs():
        try:
            data = db.get_all_detections()
            return render_template('updates.html', data=data)
        except:
            return render_template('updates.html', data=[])

    @app.route('/logs')
    def update():
        try:
            data = db.get_recent_detections(limit=15)
            return render_template('contents2.html', data=data)
        except:
            return render_template('contents2.html', data=[])

/updates returns all detection records, while /logs returns the 15 most recent records. Data is retrieved from the SQLite database through functions in db.py:

    def get_recent_detections(limit=15):
        conn = connect()
        cursor = conn.cursor()
        cursor.execute("SELECT * FROM undetected_items WHERE object_name='person' ORDER BY dateandtime DESC LIMIT ?;", (limit,))
        data1 = cursor.fetchall()
        # ... other object type queries
        return {
            'person': data1,
            'bicycle': data2,
            # ... other categories
        }

Data is returned grouped by item type, facilitating categorized display on the frontend. This organization allows users to quickly filter and analyze compliance issues by specific PPE item types.

5.4 Dynamic Camera Switching

The system supports runtime camera source switching:

    @app.route('/switch_camera', methods=['POST'])
    def switch_camera():
        global camera, camera_available, current_camera_source, width, height, fps
        data = request.get_json()
        new_source = data.get('source', 'demo')
        if camera and camera.isOpened():
            camera.release()
            camera = None

This endpoint releases the current camera resources, then initializes a new camera based on the requested source type. Supported source types include demo mode, test video, virtual camera, and physical camera.
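The source-dispatch step behind this endpoint can be sketched as a simple lookup. The mapping below is a hypothetical illustration (the dictionary, the helper name, and the specific values are assumptions); it mirrors the data.get('source', 'demo') fallback from the route.

```python
CAMERA_SOURCES = {
    "demo": None,               # no capture device; a generated demo frame is used
    "test_video": "test_video.mp4",
    "virtual": "/dev/video10",  # v4l2loopback virtual camera device
    "physical": 0,              # first physical camera index
}

def resolve_camera_source(name):
    # unknown sources fall back to demo mode, like data.get('source', 'demo')
    return CAMERA_SOURCES.get(name, CAMERA_SOURCES["demo"])

src = resolve_camera_source("virtual")
```

In the real endpoint, a non-None resolved value would be handed to the capture constructor before streaming resumes.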
This hot-swapping capability is particularly valuable during system testing and deployment, allowing operators to switch between different video sources without restarting the application.

VI. Error Handling and Demo Mode

6.1 Demo Frame Generation

When cameras are unavailable, the system generates demo frames:

    def create_demo_frame():
        frame = np.zeros((480, 640, 3), dtype=np.uint8)
        cv2.rectangle(frame, (50, 50), (590, 430), (255, 255, 255), 2)
        cv2.putText(frame, "SmartSafety PPE Detection", (100, 100), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
        cv2.putText(frame, "Demo Mode - No Camera", (120, 150), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 255), 2)
        timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        cv2.putText(frame, timestamp, (10, 470), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1)

This function creates a black background frame with system information and a real-time timestamp, ensuring the user interface has visual feedback under any circumstances. The demo mode serves multiple purposes: it allows system testing without hardware dependencies, provides a clear indication to users about the system status, and maintains UI consistency.

6.2 Graceful Shutdown

The system registers signal handlers for graceful shutdown:

    def cleanup(signum=None, frame=None):
        # signal handlers are invoked with (signum, frame); defaults allow direct calls too
        empty_temp()
        if camera_available and camera:
            camera.release()
        sys.exit(0)

    signal.signal(signal.SIGTERM, cleanup)

When responding to SIGTERM signals, the system cleans temporary files and releases camera resources, avoiding resource leaks. This is essential for containerized deployments where clean shutdowns prevent resource accumulation across container restarts.

VII.
Database Architecture Design

7.1 Table Structure

db.py defines two core tables:

    cursor.execute('''
        CREATE TABLE IF NOT EXISTS undetected_items (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            filename TEXT NOT NULL,
            filepath TEXT NOT NULL,
            hostname TEXT NOT NULL,
            dateandtime DATETIME NOT NULL,
            detectedobject INTEGER NOT NULL,
            object_name TEXT
        )
    ''')
    cursor.execute('''
        CREATE TABLE IF NOT EXISTS detection_stats (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            date DATE NOT NULL,
            total_detections INTEGER DEFAULT 0,
            missing_ppe_count INTEGER DEFAULT 0,
            compliance_rate REAL DEFAULT 0.0
        )
    ''')

The undetected_items table records each detected missing item, while the detection_stats table aggregates daily statistics, including total detections, missing PPE counts, and compliance rates. This two-tier data structure supports both detailed incident tracking and high-level trend analysis.

7.2 Statistical Calculations

The system calculates safety compliance rates:

    def update_detection_stats(total_detections, missing_ppe_count):
        compliance_rate = ((total_detections - missing_ppe_count) / total_detections * 100) if total_detections > 0 else 100
        cursor.execute('''
            INSERT OR REPLACE INTO detection_stats (date, total_detections, missing_ppe_count, compliance_rate)
            VALUES (?, ?, ?, ?)
        ''', (today, total_detections, missing_ppe_count, compliance_rate))

Compliance rate = (detections - missing items) / detections × 100%, which is a key KPI for safety management. This metric provides quantifiable evidence of workplace safety performance and can trigger alerts when compliance falls below acceptable thresholds.

VIII. Auxiliary Tools and Extensions

8.1 Test Video Generation

create_test_video.py generates a 60-second test video:

    def create_test_video():
        width, height = 640, 480
        fps = 30
        duration = 60
        total_frames = fps * duration
        fourcc = cv2.VideoWriter_fourcc(*'mp4v')
        out = cv2.VideoWriter('test_video.mp4', fourcc, fps, (width, height))
        for frame_num in range(total_frames):
            frame = np.zeros((height, width, 3), dtype=np.uint8)
            # Draw simulated person and PPE equipment

The video includes periodically appearing and disappearing helmets, safety goggles, and reflective vests, simulating real work scenarios. This test video is invaluable for system development and demonstration, providing consistent, reproducible input data.

8.2 Virtual Camera Setup

The setup_virtual_camera.sh script automates virtual camera configuration:

    modprobe v4l2loopback devices=1 video_nr=10 card_label="SmartSafety_Virtual_Camera" exclusive_caps=1

This creates a virtual camera device at /dev/video10, solving the problem of having no physical camera in containerized environments. The v4l2loopback driver creates virtual Video4Linux2 devices that can be fed with video streams programmatically, enabling testing and development without physical hardware.

IX. System Advantages and Engineering Practices

This system demonstrates multiple sound engineering practices:

1. Layered Architecture: Clear module separation (app, db, fs) improves maintainability
2. Fault-Tolerant Design: Multi-level degradation mechanisms ensure system robustness
3. Asynchronous Processing: Threaded screenshots avoid blocking the main process
4. Resource Management: Automatic temporary file cleanup and graceful shutdown
5. Flexible Configuration: Support for multiple video sources and runtime switching
6. Performance Optimization: Frame stride, half-precision inference, buffer control
7. Data Integrity: Parameterized queries and transaction management

The system implements a complete closed loop from video acquisition and real-time detection to data persistence and web visualization in approximately 1,800 lines of code, making it a solid example of an industrial-grade AI application. The modular design allows individual components to be tested, updated, or replaced independently, while the comprehensive error handling ensures operational reliability in production environments.
COMP3069 Computer Graphics Coursework (Assessment 3) - 2025 Description This assignment is compulsory and is worth 70% of your final mark for COMP3069. It is due for submission by 16:00 on 22 December 2025 (Monday). Late submissions will receive a penalty of 5% of the assignment grade per day and no submission will be allowed after 16:00 on 26 December 2025 (Friday). You should submit two files: 1. A zip file containing your Visual Studio solution, including code, textures, shader files, executable, `include` and `lib` directories. 50% 2. A single PDF file containing your report (description and demonstration). 20% General Requirements In this coursework, you are required to apply what you have learnt about 3D computer graphics and graphics programming to create and display an animated, interactive 3D scene. The coursework will test your understanding of core computer graphics concepts and your ability to implement them using a modern graphics API. Whenever possible, please first consider and apply the topics, concepts, and mechanisms covered in the lectures and lab sessions, including, but not limited to: 1. Graphics APIs (e.g., OpenGL, GLFW, GLSL, GLM), 2. 3D Modelling, 3. 3D Transformations, 4. Cameras, 5. Textures, 6. Lighting, and 7. Anti-aliasing. You are also required to write a report to outline how you have met the requirements. The report should contain screenshots of your program (code) and output (rendered scene) as part of your demonstration (10%), and a detailed description (2000 – 3000 words) of your implementation (10%). Technical Requirements Figure 1 shows a reference image containing various 3D models. Figure 1: A reference image containing 3D models (https://en.wikipedia.org/wiki/List_of_common_3D_test_models) You are expected to use your imagination and try to be creative when designing an outdoor scene. The technical requirements are as follows. 1. 
You should create all 3D models/objects (including 3D modelling & 3D transformations) and display them in your scene by referring to Figure 1. The connections and the overall appearance (including the colours) of the models/objects should be as similar to those in the reference image as possible. 2. The scene must contain all models/objects, and the models/objects must be animated (e.g., a sailing boat) whenever possible. 3. You need to create at least three lights of different types (i.e., directional, positional, and spotlights). You should be able to switch between the lights interactively. Switching the directional light on and off will display the scene at day and night times. 4. You need to find and apply textures (including at least images showing a sky and a river) that look realistic and merge the scene (background) with the objects in Figure 1. 5. You need to implement two different types of interactive cameras (i.e., model-viewer and fly-through cameras) to demonstrate your scene from different viewpoints. You should be able to switch between these cameras and allow user-controlled viewing via keyboard/mouse input. 6. The objects (e.g., the wheel) in the scene must be interactive, responding to user input (keyboard/mouse) with changes (e.g., transformation, colour, animation state) wherever and whenever possible. 7. You need to apply anti-aliasing mechanisms to smooth the edges of rendered objects. There is a little flexibility to account for your creative choices. You might want to: 1. Model objects by manually defining vertex data, or 2. Procedurally generate some vertex data. However, marks will be awarded based on the effective application of concepts and mechanisms taught in COMP3069. The more you demonstrate your understanding and use of the course material, the higher the marks you can expect. Submission Before submitting your code, test it thoroughly using Visual Studio 2022.
Marks will be lost for programs that do not compile or have issues linking to resources (e.g., shaders, textures). Please submit your work by 4.00pm on 22 December 2025. Please note that you can have only one submission of your work. Due to file size limits on Moodle, you are required to zip your entire coding solution and upload it to OneDrive using your UNNC account. Then, share this zip file with the module markers: Dr. Yue Li at Yue.Li3@nottingham.edu.cn Dr. Wooi Ping Cheah at Wooi-Ping.Cheah@nottingham.edu.cn Prof. Sean He at Sean.He@nottingham.edu.cn Create a shareable link for the zip file and paste this link into the text box on the submission site in Moodle. You must also submit your report in PDF format as a file attachment via the Moodle submission site. If the PDF file is larger than 10MB, you may zip it before submission. Ensure that the markers can successfully download your zip file from the provided OneDrive link, unzip it, open the `.sln` file in Visual Studio 2022, compile the project, and run the executable without any errors or additional configuration. The report is compulsory. Failure to submit a report will result in a zero mark for the entire coursework.
In-Class Essay: Are Plants Conscious? Context: You have read about consciousness in humans (Nagel, Chalmers, and Peterson) and learned about plant intelligence in AWARE, The Mother Tree Project, Pollen’s “TED Talk”, and Entangled Life (Sheldrake). Task: Using ideas from Nagel and/or Chalmers, write an essay in which you explain whether you think plants are conscious or not. In your essay: ● Start by explaining what consciousness means in your own words. ● Use at least one idea from Nagel or Chalmers to support your reasoning. ● Use examples from AWARE, the TED talk by Simard or Pollen, and/or Sheldrake to support your ideas. ● End by explaining your own opinion about what this tells us about what it means to be conscious. Expectations: ● Write 3–4 paragraphs. ● Use specific examples from class materials (you can note the source in parentheses, like AWARE or Sheldrake). ● Keep your focus on explaining your reasoning clearly. ● USE YOUR TRANSITIONS Helpful Starters: ● “According to Nagel, consciousness means …” ● “In AWARE, scientists show that plants …” ● “I think plants are / are not conscious because …” ● “This shows that consciousness might mean …”
COMP 9517 Computer Vision Project Specification Limit your answers to 300 words per question. Short, to-the-point and clear answers are expected. Point form is acceptable, but discouraged. Throughout all of the material provided for this assignment there is an abundance of discussion of distributed database design. While that is extremely important, it is beyond the scope of this course and should not be considered relevant to the assignment. It may be impossible to avoid mentioning database issues, but they should not be seen as the focus (i.e., don’t go down that rabbit hole). Be sure not to use the buzzwords you find (as this is not a marketing course); your overall goal, for all questions, is to connect what industry discusses/provides with the foundational design principles discussed in class. If you use the words “The Cloud”, “Docker” or “Kubernetes”, you will immediately be given a 0. Answer the following questions: 1.1: Why is load balancing important? Where in the infrastructure does a load balancer sit? How does it choose which machine should accept the request? How does this work with “elastic computing”? 1.2: Netflix uses Chaos Engineering to do what? Why would Netflix choose this as a solution? What is the positive result of the Chaos? 1.3: What is a Message Broker? What popular Message Broker solutions exist? Give an example, with a drawn diagram, of how it would work in an application. 1.4: Read Scaling Memcache at Facebook. How was the zone syncing done, and why was it done that way?
CE315 MOBILE ROBOTICS (a) Explain the receiver operating characteristic curve in classification. (2 marks) (c) Describe the situations in which you would prefer to use Student’s t-test, Welch’s t-test, and the Wilcoxon Rank-Sum test to conduct hypothesis testing, respectively. (2 marks) (a) Describe the following concepts in association rule mining: support, confidence, lift, and leverage. (2 marks) (a) Explain what the concept “residual” means in linear regression. (2 marks) (a) Assume that you are working on a classification problem as a data scientist. It is found that the dataset contains many correlated variables and most of them are categorical variables. Which of the following classifiers would be most suited for modelling this dataset: logistic regression, decision tree, or naïve Bayes classifier? Explain your answer. (3 marks) Question 15 (6 marks) (b) Describe deep convolutional neural networks for image classification. (3 marks)
2410 US craft beers This assignment is due by 3pm on Tuesday, May 3rd and is worth 25% of your final grade. You can do this assignment in groups of up to three, with a single submission. Your job with the Operations Research consulting company is going well. Your boss would like you to continue working with Pacific Paradise to improve their vaccine distribution strategy and their attempts to eradicate the virus. Communications to you from their team will be provided through Blackboard. The first communication for this assignment will appear at 4pm on Monday, April 4th, with the final communication appearing at 4pm on Tuesday, April 26th. You will need to prepare a report for your boss and a presentation for the client: Section A – Report to your boss, to include: A general mathematical formulation for each of the two problems, including definitions of sets, data, variables, objective function and constraints. 7 marks Two Python files with the problems modelled for Gurobi. These should be easy to relate back to the formulation. Your boss will attempt to execute these models. 5 marks Section B – Report to the client, to include: Written responses that clearly and concisely address the needs of the client given through the communications. 5 marks A short video presentation (maximum 7 minutes) where you summarise your results and provide insights into the solution, such as identifying key constraints or explaining the effects on costs of additional constraints provided by the client. 8 marks Submit your report and Python files via Blackboard, using PDF for the report (saved from Word or created in LaTeX). There will be a separate assignment upload for your video file. Only one submission per group is necessary, but make sure all names are clearly shown on your report. Each student will receive separate data from the client, but a group need only consider one data set in the report.
Artificial Intelligence Use TASK 1 TASK 2 Deliverables: – A diagram representing the state transitions of the robot. – A brief description of the states and transitions. – The robot’s Java file. You should try your best to implement the robot as the state machine you designed. Marks will be deducted if the robot is not implemented as designed.
Individual Assignment: Text Analytics 1. Instructions In this assignment, you will be required to write Haskell functions that simulate the playing of a variation of UNO. 1.1 Data File Specification An example of a properly formatted file is shown in Figure 1. 2. One Player, One Move The first part (onePlayerOneMove in the file csce322h0mework03part01.hs) will take in three (3) arguments (a discard pile, a deck, and a hand) and return the state of the game (hand/deck/discard) that is the result of the player in possession of the hand playing a card. The precedence for playing a card is as follows:

1. Extend a Wild Draw 4 if that is the most recently played (left-most) card (behind a r,-, g,-, b,-, or y,-)
2. Extend a Draw 2 if that is the most recently played card (not behind a r,-, g,-, b,-, or y,-)
3. Play the left-most card that matches the color of the most recently played card in the discard pile
4. Play the left-most Wild Draw 4
5. Play the left-most card that matches the symbol of the most recently played card in the discard pile
6. Play the left-most Wild
7. Draw (add to the back of the hand) the left-most card in the deck

If playing a Wild (or Wild Draw 4), the next player needs to know which color to play next. If the hand still contains non-Wild cards, play the card c,- (where c is the color of the left-most non-Wild card remaining in the hand). If the hand only contains Wild cards, play r,-. If the hand has been emptied (and, therefore, the game won), play -,-. In the event that you need to extend a Wild Draw 4 or Draw 2 and cannot, you must draw all of the cards (possibly) built up from other players extending Wild Draw 4s or Draw 2s. If a Draw 4 was played, it would be directly behind a r,-, g,-, b,-, or y,-. Once you draw the number of cards required (or the size of the deck, whichever is smaller), place a copy of the r,-, g,-, b,-, or y,- from the front of the discard pile on the front of the discard pile.
This will let future players know that they must continue with a card of that color. 3. One Player, Many Moves The second part (onePlayerManyMoves in the file csce322h0mework03part02.hs) will take in three (3) arguments (a discard pile, a deck, and a hand), and returns the hand/deck/discard that is the result of the player in possession of the hand playing as many cards in a row as they can before emptying their hand or being unable to continue playing cards. The same rules for precedence of moves as onePlayerOneMove apply. 4. Many Players, One Move The third part (manyPlayersOneMove in the file csce322h0mework03part03.hs) will take in three (3) arguments (a discard pile, a deck, and a list of hands) and return the game (hands/deck/discard) that is the result of n turns being taken for a game with n players. The same rules for precedence apply, but skip and reverse cards will have these effects: if Player p plays a reverse on turn t, Player p − 1 will take turn t + 1 (or Player n will take turn t + 1 if Player 1 played the reverse), assuming the turns are proceeding in ascending order. If turns are proceeding in descending order, Player p + 1 will take turn t + 1 (or Player 1 will take turn t + 1 if Player n played the reverse). If Player p plays a skip on turn t, Player p + 2 will take turn t + 1 (or Player 1 will take turn t + 1 if Player n − 1 played the skip, or Player 2 will take turn t + 1 if Player n played the skip) if turns are proceeding in ascending order. If turns are proceeding in descending order, Player p − 2 will take turn t + 1 (or Player n − 1 will take turn t + 1 if Player 1 played the skip, or Player n will take turn t + 1 if Player 2 played the skip). 5. Many Players, Many Moves The fourth part (manyPlayersManyMoves in the file csce322h0mework03part04.hs) will take in three (3) arguments (a discard pile, a deck, and hands) and return the game (hands/deck/discard) that is the result of a game being played to its conclusion.
Instead of n players combining to take n turns, turns will be taken following the rules of manyPlayersOneMove until a player empties their hand or the player whose turn it is cannot continue the game (either by playing a card or drawing a card from the deck). 6. Naming Conventions Your files should follow the naming convention of csce322h0mework03part01.hs, csce322h0mework03part02.hs, csce322h0mework03part03.hs, and csce322h0mework03part04.hs. 6.1 Helpers.hs A file named Helpers.hs has been provided with the functionality to read the .uno files into matrices. If a modified Helpers.hs file is not included with your submission, the default will be used in its place. 7. webgrader Note Submissions will be tested with ghc. cse.unl.edu is currently running version 8.0.2 of ghc. 8. Point Allocation 9. External Resources Learn Haskell Fast and Hard Learn You a Haskell for Great Good! Red Bean Software Functional Programming Fundamentals The Haskell Cheatsheet
DSCI553 Foundations and Applications of Data Mining FALL 2021 Assignment 3 Deadline: October 26th 11:59 PM PST 1. Overview of the Assignment In Assignment 3, you will complete two tasks. The goal is to familiarize you with Locality Sensitive Hashing (LSH) and different types of collaborative-filtering recommendation systems. The dataset you are going to use is a subset of the Yelp dataset used in the previous assignments. 2. Assignment Requirements 2.1 Programming Language and Library Requirements a. You must use Python to implement all tasks. You can only use standard Python libraries (i.e., external libraries like numpy or pandas are not allowed). There will be a 10% bonus for each task (or case) if you also submit a Scala implementation and both your Python and Scala implementations are correct. b. You are required to only use the Spark RDD to understand Spark operations. You will not receive any points if you use Spark DataFrame or DataSet. 2.2 Programming Environment Python 3.6, JDK 1.8, Scala 2.12, and Spark 3.1.2 We will use these library versions to compile and test your code. There will be no points if we cannot run your code on Vocareum. On Vocareum, you can call `spark-submit` located at `/opt/spark/spark-3.1.2-bin-hadoop3.2/bin/spark-submit`. (Do not use the one at `/home/local/spark/latest/bin/spark-submit`, which is version 2.4.4.) 2.3 Write your own code Do not share your code with other students!! We will combine all the code we can find from the Web (e.g., GitHub) as well as other students’ code from this and other (previous) sections for plagiarism detection. We will report all the detected plagiarism. 3. Yelp Data In this assignment, the datasets you are going to use are from: https://drive.google.com/drive/folders/1SufecRrgj1yWMOVdERmBBUnqz0EX7ARQ?usp=sharing We generated the following two datasets from the original Yelp review dataset with some filters.
We randomly took 60% of the data as the training dataset, 20% of the data as the validation dataset, and 20% of the data as the testing dataset. a. yelp_train.csv: the training data, which only include the columns: user_id, business_id, and stars. b. yelp_val.csv: the validation data, which are in the same format as training data. c. We are not sharing the test dataset. d. other datasets: providing additional information (like the average star or location of a business) 4. Tasks Note: This Assignment has been divided into 2 parts on Vocareum. This has been done to provide more computational resources. 4.1 Task1: Jaccard based LSH (2 points) In this task, you will implement the Locality Sensitive Hashing algorithm with Jaccard similarity using yelp_train.csv. In this task, we focus on the “0 or 1” ratings rather than the actual ratings/stars from the users. Specifically, if a user has rated a business, the user’s contribution in the characteristic matrix is 1. If the user hasn’t rated the business, the contribution is 0. You need to identify similar businesses whose similarity >= 0.5. You can define any collection of hash functions that you think would result in a consistent permutation of the row entries of the characteristic matrix. Some potential hash functions are: f(x)= (ax + b) % m or f(x) = ((ax + b) % p) % m where p is any prime number and m is the number of bins. Please carefully design your hash functions. After you have defined all the hashing functions, you will build the signature matrix. Then you will divide the matrix into b bands with r rows each, where b x r = n (n is the number of hash functions). You should carefully select a good combination of b and r in your implementation (b>1 and r>1). Remember that two items are a candidate pair if their signatures are identical in at least one band. Your final results will be the candidate pairs whose original Jaccard similarity is >= 0.5. 
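The signature-and-banding pipeline described above can be sketched with the standard library only, as the assignment requires. The toy characteristic matrix, the hash family f(x) = (a*x + b) % m, and the choice b=3, r=2 below are illustrative assumptions, not tuned choices.

```python
import random

random.seed(42)
NUM_HASHES = 6
NUM_USERS = 4   # rows of the characteristic matrix

# characteristic matrix represented as the set of user-row indices per business
business_rows = {
    "b1": {1, 2, 3},
    "b2": {1},
    "b3": {1, 2, 3},
}

# one (a, b) pair per hash function in the family f(x) = (a*x + b) % m
params = [(random.randrange(1, NUM_USERS * 10), random.randrange(NUM_USERS * 10))
          for _ in range(NUM_HASHES)]

def signature(rows):
    # minhash: for each hash function, the minimum hashed row index
    return tuple(min((a * r + b) % NUM_USERS for r in rows) for a, b in params)

sigs = {biz: signature(rows) for biz, rows in business_rows.items()}

def candidates(sigs, b, r):
    # two businesses are candidates if any band of r signature rows matches exactly
    pairs = set()
    names = sorted(sigs)
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            s1, s2 = sigs[names[i]], sigs[names[j]]
            for band in range(b):
                if s1[band * r:(band + 1) * r] == s2[band * r:(band + 1) * r]:
                    pairs.add((names[i], names[j]))
                    break
    return pairs

pairs = candidates(sigs, b=3, r=2)
```

Since b1 and b3 have identical row sets, their signatures agree in every band, so they are guaranteed to surface as a candidate pair; in the real task each candidate pair would then be checked against the exact Jaccard threshold of 0.5.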
You need to write the final results into a CSV file according to the output format below.

Example of Jaccard Similarity:

                user1  user2  user3  user4
    business1     0      1      1      1
    business2     0      1      0      0

Jaccard Similarity (business1, business2) = #intersection / #union = 1/3

Input format: (we will use the following command to execute your code)

Python: spark-submit task1.py <input_file_name> <output_file_name>
Scala: spark-submit --class task1 hw3.jar <input_file_name> <output_file_name>

Param: input_file_name: the name of the input file (yelp_train.csv), including the file path.
Param: output_file_name: the name of the output CSV file, including the file path.

Output format: IMPORTANT: Please strictly follow the output format since your code will be graded automatically. We will not regrade because of formatting issues. a. The output file is a CSV file, containing all business pairs you have found. The header is “business_id_1, business_id_2, similarity”. Each pair itself must be in alphabetical order. The entire file also needs to be in alphabetical order. There is no requirement for the number of decimals for the similarity value. Please refer to the format in Figure 2. Figure 2: a CSV output example for task1 Grading: We will compare your output file against the ground truth file using precision and recall metrics. Precision = true positives / (true positives + false positives) Recall = true positives / (true positives + false negatives) The ground truth file has been provided in the Google drive, named “pure_jaccard_similarity.csv”. You can use this file to compare your results to the ground truth as well. The ground truth dataset only contains the business pairs (from yelp_train.csv) whose Jaccard similarity >= 0.5. The business pair itself is sorted in alphabetical order, so each pair only appears once in the file (i.e., if pair (a, b) is in the dataset, (b, a) will not be there). In order to get full credit for this task you should have precision >= 0.99 and recall >= 0.97.
If not, then you will get only partial credit based on the formula:

(Precision / 0.99) * 0.4 + (Recall / 0.97) * 0.4

Your runtime should be less than 100 seconds. If your runtime is 100 seconds or more, you will not receive any points for this task.

4.2 Task 2: Recommendation System (5 points)

In Task 2, you are going to build different types of recommendation systems using yelp_train.csv to predict the ratings/stars for given user ids and business ids. You can make any improvements to your recommendation system in terms of speed and accuracy. You can use the validation dataset (yelp_val.csv) to evaluate the accuracy of your recommendation systems, but please don't include it as training data.

There are two ways to evaluate your recommendation systems.

First, you can compare your results to the corresponding ground truth and compute the absolute differences. You can divide the absolute differences into 5 levels and count the number in each level, for example:

>=0 and <1: 12345
>=1 and <2: …
>=2 and <3: …
>=3 and <4: …
>=4: 12

This means that there are 12345 predictions with < 1 difference from the ground truth. This way you will be able to see the error distribution of your predictions and improve the performance of your recommendation systems.

Additionally, you can compute the RMSE (Root Mean Squared Error) using the following formula:

RMSE = sqrt( (1/n) * Σ_i (Pred_i − Rate_i)² )

where Pred_i is the prediction for business i, Rate_i is the true rating for business i, and n is the total number of businesses you are predicting.

In this task, you are required to implement:

Case 1: Item-based CF recommendation system with Pearson similarity (2 points)
Case 2: Model-based recommendation system (1 point)
Case 3: Hybrid recommendation system (2 points)

4.2.1. Item-based CF recommendation system

Please strictly follow the slides to implement an item-based recommendation system with Pearson similarity.

4.2.2.
Model-based recommendation system

You need to use XGBRegressor (a regressor based on decision trees) to train a model. You need to use this API: https://xgboost.readthedocs.io/en/latest/python/python_api.html — the XGBRegressor inside the xgboost package.

Please choose your own features from the provided extra datasets, thinking about them from the customer's perspective. For example, the average stars given by a user and the number of reviews most likely influence the prediction result. You need to select other features and train a model based on them. Use the validation dataset to validate your result, and remember not to include it in your training data.

4.2.3. Hybrid recommendation system

Now that you have the results from the previous models, you will need to choose a way from the slides to combine them and design a better hybrid recommendation system. Here are two examples of hybrid systems:

Example 1: You can combine them as a weighted average:

final_score = α × score_item_based + (1 − α) × score_model_based

The key idea is: the CF focuses on the neighbors of the item, while the model-based RS focuses on the users and items themselves. Specifically, if the item has a smaller number of neighbors, then the weight of the CF should be smaller. Meanwhile, if two restaurants are both 4 stars but the first one has 10 reviews while the second one has 1000 reviews, the average star of the second one is more trustworthy, so the model-based RS score should weigh more. You may need to find other features to generate your own weight function to combine them.

Example 2: You can treat the combination as a classification problem. Again, the key idea is: the CF focuses on the neighbors of the item, while the model-based RS focuses on the users and items themselves. As a result, in our dataset, some item-user pairs are more suitable for the CF while others are not. You need to choose some features to classify which model you should use for each item-user pair.
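The weighted-average idea of Example 1 can be sketched as below. The weight function here (based only on the item's CF neighbor count, capped at an arbitrary threshold) is purely illustrative; the assignment asks you to design your own, possibly using more features.

```python
def hybrid_prediction(cf_pred, model_pred, n_neighbors, neighbor_cap=50):
    """final = alpha * cf_pred + (1 - alpha) * model_pred.

    alpha grows with the number of co-rated neighbors the item-based CF
    could use: with few neighbors the CF score is unreliable, so the
    model-based score dominates. neighbor_cap is an illustrative choice.
    """
    alpha = min(n_neighbors, neighbor_cap) / neighbor_cap
    return alpha * cf_pred + (1 - alpha) * model_pred
```

For example, with no neighbors at all (`n_neighbors=0`) the hybrid falls back entirely on the model-based prediction, and with `n_neighbors >= neighbor_cap` it uses the CF prediction alone.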
If you train a classifier, you are allowed to upload the pre-trained classifier model, named "model.md", to save running time on Vocareum. You can use the pickle library, the joblib library, or others if you want; here is an example: https://scikit-learn.org/stable/modules/model_persistence.html. You also need to upload the training script, named "train.py", so that we can verify your model.

Some possible features (other features may also work):

- Average stars of a user, average stars of a business, the variance of the review history of a user or a business.
- Number of reviews of a user or a business.
- Yelp account starting date, number of fans.
- The number of people who think a user's review is useful/funny/cool, and the number of compliments. (Be careful with these features. For example, sometimes when I visit a horrible restaurant, I will give full stars because I hope I am not the only one who wasted money and time there. Sometimes people are satirical. :-))

Input format: (we will use the following commands to execute your code)

Case 1: spark-submit task2_1.py
Param: train_file_name: the name of the training file (e.g., yelp_train.csv), including the file path
Param: test_file_name: the name of the testing file (e.g., yelp_val.csv), including the file path
Param: output_file_name: the name of the prediction result file, including the file path

Case 2: spark-submit task2_2.py
Param: folder_path: the path of the dataset folder, which contains exactly the same files as the Google Drive.
Param: test_file_name: the name of the testing file (e.g., yelp_val.csv), including the file path
Param: output_file_name: the name of the prediction result file, including the file path

Case 3: spark-submit task2_3.py
Param: folder_path: the path of the dataset folder, which contains exactly the same files as the Google Drive.
Param: test_file_name: the name of the testing file (e.g., yelp_val.csv), including the file path
Param: output_file_name: the name of the prediction result file, including the file path

Output format:

a.
The output file is a CSV file containing all the prediction results for each user and business pair in the validation/testing data. The header is "user_id, business_id, prediction". There is no requirement on the order in this task, and no requirement on the number of decimals for the prediction values. Please refer to the format in Figure 3.

Figure 3: Output example in CSV for Task 2

Grading:

We will compare your prediction results against the ground truth. We will grade all the cases in Task 2 based on your accuracy using RMSE. For your reference, the table below shows the RMSE baselines and running times for predicting the validation data. The time limit of Case 3 is set to 30 minutes because we hope you consider this factor and try to improve it as much as possible (hint: this will help you a lot in the competition project at the end of the semester).

              Case 1   Case 2   Case 3
RMSE          1.09     1.00     0.99
Running Time  130s     400s     1800s

For grading, we will use the testing data to evaluate your recommendation systems. If you can pass the RMSE baselines in the above table, you should be able to pass the RMSE baselines for the testing data. However, if your recommendation system only passes the RMSE baselines for the validation data, you will receive 50% of the points for each case.

5. Submission

You need to submit the following files on Vocareum with exactly the same names:

a. Four Python scripts:
● task1.py
● task2_1.py
● task2_2.py
● task2_3.py

b. [OPTIONAL] hw3.jar and four Scala scripts:
● task1.scala
● task2_1.scala
● task2_2.scala
● task2_3.scala

6. Grading Criteria

(% penalty = % penalty of possible points you get)

1. You can use your free 5-day extension separately or together. (Google Forms link for extensions: https://docs.google.com/forms/d/e/1FAIpQLSeSHzGWzPi3iuS-zNYyDLb-hhP4ancMEZgKDiwYZLmhyYhKFw/viewform )
2. There will be a 10% bonus if you use both Scala and Python.
3.
We will check your code against all the code we can find on the web (e.g., GitHub) as well as other students' code from this and previous sections for plagiarism detection. If plagiarism is detected, you will receive no points for the entire assignment and we will report all detected plagiarism.

4. All submissions will be graded on Vocareum. Please strictly follow the format provided; otherwise you won't receive points even if the answer is correct.
5. If the outputs of your program are unsorted or partially sorted, there will be a 50% penalty.
6. Do NOT use Spark DataFrame, Dataset, or SparkSQL.
7. We can regrade your assignment within seven days once the scores are released. We will not accept any regrading requests after a week. There will be a 20% penalty if our grading turns out to be correct.
8. There will be a 20% penalty for late submissions within a week, and no points after a week.
9. The Scala bonus is calculated only if your results from Python are correct. There are no partial points for Scala. See the examples below:

Task    Score for Python               Score for Scala (10% of previous column if correct)   Total
Task 1  Correct: 3 points              Correct: 3 × 10%                                      3.3
Task 1  Wrong: 0 points                Correct: 0 × 10%                                      0.0
Task 1  Partially correct: 1.5 points  Correct: 1.5 × 10%                                    1.65
Task 1  Partially correct: 1.5 points  Wrong: 0                                              1.5

7. Common problems causing failed submissions on Vocareum / FAQ

(If your program seems to run successfully on your local machine but fails on Vocareum, please check these.)

1. Try your program in the Vocareum terminal. Remember to set the Python version to python3.6, and use the latest Spark: /opt/spark/spark-3.1.2-bin-hadoop3.2/bin/spark-submit
2. Check the input command-line format.
3. Check the output format, for example, the header, tags, and typos.
4. Check the requirements for sorting the results.
5. Your program scripts should be named task1.py, task2_1.py, etc.
6. Check whether your local environment fits the assignment description, i.e. version and configuration.
7.
If you implement the core part in plain Python instead of Spark, or implement it in a high-time-complexity way (e.g., searching for an element in a list instead of a set), your program may be killed on Vocareum because it runs too slowly.

8. You are required to use only Spark RDD, in order to understand Spark operations more deeply. You will not get any points if you use Spark DataFrame or Dataset. Don't import SparkSQL.
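The complexity warning in point 7 is easy to check for yourself: membership tests on a Python list are a linear scan (O(n)), while a set uses a hash lookup (O(1) on average). A quick sketch:

```python
import time

items = list(range(200_000))
as_list, as_set = items, set(items)

start = time.perf_counter()
found_in_list = 199_999 in as_list   # linear scan: O(n)
list_time = time.perf_counter() - start

start = time.perf_counter()
found_in_set = 199_999 in as_set     # hash lookup: O(1) average
set_time = time.perf_counter() - start

print(f"list: {list_time:.6f}s  set: {set_time:.6f}s")
```

Inside a loop that runs once per rating pair, that difference alone can be what pushes a submission past the time limit.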
CS-350 – Fundamentals of Computing Systems

Part 1

Design a class that holds personal data: name, address, age, and phone number. Write appropriate methods (constructor, getters, and setters). Demonstrate the class by writing a program that creates three instances of the class. You can populate the information in each object using the Scanner class. Please do not use any personal information as data in the project. Submit a class diagram, test runs, and code (.java file) with your submission. Please create a zip file and submit a single attachment for Part 1.

Part 2

Create a coin toss simulation program. The simulation program should toss a coin randomly and track the count of heads and tails. You need to write a program that can perform the following operations:

a. Toss a coin randomly.
b. Track the count of heads and tails.
c. Display the results.

Design and Test

Let's decide what classes, methods, and variables will be required in this task and their significance:

Write a class called Coin. The Coin class should have an instance variable sideUp. The sideUp field will hold either "heads" or "tails", indicating the side of the coin that is facing up. The Coin class should have the following methods:

• A void method named toss, which simulates the tossing of a coin. When the toss method is called, it randomly determines the side of the coin that is facing up ("heads" or "tails") and sets the sideUp field accordingly.
• A no-arg constructor, which randomly determines the side of the coin that is facing up ("heads" or "tails") and initializes the sideUp field accordingly.
• A method named getSideUp that returns the value of the sideUp field.

Create a toss method that uses a loop to toss the coin 20 times. Each time the coin is tossed, display the side that is facing up. The method should keep count of the number of times heads and tails are facing up, and display those values after the loop finishes.

Write the test program, which has a main method and demonstrates the Coin class.
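Before writing the Java version, it can help to sketch the required behaviour in a scripting language. The following is a minimal Python sketch of the same logic (the submission itself must be Java, and the names here mirror, but are not, the required Java identifiers):

```python
import random

class Coin:
    """Holds which side is facing up: "heads" or "tails"."""

    def __init__(self):
        # the no-arg constructor randomly initialises the side facing up
        self.toss()

    def toss(self):
        # randomly determine the side of the coin that is facing up
        self.side_up = random.choice(["heads", "tails"])

    def get_side_up(self):
        return self.side_up

def toss_twenty(coin, n=20):
    """Toss the coin n times, display each result, and return the counts."""
    counts = {"heads": 0, "tails": 0}
    for _ in range(n):
        coin.toss()
        side = coin.get_side_up()
        print(side)
        counts[side] += 1
    return counts
```

The Java version follows the same shape: a `sideUp` field set by `toss()`, a no-arg constructor that calls it, a `getSideUp()` accessor, and a loop in the test program that tallies the two outcomes.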
Assessment 3 and 4 overarching outline

The assignment will involve using modern OpenGL to render a scene. Scene graphs are required in the modelling process, and animation controls are required for hierarchical models.

Figure 1 shows a room scene containing three objects, a robot, and a window looking out onto a view. The whole scene can be modelled using transformed planes, cubes, and spheres. The scene shows five poses of a robot that has entered a museum exhibition room. Pose 1 is on entering the room. Pose 2 is viewing a large mobile phone displayed on a plinth. Pose 3 is viewing a spotlight on the floor as it swings from side to side on a stand. Pose 4 is viewing a large egg on a stand. Pose 5 is looking out of the window.

3. Requirements

3.1 The room

• The walls and floor should be texture mapped to look like a room in a museum. For example, the floor could be made of wood. The walls should have a paint pattern on them.
• An outside scene can be seen through the window. For example, this might be a garden scene or a city scene. You could use a picture out of a window in your own accommodation, or you could invent a picture.
• Consider how you might do the scene outside:
  o Or should there be a hole in the wall and a texture map pasted onto another surface that is a certain distance outside the window? This will mean making the wall from a set of pieces, e.g. eight abutting pieces with the window as a middle area. Figure 2b illustrates this.
INFS2200/7903 PROJECT ASSIGNMENT

General instructions

• We have provided you with a scaffold to build your code on, i.e. starter code. You should download the scaffold first and then start coding. As with Assignments 1 and 2, you can find this code either in your workspace on Ed or on mycourses.
• Search for the keyword "updated" to find any places where this PDF has been updated.
• T.A. office hours will be posted on mycourses under the "Office Hours" tab. For Zoom office hours, if there is a problem with the Zoom link or it is missing, please email the T.A. and the instructor.
• If you would like a T.A. to look at your solution code outside of office hours, you may post a private question on the discussion board.
• We have provided you with some examples to test your code. (See TestExposedA3.java on mycourses. The same tests are run on Ed when you submit.) If you pass all these exposed tests, you will get 59/100. Unlike with Assignments 1 and 2, the remaining tests will be private. Therefore, we strongly encourage you to come up with more creative test cases, and to share these test cases on the discussion board. There is a crucial distinction between sharing test cases and sharing solution code: we encourage the former, whereas the latter is a serious academic offense.
• Late policy: Late assignments will be accepted up to two days late and will be penalized by 10 points per day. If you submit one minute late, this is equivalent to submitting 23 hours and 59 minutes late, etc. So make sure you are nowhere near that threshold when you submit. Please see the submission instructions at the end of this document for more details.

Tertiary Search Tree (TST)

The purpose of this assignment is to give you some experience with recursion. Recursion is a fundamental method in programming, and the sooner you learn it, the better! The topic for this assignment is trees.
You will work with a data structure that we will call a tertiary search tree or TST.¹ This is like a binary search tree (BST), except that each node in a TST has three children rather than two. We call the three children left, middle, and right. Each node stores an element whose class type implements Comparable, so you can assume access to a compareTo() method. The subtrees defined by the left, middle, and right children of a node contain elements that are less than, equal to, or greater than the element at that node, respectively.

Lecture 25 gave pseudocode for binary search tree (BST) methods. You will implement similar methods for your TST class. Make sure you understand the BST pseudocode before you try to implement a TST version!

You will work with three classes:

TSTNode class (25 points)

A TSTNode has four fields: an element and three children (left, middle, right). For this class, you will write:

• a constructor TSTNode(T element) (5 points) – assigns the given element to this node; the children are automatically initialized to null.
• height() (10 points) – returns an int which is the height of the subtree whose root is this TSTNode.
• findMin() (5 points) – returns the TSTNode containing the minimum element in the tree; you can use this as a helper method.
• findMax() (5 points) – returns the TSTNode containing the maximum element in the tree; you can use this as a helper method.

You may add your own helper methods to the TSTNode class. For example, you may wish to overload the constructor.

TST class (55 points)

A TST has just one field: the root TSTNode of the tree.
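The recursive shape of the node methods above can be sketched in a few lines. This is an illustrative Python sketch of the recursion only (the assignment requires Java with generics and Comparable); note how each method recurses on children and how the ordering invariant makes findMin/findMax walk only left/right:

```python
class TSTNode:
    """Node with an element and three children: by the TST invariant, the
    left subtree holds smaller elements, the middle equal ones, and the
    right greater ones."""

    def __init__(self, element):
        self.element = element
        self.left = self.middle = self.right = None

    def height(self):
        # height of the subtree rooted here; a leaf has height 0
        child_heights = [c.height()
                         for c in (self.left, self.middle, self.right) if c]
        return 1 + max(child_heights) if child_heights else 0

    def find_min(self):
        # the minimum element lives as far left as possible
        return self.left.find_min() if self.left else self

    def find_max(self):
        # symmetrically, the maximum lives as far right as possible
        return self.right.find_max() if self.right else self
```

The Java version is structurally identical, with compareTo() replacing Python's built-in comparisons where element ordering is needed.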
A TST has several methods that are provided to you:

• height() – returns an int which is the height of the tree; it depends on the height() helper method in the TSTNode class (see above).
• toString() – can be used to visualize the tree structure; recall that you can 'call' the toString() method using System.out.print(tree), where tree is of type TST.

¹ This is not to be confused with a "ternary search tree", which is something else. Also, others may have used the term "tertiary search tree" to mean something other than what we mean here. We will ignore these other usages of the term.