APPLIED MACHINE LEARNING - 2026/7
Module code: EEEM068
Module Overview
Machine learning and deep learning have emerged as core areas within computer science and artificial intelligence, drawing on methods from statistics, applied mathematics, pattern recognition, and neural network computation. This module provides a comprehensive introduction to the theory and application of advanced machine learning and deep learning techniques.
The module explores how these methods are applied across a wide range of domains, including natural language processing, large language models, medical imaging, healthcare, audio and speech processing, computer vision, multimodal large language models, large multimodal models, and financial technologies (fintech). Students will gain an understanding of how modern AI systems are designed to process and analyse diverse data types such as images, video, text, and audio.
The deep learning algorithms covered in this module are widely used in industry, from AI start-ups to major technology companies such as Google, Meta, Microsoft, Nvidia, Amazon, and Tesla. The module combines theoretical foundations with practical implementation, enabling students to develop, evaluate, and apply machine learning models to real-world problems.
Module provider
Computer Science and Electronic Eng
Module Leader
RANA Muhammad (CS & EE)
Number of Credits: 15
ECTS Credits: 7.5
Framework: FHEQ Level 7
Module cap (Maximum number of students): N/A
Overall student workload
Independent Learning Hours: 72
Lecture Hours: 22
Laboratory Hours: 22
Guided Learning: 10
Captured Content: 24
Module Availability
Semester 2
Prerequisites / Co-requisites
Strictly speaking, there are no pre-requisites or co-requisites.
However, a basic knowledge of machine learning will be very beneficial, and knowledge of the EEEM066 Fundamentals of Machine Learning module is a plus. If you have not studied basic machine learning or EEEM066, it will be beneficial to put extra focus into the first few weeks, and you may consider watching a small number of additional pre-recorded videos during the first 2 to 3 weeks.
Module content
Multilayer perceptrons (MLPs), convolutional neural networks (CNNs): basic operations, separable convolutions, skip connections. Graph convolutional neural networks (GCNNs) and graph attention networks (GATs). Architectural design principles influenced by modern deep learning models.
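As a rough illustration of why separable convolutions appear in this list, the parameter saving of a depthwise-separable layer over a standard convolution can be computed directly (the layer sizes below are hypothetical, chosen only for the arithmetic):

```python
# Parameter counts for a standard vs. depthwise-separable convolution
# (bias terms ignored; channel and kernel sizes are illustrative).

def standard_conv_params(c_in, c_out, k):
    """Weights of a standard k x k convolution: one k x k filter
    per (input channel, output channel) pair."""
    return c_in * c_out * k * k

def separable_conv_params(c_in, c_out, k):
    """Depthwise k x k convolution (one filter per input channel)
    followed by a 1 x 1 pointwise convolution mixing channels."""
    depthwise = c_in * k * k
    pointwise = c_in * c_out
    return depthwise + pointwise

c_in, c_out, k = 64, 128, 3
print(standard_conv_params(c_in, c_out, k))   # 73728
print(separable_conv_params(c_in, c_out, k))  # 576 + 8192 = 8768
```

Factoring the convolution this way is the design choice behind efficient architectures such as MobileNet, which the module covers later: here it cuts the weight count by roughly 8x.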
Learning paradigms including zero-shot and few-shot learning, domain generalisation, transfer learning, and cross-domain representation learning.
Attention mechanisms including self-attention and scaled dot-product attention. Transformer architectures including encoder-decoder frameworks and their role in representation learning. Key Transformer-based models such as BERT and GPT. Efficient CNN architectures such as MobileNet and hybrid CNN-Transformer design principles.
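The scaled dot-product attention named above can be sketched in a few lines of NumPy (a minimal single-head illustration; the array shapes are chosen only for the example):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (n_q, n_k) similarity scores
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V                             # weighted sum of the values

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))   # 4 queries, d_k = 8
K = rng.standard_normal((6, 8))   # 6 keys
V = rng.standard_normal((6, 8))   # 6 values
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8)
```

Self-attention is the special case where Q, K, and V are all projections of the same token sequence; the 1/sqrt(d_k) scaling keeps the softmax from saturating as the key dimension grows.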
Vision Transformers (ViTs): vanilla/isotropic ViTs, hierarchical and hybrid variants, and efficient ViT architectures. MLP-Mixer models and token-mixing alternatives for vision tasks. Unified design perspectives across CNNs, ViTs, and hybrid architectures.
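The ViT tokenisation step implied above, splitting an image into non-overlapping patches that become the Transformer's input tokens, can be sketched in NumPy (image and patch sizes are illustrative):

```python
import numpy as np

def patchify(img, patch):
    """Split an (H, W, C) image into non-overlapping flattened patches,
    as in the ViT tokenisation step (H and W must be divisible by patch)."""
    H, W, C = img.shape
    img = img.reshape(H // patch, patch, W // patch, patch, C)
    img = img.transpose(0, 2, 1, 3, 4)             # (H/p, W/p, p, p, C)
    return img.reshape(-1, patch * patch * C)      # (num_tokens, token_dim)

img = np.arange(32 * 32 * 3, dtype=np.float32).reshape(32, 32, 3)
tokens = patchify(img, patch=8)
print(tokens.shape)  # (16, 192): 4x4 patches, each 8*8*3 values
```

In a full ViT each flattened patch would then be linearly projected to the model dimension and combined with a position embedding; this sketch covers only the patch extraction.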
Self-supervised learning (SSL): introduction and motivation, pretext tasks, and representation learning frameworks. Contrastive learning methods and global embedding approaches to self-supervised representation learning such as SimCLR, MoCo-v3, BYOL, Barlow Twins, VICReg, and DINO.
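A minimal sketch of a contrastive objective in the SimCLR family (a simplified InfoNCE over one view direction; the batch size, embedding dimension, and temperature are illustrative, not the values used by any particular paper):

```python
import numpy as np

def info_nce_loss(z1, z2, tau=0.1):
    """Simplified InfoNCE: z1[i] and z2[i] embed two augmentations
    of sample i (the positive pair); z2[j], j != i, act as negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)   # L2-normalise
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                              # cosine similarities
    logits -= logits.max(axis=1, keepdims=True)           # stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))  # positives sit on the diagonal

rng = np.random.default_rng(1)
z1 = rng.standard_normal((8, 16))
z2 = z1 + 0.01 * rng.standard_normal((8, 16))   # near-identical views
# Aligned views should score a much lower loss than unrelated embeddings:
print(info_nce_loss(z1, z2) < info_nce_loss(z1, rng.standard_normal((8, 16))))
```

Minimising this loss pulls the two views of each sample together while pushing apart the other samples in the batch, which is the core mechanism behind SimCLR-style contrastive pretraining.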
Masked image modelling (MIM) and foundation models for vision. SiT (Self-supervised vIsion Transformer), the first working MIM method for transformers, which marked a milestone in computer vision and related applications by enabling self-supervised pretraining to outperform supervised pretraining. Methods that copy or extend the ideas of SiT, including SimMIM, MAE, BEiT, iBOT, and DINOv2. Extensions of masked modelling approaches to other application areas such as audio, medical imaging, and video. Applications of vision foundation models across domains.
Efficient fine-tuning strategies for large pretrained models, such as LoRA and AdaLoRA.
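The core idea of LoRA, freezing the pretrained weight and learning only a low-rank additive update, can be sketched in NumPy (dimensions and rank are illustrative):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=1.0):
    """LoRA: the pretrained weight W stays frozen and a low-rank
    update B @ A is learned, giving an effective weight W + alpha * B A."""
    return x @ (W + alpha * (B @ A)).T

d_out, d_in, r = 32, 64, 4               # rank r << min(d_in, d_out)
rng = np.random.default_rng(2)
W = rng.standard_normal((d_out, d_in))   # frozen pretrained weight
A = rng.standard_normal((r, d_in))       # trainable down-projection
B = np.zeros((d_out, r))                 # trainable up-projection, zero-init
x = rng.standard_normal((5, d_in))

# With B initialised to zero, the adapted layer matches the frozen one exactly,
# so fine-tuning starts from the pretrained behaviour.
print(np.allclose(lora_forward(x, W, A, B), x @ W.T))  # True
```

Only A and B are trained: here r * (d_in + d_out) = 384 parameters instead of the 2048 in W, which is why LoRA-style adapters make fine-tuning large models cheap.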
Multimodal vision-language models and foundation models. Motivation for multimodal learning and transformer-based multimodal architectures. Vision-language pretraining approaches including generative/autoregressive methods (UNITER, OSCAR, VinVL, ViLT, SimVLM) and contrastive methods (CLIP, ALIGN, LiT, DeCLIP, SLIP, UniCL, ImageBind). Vision-language models that attach vision adapters to large language models, with methods such as Flamingo, LLaVA, and CogVLM.
Applications across a broad range of domains including computer vision, natural language processing, large language models (LLMs), multimodal large language models (MLLMs), large multimodal models (LMMs), audio and speech processing, medical imaging, financial technologies (fintech), document analysis, and multimodal data understanding.
Assessment pattern
| Assessment type | Unit of assessment | Weighting |
|---|---|---|
| Coursework | COURSE PROJECT REPORT | 50 |
| Examination | Invigilated Computer-based Exam (2 hours) | 50 |
Alternative Assessment
None
Assessment Strategy
The assessment strategy for this module is designed to provide students with the opportunity to demonstrate both the theoretical knowledge and practical skills developed throughout the module. The aim of the assessment is to evaluate a range of machine learning concepts alongside the ability to apply these in practical contexts. Examples of concepts and skills assessed include:
- Knowledge of various machine learning theories and, importantly, their applications.
- Understanding of engineering methodologies for the design and implementation of applied machine learning algorithms.
- Knowledge of state-of-the-art solutions to machine learning and data analysis problems, including their limitations and the need for improved approaches.
- Skills in identifying, classifying, and evaluating the performance of applied AI and ML systems and components using analytical methods and modelling techniques.
- Programming skills and proficiency in applied AI and ML tools, development environments, libraries, and reusable components, including Python, NumPy, scikit-learn, and PyTorch. These tools are used for data processing, analysis, and implementation of machine learning algorithms.
Given the applied nature of the module, the summative assessment consists of the following components:
- Invigilated Computer-based Exam (2 hours) (50%)
Conducted during the exam period, this assessment evaluates individual understanding of key concepts, methods, and problem-solving skills covered in the module.
- Group Coursework (50%)
Students will work in groups on a course project selected from a range of provided topics (typically 10-12 options are provided). The coursework enables students to demonstrate their understanding of advanced machine learning algorithms within a chosen application area. Students are expected to design, implement, and analyse solutions, and to clearly communicate their findings.
Throughout the module, students will participate in a weekly two-hour lab session that supports learning through formative assessment and feedback. These sessions include exercises involving coding tasks related to data processing, analysis, pattern recognition, and classification.
The purpose of the lab sessions is to enable students to apply theoretical knowledge gained in lectures to practical problems across the core themes of the module. The coursework project further develops this by requiring students to explore advanced machine learning techniques within a specific application domain and to critically evaluate their results.
Students will receive formative feedback through question-and-answer sessions during lectures, lab problem sheets, supervised lab sessions, and feedback on their coursework.
Module aims
- The aim of this module is to provide an in-depth understanding of modern deep neural networks and their associated applications. These advanced AI and deep learning techniques are widely applicable across a broad range of domains, including computer vision, robotics, natural language processing, large language models, security and surveillance, medical image analysis, audio and speech processing, multimodal analysis, large-scale multimedia content retrieval, and financial technologies (fintech).
The module aims to equip students with both the theoretical foundations and practical skills required to design, implement, and evaluate state-of-the-art machine learning and deep learning models. Emphasis is placed on understanding the strengths and limitations of different approaches, as well as selecting appropriate methods for specific problem domains.
Ultimately, the goal is to prepare students to tackle real-world challenges by applying advanced AI and deep learning techniques in a rigorous, effective, and responsible manner.
- The module also aims to provide opportunities for students to learn about the Surrey Pillars listed below.
Learning outcomes
| Ref | Learning outcome | Attributes Developed | Ref |
|---|---|---|---|
| 001 | Demonstrate a clear understanding of the fundamental principles of machine learning and data analysis. | K | M1 |
| 002 | Analyse novel pattern recognition and data analysis problems and construct suitable statistical models for their solution. | KCPT | M2, M16, M17 |
| 003 | Select appropriate methods for problems in domains such as computer vision, medical imaging, natural language processing, audio and speech processing, and fintech, and interpret and evaluate the results of their application. | KCPT | M3, M4 |
| 004 | Formulate theoretical solutions to machine learning and AI problems across a broad range of applications, including computer vision, medical imaging, audio and speech processing, natural language processing, multimodal large language models, and fintech. | KCT | M3, M6 |
Attributes Developed
C - Cognitive/analytical
K - Subject knowledge
T - Transferable skills
P - Professional/Practical skills
Methods of Teaching / Learning
The module is delivered through a combination of lectures, pre-recorded video materials, and laboratory sessions, supported by independent study. Students are expected to engage with captured video content prior to selected lectures to enable deeper in-class discussion and active learning. Learning is assessed through a combination of coursework and an invigilated computer-based examination.
These teaching and learning methods are designed to achieve the following aims:
- To introduce advanced deep learning concepts and their applications, promoting a deep understanding of modern artificial intelligence techniques and associated engineering methodologies for the design and implementation of applied AI & ML. Topics include multilayer perceptrons, convolutional neural networks, separable convolutions, graph convolutional networks, attention mechanisms, transformers, graph attention networks, domain transfer and generalisation, zero-shot and few-shot learning, self-supervised learning, multimodal learning, large language models (LLMs), large multimodal models (LMMs), multimodal LLMs (MLLMs). The module also develops engineering principles for designing and integrating complex machine learning and AI systems, including aspects of verification and validation.
- To enable students to identify, classify, and evaluate the performance of applied AI & ML systems and components using analytical methods and modelling techniques. Students develop the ability to make informed performance trade-offs and apply appropriate evaluation metrics to predict and assess the behaviour of advanced AI, machine and deep learning systems. They are trained in the application of quantitative methods, mathematical modelling, and computational tools, including Python and PyTorch, to solve real-world data-driven problems. These skills are applied across domains including healthcare, security, finance, and entertainment.
- To develop programming proficiency and familiarity with modern AI & ML tools, development environments, libraries, and reusable components for AI & ML. Students gain practical experience using Python, PyTorch, and other relevant machine learning libraries to implement models for tasks including classification, segmentation, localisation, regression, self-supervised learning, and multimodal analysis.
Indicated Lecture Hours (which may also include seminars, tutorials, workshops and other contact time) are approximate and may include in-class tests where one or more of these are an assessment on the module. In-class tests are scheduled/organised separately to taught content and will be published on to student personal timetables, where they apply to taken modules, as soon as they are finalised by central administration. This will usually be after the initial publication of the teaching timetable for the relevant semester.
Reading list
https://readinglists.surrey.ac.uk
Upon accessing the reading list, please search for the module using the module code: EEEM068
Other information
- Digital capabilities
By introducing students to high performance computing servers needed to complete the coursework; by enriching their programming skills for the AI discipline via Python and the various deep learning libraries
- Employability
By teaching students practical deep learning AI & ML skills, which are among the most sought-after skillsets in modern AI; by talking through practical applications such as computer vision, medical image analysis, audio and speech processing, large language models, multimodal large language models, large multimodal models, and fintech; by introducing a practical coursework component that addresses specific industrial problems
- Global and cultural capabilities
By studying ethics-related topics around AI & ML and computer vision; by teaching computer vision techniques around data bias and domain adaptation, which are a cornerstone of deploying AI & ML and computer vision solutions across multiple cultures and at global scale.
Many of you are aware of the Babbage teaching compute cluster.
The projects in this module are advanced and practical. To support them, the University has built a GPU cluster (a mini-supercomputer) using a scheduling system that runs on the lab machines: whenever a lab machine is idle, it can be used as part of the cluster. You can apply for access here: https://docs.pages.surrey.ac.uk/research_computing/other_services/babbage/index.html. Access is managed by CS&EE, and students are onboarded regularly. If you are a student at Surrey, you can contact the team at babbage-support@surrey.ac.uk. These machines have 16GB GPUs, with a large number (in the hundreds) available, as well as some 8GB GPUs for smaller tests. You can use these machines effectively for your advanced course projects.
Programmes this module appears in
| Programme | Semester | Classification | Qualifying conditions |
|---|---|---|---|
| Artificial Intelligence MSc | 2 | Compulsory | A weighted aggregate mark of 50% is required to pass the module |
| Artificial Intelligence with Industrial Practice MSc | 2 | Compulsory | A weighted aggregate mark of 50% is required to pass the module |
| Data Science MSc | 2 | Optional | A weighted aggregate mark of 50% is required to pass the module |
| Data Science (Conversion) MSc | 2 | Optional | A weighted aggregate mark of 50% is required to pass the module |
| Computer Vision, Robotics and Machine Learning MSc | 2 | Optional | A weighted aggregate mark of 50% is required to pass the module |
Please note that the information detailed within this record is accurate at the time of publishing and may be subject to change. This record contains information for the most up to date version of the programme / module for the 2026/7 academic year.