Week 11, 30 November – 4 December
Welcome to week 11, the last week of the course! This week we look at sampling-based approximate Bayesian inference and variational inference.
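As a small taste of the sampling-based side of this week's material, here is a minimal illustrative sketch (not part of the assessed notes) of simple importance sampling for a toy conjugate Gaussian model, where the exact posterior is known so the Monte Carlo estimate can be checked. The model and all variable names here are made up for illustration.

```python
import random
import math

random.seed(0)

# Toy model: theta ~ Normal(0, 1) prior; one observation x = 1.5,
# with likelihood x | theta ~ Normal(theta, 1).
# Importance sampling: draw theta from the prior, weight by the likelihood.
x = 1.5

def likelihood(theta):
    # Unnormalized Normal(theta, 1) density evaluated at x.
    return math.exp(-0.5 * (x - theta) ** 2)

S = 100_000
thetas = [random.gauss(0.0, 1.0) for _ in range(S)]
weights = [likelihood(t) for t in thetas]
posterior_mean = sum(w * t for w, t in zip(weights, thetas)) / sum(weights)

# For this conjugate model the exact posterior is Normal(x/2, 1/2),
# so the estimate should be close to 0.75.
print(posterior_mean)
```

With enough samples the weighted average converges to the true posterior mean of 0.75; the week's notes cover when and why such estimators work, and what goes wrong in higher dimensions.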
As usual, your mark this week comes from: completing the discussion task (10%), attempting the in-note questions (20%), and this week’s assessed questions (70%). Full details are on the assessments page, including rules you must know.
Office hours:
You can meet us on MS Teams in the Meet-up channel of the MLPR 2020/21 Chat team on Friday at 9:30 AM and 4:30 PM UK time (GMT). Either Arno or Iain will be there. If you want to discuss something individually, please contact us by email: Arno aonken@inf.ed.ac.uk or Iain i.murray@ed.ac.uk.
Optional Event:
Ed Intelligence, the student-run machine learning society, has an event with a preview of talks from the upcoming NeurIPS conference, followed by a Gather.Town social.
- Thursday 3 Dec 4pm-5pm GMT Mini NeurIPS
Here is what you need to do in Week 11:
Any catch-up: If there are any threads on Hypothesis that didn’t get resolved (allow 48 hours), email Arno and Iain.
Lecture notes: Work through the Week 11 notes, answering all the questions. It’s fine to make mistakes here, but an honest attempt at these by Friday at 4pm (UK time) is required.
Question sheet: Do the week 11 question sheet. This question sheet is assessed and forms the bulk of this week’s marks.
Tutorial group discussions: You should post at least one thing that you’d find it useful to go over with your group and/or tutor. Or, if you’re on top of everything, state in advance that you would be happy to answer questions from the group.
This is your final meeting with your tutor and group (although you are welcome to continue to meet with your group throughout the year if you like!). Now is a good time to ask about and discuss anything that you didn’t get to the bottom of across the whole course.
If you are happy with the technical material, please let us know why you are interested in machine learning. Are there any applications or real-world challenges that you think machine learning might help with? Can you cast any aspects of these problems as regression, classification, and/or feature-learning (as with autoencoders)?
One source of different applications (or cut-down test-bed versions of them) is Kaggle competitions. Many of these are driven by corporate sponsors, but some are related to research problems and problems outside the technology sector. For example, the recent Cassava Leaf Disease Classification competition is run by the AI & Data Science research group at Makerere University. Someone who took a previous version of this class used to lead that group. Our hope is that some of you too will lead a broad variety of use cases in the future. We look forward to seeing your ideas. If you don’t have any ideas yourself, please pick a Kaggle competition and say what you think baseline approaches to it could be.
Create a short summary of your conclusions for your tutor, probably as bullet points. This summary is (lightly) assessed! See the group instructions for details on how to submit the group discussion report.
We recommend that you aim to finish the questions (in the notes and question sheet) and submit your discussion report by the end of Thursday. We will assess only what you have submitted by 4pm UK time on Friday.
Under the Informatics late policy, extensions are not available for weekly hand-ins. We expect many students to miss or under-perform on one hand-in, and will discount the one with the lowest mark. If you experience more significant disruption to your studies, you may need to submit a special circumstances application. Consult your Personal Tutor or Student Support team. Lecturers on a course cannot make allowances outside these procedures.