Blog

The Adversarial Classroom

Throughout my academic career, I have taken a wide range of courses, both as an undergraduate in Iran and as a graduate student at Penn. A common theme across most of these classes was what I term the adversarial classroom. Regardless of students' interest in the subject matter, the primary motivation usually revolves around maximizing grades: homework completion, exam preparation, and class attendance are all aimed at scoring higher. Deadlines are set throughout the semester, and students must submit their homework before them, regardless of how much time it consumes or how much they actually gain from solving the questions.

The student and the teacher are cast as adversaries. As a student, my goal is to maximize my grade, while the instructor aims to prevent that from happening 🙂

This dynamic is rarely benign. In my own experience, I often found myself doing less once I knew I was already receiving a good grade. I over-optimized and found shortcuts, learning less while maintaining my grades. The adversarial system is misaligned: a misaligned system pursues some objectives, but not the intended ones, at least not fully [1].

To address this issue, I propose the following, while admitting that I am no expert in pedagogy.

1- If the grade comprises n components, the instructor randomly chooses n numbers that sum to 100 and uses them as the weights of the components. The instructor never announces these weights. Students, unaware of them, will strive for excellence in every single component, fostering a holistic approach to learning (see the sketch after this list).

2- No deadlines are set for homework. Homework is released throughout the semester, and submitting only a fraction of the questions is enough, e.g., 70% of all released questions. Students get to select the questions they find most beneficial and submit them by the semester's end.
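For concreteness, here is a minimal sketch of how the hidden weights in the first proposal could be drawn. The uniform-over-the-simplex choice (a symmetric Dirichlet) and the function name are my own assumptions; the post only requires random numbers that sum to 100.

```python
import numpy as np

def draw_hidden_weights(n_components: int, seed: int | None = None) -> np.ndarray:
    """Draw n_components random weights that sum to 100.

    A symmetric Dirichlet(1, ..., 1) draw is uniform over the probability
    simplex; scaling by 100 turns it into percentage weights. The uniform
    choice is an assumption, not something specified in the post.
    """
    rng = np.random.default_rng(seed)
    return rng.dirichlet(np.ones(n_components)) * 100

# Example: a course graded on homework, midterm, project, and final.
weights = draw_hidden_weights(4, seed=0)
print(weights, weights.sum())  # four weights, summing to 100
```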

[1] Russell, Stuart J.; Norvig, Peter (2021). Artificial Intelligence: A Modern Approach (4th ed.). Pearson. pp. 5, 1003.

Maximum of Sub-Gaussian Random Variables

In this note, we first prove a bound on the expected maximum of a sequence of weighted sub-Gaussian random variables. Next, we show an upper bound on the expected value of the maximum of a finite number of sub-Gaussian random variables. Finally, we prove a high-probability version of these results.
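As a point of reference, the textbook form of the unweighted bounds reads as follows; this is the standard statement for zero-mean sigma-sub-Gaussian variables, and the note's weighted version and constants may differ.

```latex
% Standard bounds for zero-mean \sigma-sub-Gaussian X_1, \dots, X_n,
% i.e., \mathbb{E}[e^{\lambda X_i}] \le e^{\lambda^2 \sigma^2 / 2} for all \lambda \in \mathbb{R}.
% Expected maximum:
\[
  \mathbb{E}\Big[\max_{1 \le i \le n} X_i\Big] \;\le\; \sigma \sqrt{2 \log n}.
\]
% High-probability version, via a union bound: with probability at least $1-\delta$,
\[
  \max_{1 \le i \le n} X_i \;\le\; \sigma \sqrt{2 \log (n/\delta)}.
\]
```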

Old Slides

Here is a list of some of my old slides. Most of them were made when I was an undergraduate or during the COVID-19 pandemic.

  • Information-Theoretic Analysis of Learning Algorithms [Slides]
  • Non-Parametric Least Squares [Slides]
  • LASSO is not Fully Bayesian [Slides]
  • Online Learning: What is Learnable? [for high school students] [Slides]
  • Subjective Theory of Probability: Dutch Book (de Finetti) Theorem [Slides]
  • Algorithmic Causal Inference [Slides]
  • Online Learning and Online Convex Optimization [with Mahdi Sabbaghi] [Slides]
  • Blind Separation of Nonlinear Mixtures of Stochastic Processes [Part 1] [Part 2]
  • Nonlinear ICA of Temporally Dependent Stationary Sources [Slides]