2026 Spring Meeting – Student Talks

St. John Fisher University

Tentative Time Slots: Saturday, April 18 from 1:00-3:00 PM

Each student talk is allotted 20 minutes, with 5 additional minutes for Q&A and 5 minutes for transition between talks.

Note on Classroom Technology for Contributed and Student Talks: There are computers and projectors in each room. Laptops can be easily connected via HDMI (no need to bring a cable). Laptops whose only output source is USB-C, however, will require an HDMI adaptor (not provided).

Student talk time slot assignments are currently tentative! Please reach out to the Program Chair if you need a certain time slot. 

MAA Seaway Section Guidelines for Speakers

MAA Seaway Section Guidelines for Session Moderators

Saturday – Apr 18

Location:

  1. Time:
    1:00 pm – 1:20 pm
    Title:
    Counting What Counts: Open-Source Intelligence Data and the Math Behind Better Statistics
    Speaker:
    Chloe Quinn (St. John Fisher University)
    Abstract:

    How many refugee shelters are actually operating in the United States right now? The honest answer is that nobody knows for sure. Official counts rely on self-reporting, outdated surveys, and incomplete datasets. But the raw information to build better estimates already exists in the public domain: satellite imagery, open traffic camera feeds, and freely available geospatial data. This talk explores the mathematics behind turning that open-source intelligence into meaningful statistics. We’ll look at how spatial sampling, image classification, and Bayesian estimation can work together to produce population-level numbers that challenge or supplement the figures we’re asked to trust. The tools are accessible, the data is public, and the math connects directly to questions that shape policy and daily life. Whether your background is in pure math, statistics, computer science, or you’re just skeptical of a number you read in the news, this talk is for you.
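    The pipeline this abstract sketches (spatial sampling, image classification, Bayesian estimation) can be illustrated with a toy calculation. All the numbers below are invented for illustration, and this is a sketch of the general idea rather than the speaker's method: a uniform Beta prior over a per-tile occupancy rate turns a small classified sample into a population-level estimate.

    ```python
    # Hypothetical survey: the region is divided into 10,000 equal-area tiles;
    # we sample 200 tiles uniformly at random, and an image classifier flags
    # shelters in 14 of them. (All numbers are made up for illustration.)
    total_tiles = 10_000
    sampled = 200
    flagged = 14

    # Bayesian estimate of the per-tile occupancy rate with a uniform
    # Beta(1, 1) prior: the posterior is Beta(1 + flagged, 1 + sampled - flagged).
    alpha = 1 + flagged
    beta = 1 + sampled - flagged
    posterior_mean = alpha / (alpha + beta)

    # Scale the posterior rate up to a population-level count.
    estimated_total = posterior_mean * total_tiles
    print(f"posterior mean rate: {posterior_mean:.4f}")
    print(f"estimated shelters:  {estimated_total:.0f}")
    ```

    A full treatment would also propagate classifier error rates and report a credible interval from the Beta posterior rather than a point estimate.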

  2. Time:
    1:30 pm – 1:50 pm
    Title:
    Extending Clustering Beyond Euclidean Spaces
    Speakers:
    Karthik Mattu (Rochester Institute of Technology), Adit Dhall, Thejas Nagesh Gowda
    Abstract:

    The K-means algorithm implicitly assumes the data live in a Euclidean space, but what happens when they don't? Data on spheres, hyperbolic spaces, tori, or mixed continuous-categorical domains each require distance metrics and centroid computations adapted to the underlying geometry. In this presentation we walk through a series of self-contained computational experiments comparing clustering algorithms across five geometric settings: Euclidean ($\mathbb{R}^n$ with Mahalanobis/Minkowski distances), spherical ($S^2$ with Haversine distance), hyperbolic (Poincar\'{e} disk), toroidal (periodic flat torus), and mixed-type (Gower distance). For each geometry, we implement both K-means (hard assignments) and Expectation--Maximization (EM) with geometry-appropriate mixture models---multivariate Gaussians, von Mises--Fisher distributions, wrapped normals, products of von Mises distributions, and Gaussian-categorical mixtures, respectively. Each experiment is built from scratch in Python with rich visualizations, emphasizing how the choice of metric reshapes cluster boundaries, centroids, and soft assignments. We highlight the key insight that K-means is a special case of EM, and show how this generalization interacts with non-Euclidean structures.
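    As a small illustration of how the metric reshapes cluster assignments (a sketch with invented coordinates, not the speakers' code), the Haversine distance handles the wrap-around at the antimeridian, where naive Euclidean distance on raw (lat, lon) coordinates picks the wrong nearest centroid:

    ```python
    import math

    def haversine(p, q):
        """Great-circle distance in radians between two (lat, lon) points in degrees."""
        lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
        a = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * math.asin(math.sqrt(a))

    point = (0.0, 179.0)                       # just west of the antimeridian
    centroids = [(0.0, -179.0), (0.0, 170.0)]  # hypothetical cluster centers

    # Euclidean distance on raw coordinates sees |179 - (-179)| = 358 degrees,
    # even though those two points are only 2 degrees of longitude apart.
    euclid = [math.dist(point, c) for c in centroids]

    # Haversine respects the spherical geometry and assigns the point to the
    # centroid across the date line.
    sphere = [haversine(point, c) for c in centroids]
    print("Euclidean nearest:", centroids[euclid.index(min(euclid))])
    print("Haversine nearest:", centroids[sphere.index(min(sphere))])
    ```

    The same nearest-centroid step, with the metric swapped out, is all that changes in the assignment phase of K-means across the five geometries.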

  3. Time:
    2:00 pm – 2:20 pm
    Title:
    Mathematics Behind Netflix Recommendations
    Speaker:
    Amanda Donovan (SUNY Geneseo)
    Abstract:

    Recommender systems used by streaming platforms such as Netflix can be formulated as a matrix completion problem. User–movie ratings form a large sparse matrix in which most entries are unknown. A common approach is to approximate this matrix by a low-rank factorization, representing users and movies in a low-dimensional latent space. The rating matrix is approximated as a product of two smaller matrices describing hidden user preferences and movie attributes. These factors are computed by solving a least squares optimization problem using the Alternating Least Squares (ALS) algorithm, which iteratively updates user and movie vectors while minimizing reconstruction error over the observed ratings. We illustrate the method through a small numerical example computed in Mathematica, demonstrating how low-rank approximation can be used to estimate missing ratings and generate recommendations.
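    The alternating update the abstract describes has a particularly simple closed form at rank 1, where each least-squares subproblem is a one-variable projection. The ratings below are invented, and the talk's own example is in Mathematica; this is just a sketch of the alternating structure, not the speaker's code:

    ```python
    # Rank-1 ALS on a tiny 4x3 rating matrix; None marks unobserved entries.
    R = [
        [5,    4,    None],
        [4,    None, 1   ],
        [None, 1,    5   ],
        [1,    2,    4   ],
    ]
    m, n = len(R), len(R[0])
    u = [1.0] * m          # latent user factors
    v = [1.0] * n          # latent movie factors

    for _ in range(50):
        # Fix v; the least-squares solution for each u_i over that user's
        # observed ratings is a simple ratio.
        for i in range(m):
            num = sum(R[i][j] * v[j] for j in range(n) if R[i][j] is not None)
            den = sum(v[j] ** 2 for j in range(n) if R[i][j] is not None)
            u[i] = num / den
        # Fix u; solve for each v_j symmetrically.
        for j in range(n):
            num = sum(R[i][j] * u[i] for i in range(m) if R[i][j] is not None)
            den = sum(u[i] ** 2 for i in range(m) if R[i][j] is not None)
            v[j] = num / den

    # The completed matrix u_i * v_j fills in the missing ratings.
    pred = [[u[i] * v[j] for j in range(n)] for i in range(m)]
    ```

    At higher rank each update becomes a small linear system per user and per movie, but the alternating fix-one-solve-the-other structure is identical.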

  4. Time:
    2:30 pm – 2:50 pm
    Title:
    Random Matrix Theory for Analyzing Structure in Trained Neural Networks
    Speaker:
    Gavin George (SUNY Geneseo)
    Abstract:

    Random matrix theory (RMT) is a remarkable field with applications across many disciplines. The purpose of this talk is to give a rudimentary introduction to RMT and then to demonstrate, on a small scale, that neural networks, specifically perceptrons, learn after several iterations of input data. To show this, the initial weight matrix will be analyzed after each iteration, and its eigenvalues will yield valuable insight into how the matrix is changing. Weight matrices are a key component in producing more efficient machine learning models and are generally at the heart of neural-network optimization. They are initially constructed by drawing each element from a random distribution; for this project, the standard normal distribution is used to construct the weight matrix of a two-layer neural network. Because of this initial random structure, when training data are applied and the weight matrices are reconfigured, the eigenvalue distribution deviates from the Marchenko-Pastur distribution, and structure begins to form in the spectrum. As this structure forms, the initial randomness of the matrix fades, making it clear that after several iterations the neural network is learning.
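    One spectral statistic from this setup can be checked without an eigensolver: for a p x n weight matrix W with i.i.d. standard-normal entries, the mean eigenvalue of (1/n) W W^T equals trace/(np), i.e. the mean squared entry of W, and the Marchenko-Pastur law predicts this first moment is sigma^2 = 1 with the spectrum supported on [(1 - sqrt(q))^2, (1 + sqrt(q))^2] for aspect ratio q = p/n. The layer sizes below are hypothetical; this is a sketch of the baseline random spectrum, not the speaker's experiment:

    ```python
    import math
    import random

    random.seed(0)
    p, n = 100, 400                      # hypothetical layer sizes
    q = p / n                            # aspect ratio of the weight matrix

    # i.i.d. standard-normal initial weights, as in the abstract.
    W = [[random.gauss(0.0, 1.0) for _ in range(n)] for _ in range(p)]

    # trace((1/n) W W^T) / p is the mean eigenvalue; it equals the mean
    # squared entry of W, so no eigendecomposition is needed.
    mean_eig = sum(w * w for row in W for w in row) / (n * p)

    # Marchenko-Pastur support edges for variance sigma^2 = 1.
    edge_lo = (1 - math.sqrt(q)) ** 2
    edge_hi = (1 + math.sqrt(q)) ** 2
    print(f"mean eigenvalue: {mean_eig:.3f} (MP first moment: 1)")
    print(f"MP support: [{edge_lo:.3f}, {edge_hi:.3f}]")
    ```

    After training, eigenvalues escaping this predicted support are exactly the kind of emerging structure the talk describes.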