
Modeling Processes of Neighborhood Change (MPONC)

Reference paper

@misc{mori2024modelingprocessesneighborhoodchange,
      title={Modeling Processes of Neighborhood Change}, 
      author={J. Carlos Martínez Mori and Zhanzhan Zhao},
      year={2024},
      eprint={2401.03307},
      archivePrefix={arXiv},
      primaryClass={cs.MA},
      url={https://arxiv.org/abs/2401.03307}, 
}

Setup

cd 25Sp-MPONC/modeling_processes_of_neighborhood_change_new
conda create -n mponc python=3.10.16
conda activate mponc
pip install -r requirements.txt
python main.py

Abstract

This research project simulates the impact of the Atlanta Beltline on the surrounding neighborhoods using no-regret dynamics from game theory. The simulation models agent movement across census tracts within Atlanta, GA's Fulton and DeKalb counties, with agents seeking to move optimally (prioritizing real-life incentives) based on several census tract attributes.

Intro and Description

This project is based on the reference paper by Dr. Martínez Mori and Dr. Zhao, which aims to address the following:

  • How does the layout of transportation infrastructure affect the demographics of nearby neighborhoods?
  • Does the creation of this infrastructure actually benefit everyone equally; is it fair?
  • Can we predict the effects on surrounding communities before these structures are actually built?

These questions are primarily motivated by gentrification, an issue prevalent in many major cities. We use concepts from game theory, specifically no-regret dynamics, to simulate the effects of the Atlanta Beltline on gentrification. To summarize our approach with no-regret dynamics:

  • People, or 'agents', repeatedly select census tracts for relocation, starting from an initially uniform probability distribution. A 'cost' value, computed from the tract's attributes, is assigned to the chosen action, and the agent subtracts this cost from the chosen tract's selection probability.
  • 'Cost' is a function of region affordability, upkeep, and attractiveness.
  • The higher the cost, the less likely an agent is to visit that census tract in the future.
  • This process is repeated until the probability distribution over census tracts converges - an equilibrium is reached, and agents have successfully 'learned' the attractiveness of each tract. Further actions make a negligible difference to the probability distribution.
  • This semester, we have changed the way we compute simulation convergence by using the maximum best-action regret across all agents between two sliding windows of recent agent‑distributions:
\[ r = \frac{1}{T} \big(\sum_{t = 1}^{T} c_t(a_t) - \sum_{t = 1}^{T} c_t(a_t^*) \big) \]

where \(c_t(a_t)\) denotes the cost of action \(a_t\) and \(a_t^*\) is the optimal action in hindsight at timestep \(t\). If \(r \le \epsilon\) (default \(\epsilon = 0.01\)), the system is deemed converged and the run halts automatically. All thresholds are configurable in config.py. For practical purposes, an upper limit of \(T = \frac{4 \ln{A}}{\epsilon^2}\) timesteps is set for each simulation, where \(A\) is the size of the action space.
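A rough sketch of one update step and the stopping quantities, assuming full cost feedback each timestep (the function names and the exact multiplicative-weights form are illustrative, not the repository's implementation):

import numpy as np

def mw_update(probs, costs, epsilon=0.01):
    # Multiplicative-weights step: tracts with higher cost lose probability.
    # probs: (A,) current distribution over tracts; costs: (A,) costs in [0, 1].
    probs = probs * (1.0 - epsilon) ** costs
    return probs / probs.sum()

def average_regret(cost_history, chosen):
    # Average regret per the formula above.
    # cost_history: (T, A) per-timestep tract costs; chosen: (T,) actions taken.
    T = len(chosen)
    incurred = cost_history[np.arange(T), chosen].sum()  # sum_t c_t(a_t)
    hindsight = cost_history.min(axis=1).sum()           # sum_t c_t(a_t*)
    return (incurred - hindsight) / T

def horizon(num_actions, epsilon=0.01):
    # Practical cap T = 4 ln(A) / eps^2 on the number of timesteps.
    return int(np.ceil(4.0 * np.log(num_actions) / epsilon ** 2))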

Cost Function

Every agent evaluates a tract with a cost defined as

cost = 1 − (affordability × upkeep × attractiveness)

| Factor | Scale | Quick intuition |
|---|---|---|
| Affordability | 0 or 1 | 1 if the tract still has room, or if the agent is selected from an income-weighted lottery when the tract is overpopulated; 0 otherwise. |
| Upkeep | 0–1 | How “maintained” and inhabited the tract is, based on the fraction of its resident capacity currently filled. |
| Attractiveness | 0–1 | How nice the tract is to live in, combining amenity access and community similarity (see sub-components below). |

Attractiveness = upkeep × amenity_access × beltline_factor

| Sub‑component | Range | What it captures |
|---|---|---|
| Amenity access | 0–1 | Density of key POIs (restaurants, shops, transit stops, etc.), spatially smoothed over neighbouring tracts and modified by BeltLine factor β. |
| Community | 0–1 | How well an agent’s income matches incomes in the surrounding tracts; closer ⇒ higher score. |
| BeltLine factor β | ≥ 1 | Extra accessibility for tracts in the BeltLine catchment area: β = B (max boost) on the BeltLine, tapering linearly down to 1 at radius R, and staying 1 outside R. |

* Amenity list adapted from 24Sp‑Mobility‑Seg; we omit several tags such as “shed”, “guardhouse”, “ferry_terminal”, “garages”, and “bridge”.

Spatial Smoothing of Amenity Data

To ensure that the amenity component of a tract's Attractiveness score accurately reflects the true environment and facilitates realistic agent decision-making, we employ a spatial smoothing strategy. Direct raw counts from OpenStreetMap (OSM) frequently register zero amenities in certain census tracts. Allowing these zero values to persist would be misleading, as agents rarely perceive their neighborhood's amenities as strictly limited to their immediate tract boundary; instead, they consider nearby shops, parks, and transit.

To counteract this data sparsity and noise, we use Queen contiguity weights to define the neighborhood structure. This definition is inclusive, considering two tracts to be neighbors if they share any common border or a single corner point. Leveraging the PySAL library, we then compute a spatial lag on the raw amenity densities. Conceptually, this process calculates a weighted average for each tract's amenity score, blending its own raw density with the densities of all its Queen-contiguous neighbors. This produces a smoothed, more plausible amenity distribution across the region, which is essential for governing agents' relocation decisions and ultimately modeling gentrification dynamics accurately.
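A minimal sketch of this smoothing step, assuming a GeoDataFrame named tracts with an amenity_density column (the file path and the blending weight alpha are hypothetical):

import geopandas as gpd
from libpysal.weights import Queen, lag_spatial

tracts = gpd.read_file("tracts.shp")   # TIGER/Line tract geometries (placeholder path)
w = Queen.from_dataframe(tracts)       # neighbors share any common border or corner point
w.transform = "r"                      # row-standardize: the lag becomes a neighbor average

neighbor_avg = lag_spatial(w, tracts["amenity_density"])

# Blend each tract's own raw density with its neighbors' weighted average
alpha = 0.5                            # hypothetical blending weight
tracts["smoothed_density"] = alpha * tracts["amenity_density"] + (1 - alpha) * neighbor_avg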

Implemented Amenities & weights (OpenStreetMap labels):

AMENITY_TAGS = {
      'amenity': ("bus_station|cafe|college|fast_food|food_court|fuel|library|restaurant|train_station|university|parking|school|hospital", 3),
      'shop': ("supermarket|food|general|department_store|mall|wholesale", 3),
      'landuse': ("residential|industrial|commercial|retail", 2)
}
* We operationalize β by giving tracts within 800 m of the BeltLine a +20 % boost (β = 1.20); the boost then tapers linearly to +10 % (β = 1.10) at 1.6 km and falls to β = 1.00 beyond that distance. This β is applied as an exponent: each tract’s amenity density is raised to the power \(1/\beta_c\). This provides a stronger relative boost to low-amenity tracts and milder gains for amenity-rich tracts, approximating the observed diminishing-returns effect of BeltLine proximity on nearby housing prices.
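A sketch of the taper and the exponent application under this reading of the numbers (the function and parameter names are hypothetical, not taken from config.py):

def beltline_beta(dist_m, b_inner=1.20, b_outer=1.10, inner_m=800.0, outer_m=1600.0):
    # Beta = b_inner within inner_m of the BeltLine, tapering linearly to
    # b_outer at outer_m, and 1.0 beyond that distance.
    if dist_m <= inner_m:
        return b_inner
    if dist_m <= outer_m:
        frac = (dist_m - inner_m) / (outer_m - inner_m)
        return b_inner + (b_outer - b_inner) * frac
    return 1.0

def boosted_amenity(density, beta):
    # Beta applied as an exponent: x ** (1 / beta) lifts values in (0, 1),
    # with a stronger relative boost for low-amenity tracts.
    return density ** (1.0 / beta)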

Community Score (Local Moran’s I)

Due to difficulties with Moran's I (see the Spatial Autocorrelation section below), we currently use a simplified implementation of the Community Score calculation. Community ties \(\mathrm{Community}_a(c)\) model socioeconomic affinity by measuring how closely a specific agent's endowment matches the endowments of nearby agents. For each agent, we compute a community endowment, defined as the mean income of the tract the agent currently occupies together with all tracts adjacent to it. The model uses PySAL to calculate the average income of the tract and its neighbors for whichever year is selected.

The community endowment is computed as:

\[ \bar{E}_c = \frac{1}{N_c} \sum_{i \in \{c \cup \text{Neighbors}(c)\}} E_i \]

The agent’s community-tie value is then calculated as the difference between the agent’s own endowment and this community endowment:

\[ \mathrm{Community}_a(c) = E_a - \bar{E}_c \]

Agents whose incomes closely match the incomes of their surrounding tracts receive stronger community-tie scores, while those whose incomes differ substantially receive weaker ties. A closer match yields a difference near zero, which (after normalization) maps to a score near 1 and hence a lower cost.
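A minimal sketch of the two equations above, assuming an adjacency map and per-tract mean incomes (names are hypothetical):

import numpy as np

def community_endowment(tract, neighbors, mean_income):
    # Mean income over the tract and its adjacent tracts (E-bar_c above).
    # neighbors: dict tract_id -> list of adjacent tract ids
    # mean_income: dict tract_id -> mean income of agents in that tract
    members = [tract] + list(neighbors[tract])
    return float(np.mean([mean_income[t] for t in members]))

def community_tie(agent_endowment, tract, neighbors, mean_income):
    # Raw tie value E_a - E-bar_c; a value near 0 indicates a close match,
    # which is later normalized so a close match maps to a score near 1.
    return agent_endowment - community_endowment(tract, neighbors, mean_income)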

For a deeper explanation of how Moran's I works, why it isn't implemented in the model yet, and the next steps, please see the Spatial Autocorrelation section of this README.

Weighting amenity access vs community (λ)

A tunable parameter λ ∈ [0, 1] lets you emphasize either amenity access (high λ) or community match (low λ).
Internally we rewrite

cost = 1 − Affordability × Upkeep × (λ × AmenityAccess) × ((1 − λ) × Community)
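Transcribed directly into code as a sketch (the parameter names are hypothetical):

def tract_cost(affordability, upkeep, amenity_access, community, lam=0.5):
    # All inputs lie in [0, 1], except affordability, which is 0 or 1.
    # Higher lam emphasizes amenity access; lower lam emphasizes community match.
    attractiveness = (lam * amenity_access) * ((1.0 - lam) * community)
    return 1.0 - affordability * upkeep * attractiveness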

TIGER/Line geodatabase shapefiles:

(figure: TIGER/Line census-tract shapefiles)

Project status

Outputs & configuration

Our code outputs a GIF to visualize agent behavior over time. Each circle represents the centroid of a census tract - green signifying tracts within the Atlanta Beltline - and the encircled number is the agent population. Our code also outputs a CSV file containing all of the simulated data at every individual timestep.

  • Data contained in the CSVs: census tract name, agent population, raw average income, census-reported average income, normalized average incomes, and amenity density.
  • Note: a 'timestep' refers to a single agent action (relocation); 20,000 timesteps means the agents relocate a total of 20,000 times during the simulation.

GIF

This GIF shows the behavior of 1,000 agents over 20,000 timesteps, with frames captured every 400 timesteps (rho = 1, alpha = 0.25).

(animated GIF of agent populations per tract)

Runtimes

  • Simulation (1,000 agents, 349 census tracts): ~3 min
  • The graph, amenities, and centroids are cached after the first build.

Census-based approach

Our project utilizes US Census data in two ways:

  • The geographical regions our agents inhabit correspond directly to US census tracts (the model can instead use any other census-defined geographic unit, e.g. ZIP codes, housing districts, or school districts).
  • Each 'agent' is assigned a 'wealth' value in our simulation. We create this distribution of wealth using Census data (population and median incomes) to represent real-world demographics.
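One plausible way to construct that wealth distribution, as a sketch: tracts are chosen in proportion to census population, and each agent's income is drawn around its tract's median income (the lognormal spread is an assumption, not necessarily the repository's method):

import numpy as np

rng = np.random.default_rng(0)

def sample_agents(populations, median_incomes, n_agents=1000, sigma=0.5):
    # populations, median_incomes: per-tract arrays from the census tables.
    populations = np.asarray(populations, dtype=float)
    p = populations / populations.sum()        # population-weighted tract choice
    tracts = rng.choice(len(populations), size=n_agents, p=p)
    medians = np.asarray(median_incomes, dtype=float)[tracts]
    incomes = medians * rng.lognormal(mean=0.0, sigma=sigma, size=n_agents)
    return tracts, incomes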

Atlanta Beltline in our Simulation

We automate the process of labelling regions as 'within the Atlanta Beltline's catchment zone' by using commuting paths from OpenStreetMap that correspond to the Atlanta Beltline - namely, a bike trail and a railway. To experiment with a different beltline - say, one that spans Atlanta horizontally, or one expanded north by x miles - we would acquire the OpenStreetMap IDs of existing paths (bike trails, walking paths, roads, etc.) corresponding to the desired beltline and paste them into config.py. Alternatively, we can create such a path ourselves in OpenStreetMap. Any region containing segments of these trails then automatically receives a "Beltline Score" that boosts its attractiveness as perceived by agents.

In config.py, the bike trail and railroad OpenStreetMap IDs for the Beltline are as follows:

""" Beltline 'relation' IDs from Open Street Map """
RELATION_IDS = [8408433, 13048389]  # bike trail, railroad

Compare with Atlanta Beltline geography:

Adapting the Model to Other Cities

Although Atlanta serves as our case study, every pipeline stage—census shapefiles, OSM‑derived amenities, cost parameters, and even the BeltLine decision‑agent—can be swapped for a different region:

  1. Geometry & Demographics
    • Replace the Fulton/DeKalb TIGER/Line shapefiles with those of your target city.
    • Point the MEDIAN_INCOME_URL and POP_URL in config.py to that city’s American Community Survey "ACS" tables.

  2. Transit‑Ring Definition
    • Identify (or sketch in OSM) the planned loop / BRT corridor / rail spur you want to study, then list its OSM relation IDs in config.py.
    • The same β‑taper and DecisionAgent logic will assign accessibility boosts and density bonuses around the new corridor.

  3. Policy Levers
    • Tweak RHO_SCALAR to explore how strong the up‑zoning response should be for the above transit ring.

Because the simulation is purely data‑driven, you can rapidly prototype “what‑if” BeltLine analogues for anywhere with open census and OSM data while measuring potential community shifts/gentrification before shovels hit the ground. By changing the above URL, we get the following:

(figures: example simulation output for a different region)
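For concreteness, a hypothetical config.py fragment for a new city; every value below is a placeholder, not a real URL or relation ID:

# config.py -- hypothetical values for a different target city
MEDIAN_INCOME_URL = "https://api.census.gov/data/.../acs/acs5?..."  # ACS median-income table
POP_URL = "https://api.census.gov/data/.../acs/acs5?..."            # ACS population table
RELATION_IDS = [1234567, 7654321]   # OSM relation IDs of the corridor to study (placeholders)
RHO_SCALAR = 1.5                    # strength of the up-zoning response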

Policy Scenarios: Vertical vs Horizontal Scaling

The simulation now supports two high-level policy experiments:

1. Vertical Scaling — Decision-Making Agent learns \(\kappa\)

A dedicated DecisionAgent treats how aggressively to expand housing capacity near the Beltline as a no-regret learning problem:

  • Action space: \( \kappa \in \{0.00, 0.01, \dots, 1.00\} \), sampled each timestep by multiplicative weights (no-regret dynamics).

  • Base capacity curve:

\[ U_c = 1 + \frac{\texttt{beltline\_score}_c - \texttt{BL\_LOW}}{\texttt{BL\_HIGH} - \texttt{BL\_LOW}} \times \bigl(\texttt{RHO\_SCALAR}_{max} - 1\bigr) \]

  • Effective multiplier:

\[ \rho_c^{\text{new}} = \rho_c^{\text{base}} \times \left[1 + \kappa\, M_c\right] \]

where \(M_c\) is the maximum permissible percent increase for tract \(c\) based on distance to the Beltline.
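A sketch of the two formulas above in code (the names follow the equations; the actual DecisionAgent.py code may differ):

import numpy as np

def base_capacity(beltline_score, bl_low, bl_high, rho_scalar_max):
    # U_c: interpolates from 1 (score = BL_LOW) up to RHO_SCALAR_max (score = BL_HIGH).
    return 1.0 + (beltline_score - bl_low) / (bl_high - bl_low) * (rho_scalar_max - 1.0)

def effective_rho(rho_base, kappa, max_increase):
    # rho_new = rho_base * (1 + kappa * M_c), with max_increase playing the
    # role of M_c, the largest permissible percent increase per tract.
    return np.asarray(rho_base) * (1.0 + kappa * np.asarray(max_increase))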

Two alternative utility metrics guide learning:

| UTILITY_METRIC | Algorithm maximises | Real‑world analogy |
|---|---|---|
| 0 (default) | Average utility (mean well‑being across all agents) | “Greatest good for the greatest number.” |
| 1 | Minimum utility (well‑being of the worst‑off agent) | Rawlsian / max‑min fairness. |

The DecisionAgent reinforces actions that raise the chosen utility, gradually converging to an ideal \(\kappa\) for the current policy goal.

The concrete zoning‑bonus percentages and distance bands (+20 %, +10 %, 800 m, 1600 m) are the literal numbers encoded in DecisionAgent.py. Feel free to edit them in config.py.

2. Horizontal Scaling (complete BeltLine from day 0)

All census tracts whose centroids fall inside the Beltline’s catchment radius begin with β_c > 1, following the same taper function used in the baseline model.
Unlike vertical scaling, capacities do not change—only the geography of Beltline-induced accessibility (amenity-attractiveness) changes.

This models a scenario where the entire Beltline loop has been completed.

Strengths and Weaknesses

Strengths

Our approach is highly modular: the code can easily be run on other regions, with customizable 'Beltlines' and parameters. Furthermore, our approach is grounded in an established model of adaptive human behavior (no-regret dynamics) rather than pre-defined, non-adaptive agent behavior. We are also able to produce dynamic visuals (GIFs).

Weaknesses

Our simulation also assumes no immigration or emigration in Atlanta, since we simulate a fixed number of agents within our defined area. We also limit transportation choices to cars and public transportation, even though other modes of transport (walking or biking) are popular. Additionally, runtimes can be relatively long (several minutes) due to the computationally expensive nature of the simulation; ideally, a run would finish in seconds.

Next Steps

We hope to improve the readability of our GIFs, improve the runtime of the simulation, and include additional visualizations of our results to better communicate our analysis during discussion. We also hope to rerun Sobol sensitivity analysis of our parameters.

Spatial Autocorrelation

This section explains the conceptual and applied framework for spatial autocorrelation in the MPONC simulation. I will begin by explaining the concepts necessary to understand spatial autocorrelation, specifically Local Moran's I. It is important to understand that spatial autocorrelation is used to calculate the community score; if you are unsure what the community score represents or why it is justified, see the main README above. This section is specifically about the community score's implementation.


The Math

Local Moran's I

Spatial autocorrelation is a geographic concept for finding clusters of similarity or dissimilarity in geographic data. The most common formula is Local Moran's I:

\[ I_i = \frac{(x_i - \bar{x})}{m_2} \sum_{j=1}^{n} w_{ij} (x_j - \bar{x}) \]

Where:

  • \( x_i \): value of the variable at location i (agent endowments)
  • \( \bar{x} \): mean of all \( x \) values (average endowment across all agents)
  • \( w_{ij} \): spatial weight between locations i and j (Moran's weights)
  • \( n \): total number of observations
  • \( m_2 = \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^2 \): variance normalization term

\(I_i\) represents the Moran's value for a specific location. In our case, we obtain the community score by calculating the local Moran's value for an agent and normalizing it to lie between 0 and 1. While this formula is what Moran's I computes, our simulation uses esda's Moran_Local rather than implementing the math manually. To do this, we need to build our own weights matrix and a way of tracking endowments for each agent to pass to esda. Importantly, the weights matrix is per census tract and represents the weights between census tracts, while agent endowments are per agent.
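A minimal, self-contained example of that library pipeline (random placeholder data; the real code builds its weights from cached tract distances instead):

import numpy as np
from libpysal.weights import KNN
from esda.moran import Moran_Local

coords = np.random.rand(349, 2)           # placeholder tract centroids
y = np.random.rand(349)                   # placeholder per-tract mean endowments

w = KNN.from_array(coords, k=20)          # k-nearest-neighbor weights (k is configurable)
w.transform = "r"                         # row-standardize

lisa = Moran_Local(y, w, permutations=0)  # permutations=0: a single pass, no inference
local_scores = lisa.Is                    # one raw local Moran's I value per tract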

Neighbors

In the pure mathematical form of Local Moran's I, every node is compared with every other node in the set. However, this is computationally expensive and also causes odd behavior around edges (see Appendix A). The common alternate approach is to only consider neighbors. So if a census tract is touching 6 other census tracts, then that tract has 6 neighbors to compare with and effectively only those 6 neighbors impact the agent's community score. Another common approach is to use decay over the entire set of nodes. So every node is a “neighbor,” but the Moran’s weights reflect that closer neighbors have a stronger impact on the community of a node.

For our simulation we use a k-nearest-neighbors approach, which sets a fixed number of neighbors - configurable per simulation - and then applies decay. So if we set k to 20, the closest 20 nodes are neighbors, with the closer ones counting more. We chose this because it is much faster than full decay (which requires iterating over all ~500 nodes for every Moran's calculation at every node), while still reflecting that people's perception of their neighborhood extends beyond 10-20 minutes from home. The decay should remain high.

Normalization

Normalization is important for Local Moran's because the raw values are unbounded: they are often negative and not constrained to any range. The community score, however, must lie in [0, 1], so we need a way to map Moran's values into that interval.

Our original approach was z-score-to-CDF: we first z-score-normalize the Moran's values (standard in statistics) and then map them to [0, 1] using the standard normal CDF, which conceptually says "this is the 90th-percentile value in our simulation, so the score is 0.9". The issue is that this produces more extreme values when the Moran's values have a low standard deviation.

The new approach is to use a direct mapping after z-score. The equation for that is simply:

\[ p_i = \frac{z_i - \min(z)}{\max(z) - \min(z)} \]

Important: this is currently not reflected in the main branch of the code. We'll need to update it if we go back to Moran's.
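Both normalizations side by side, as a sketch (the small epsilon guards against zero variance; SciPy's normal CDF implements the original approach):

import numpy as np
from scipy.stats import norm

def zscore_to_cdf(values):
    # Original approach: z-score, then map through the standard normal CDF.
    z = (values - np.mean(values)) / (np.std(values) + 1e-12)
    return norm.cdf(z)

def zscore_to_minmax(values):
    # New approach: z-score, then a direct linear map onto [0, 1].
    z = (values - np.mean(values)) / (np.std(values) + 1e-12)
    return (z - z.min()) / (z.max() - z.min() + 1e-12)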

The Simulation

Morans_analysis.py

Let’s go through the code one section at a time to review the design principles.

# Excerpt from the analyzer class in Morans_analysis.py.
# Module-level context needed by this method:
#   import numpy as np
#   from libpysal.weights import W
def _library_spatial_autocorrelation(self, city):
    # Prefer tract-level Moran's weights stored on the city (centroid-level).
    # If not present, build weights from cached distances below.
    tract_W_mat = getattr(city, "morans_weights", None)
    agent_endowments = city.agt_dows
    agent_positions = city.agent_positions

    # If city provides a tract-level numpy matrix, convert to libpysal W
    if isinstance(tract_W_mat, np.ndarray):
        if not np.isfinite(tract_W_mat).all():
            nan_positions = np.where(~np.isfinite(tract_W_mat))
            raise ValueError(f"Found non-finite values in tract-level weights matrix at positions: {nan_positions}")
        # build weights_dict expected by libpysal.W
        weights_dict = {}
        n_nodes = tract_W_mat.shape[0]
        for i in range(n_nodes):
            row = tract_W_mat[i]
            neigh = {j: float(row[j]) for j in range(n_nodes) if row[j] != 0}
            weights_dict[i] = neigh
        w_obj = W(weights_dict)
        w_obj.transform = "r"
        w = w_obj
    else:
        w = None

    if w is None:
        if self.cached_distances is None:
            self.cached_distances = np.load("neighbors.npy", mmap_mode="r")
            print(f"[MoransAnalyzer] Loaded cached centroid distances for {self.cached_distances.shape[0]} centroids")

        if self.cached_KNN is None:
            self.cached_KNN = np.argsort(self.cached_distances, axis=1)[:, 1:SPATIAL_K_NEIGHBORS + 1]
            print(f"[MoransAnalyzer] Cached KNN neighbors with k={SPATIAL_K_NEIGHBORS}")

        if getattr(self, "cached_W", None) is None:
            self._ensure_cached_W_from_distances()
        # --- Step 3: Align agent_endowments with weight matrix ---
        w = self.cached_W
Okay, let's review this. The method accepts the entire city object, which is not the most efficient, but otherwise we would need four or five separate input parameters from the city, so this is simpler. The code itself ensures that a spatial weights matrix (W) exists, which is essential for computing Local Moran's I.

Spatial weights define which regions (or agents) are considered neighbors and how strongly they influence each other. The code first checks whether these weights already exist (as a tract-level NumPy matrix on the city object). If not, it dynamically builds them from cached distance data.

  1. Using precomputed tract weights. If the city object already provides a tract-level distance matrix (morans_weights):
    • The code verifies that all values are finite (no NaNs or infinities).
    • It converts the NumPy distance matrix into a libpysal W object, the standard spatial weights format used by the PySAL ecosystem.
    • Each tract's row is converted into a dictionary mapping neighbor indices → weight values.
    • Finally, the matrix is row-standardized (w_obj.transform = "r"), so each row sums to 1.
    • This path is used when a ready-made Moran's weights matrix is already available (for example, from prior preprocessing or caching).

  2. Building weights from cached distances. If the city object doesn't have a precomputed weights matrix (w is None), the method builds one on the fly:
    • It loads a precomputed centroid distance matrix from neighbors.npy, which stores pairwise distances between tracts.
    • It constructs a k-nearest-neighbor (KNN) structure (self.cached_KNN), where each tract is connected to its SPATIAL_K_NEIGHBORS closest neighbors.
    • If a cached libpysal W object (self.cached_W) doesn't exist yet, it calls _ensure_cached_W_from_distances() to create it. This helper computes inverse-distance weights, so closer tracts have stronger influence.
    • The resulting matrix is also row-standardized.

After these steps, the method guarantees that a valid Libpysal W object (w) exists — ready for use in the Local Moran’s I calculation that follows.

# (continuation of _library_spatial_autocorrelation)
values = np.array(agent_endowments, dtype=float)
n_agents = len(values)
n_nodes = w.n

# Adjust sizes
if n_agents < n_nodes:
    # pad missing nodes with mean value (not NaN to avoid broadcasting)
    mean_val = np.mean(values)
    padded = np.full(n_nodes, mean_val)
    padded[:n_agents] = values
    values = padded
elif n_agents > n_nodes:
    # truncate if too many agents: average agents into tract-level values.
    # Use the simulation's canonical number of tracts (centroids) rather than
    # deriving it from agent_positions, which can change as agents move.
    num_tracts = len(getattr(city, "centroids", []))
    values = self._average_agents_by_tract(agent_endowments, agent_positions, num_tracts=num_tracts)

# Replace NaNs with mean for stability
mean_val = np.nanmean(values)
values = np.nan_to_num(values, nan=mean_val)

if len(values) != w.n:
    raise ValueError(f"Length mismatch: {len(values)} values vs {w.n} weights")
To ensure the endowment vector and the spatial weights matrix are compatible in size, the function performs the following adjustments:

Case 1: Fewer agents than spatial nodes. If the number of agents (n_agents) is less than the number of spatial nodes (n_nodes), the missing nodes are padded with the mean endowment value.

Mathematically:

\[ values_i = \begin{cases} agent\_endowment_i, & i < n_{\text{agents}} \\ \bar{x}, & i \ge n_{\text{agents}} \end{cases} \]

where x̄ is the mean of all agent endowments. This ensures every spatial node has a corresponding endowment value.

Case 2: More agents than spatial nodes. If the number of agents (n_agents) exceeds the number of nodes (n_nodes), agents are aggregated by tract, and the mean endowment per tract is used.

Mathematically:

\[ tract\_value[j] = \frac{1}{|A_j|} \sum_{i \in A_j} endowment\_i \]

where Aⱼ is the set of agents that belong to tract j. Each tract’s endowment value becomes the average of all agents in that tract.

Case 3: Handle missing or invalid values. Any missing (NaN) or undefined values are replaced by the overall mean value x̄ to ensure numerical stability and prevent errors during computation.

Final check. After these adjustments, the endowment vector and the spatial weights matrix always have the same length:

\[ \lvert \text{values} \rvert = n_{\text{nodes}} = \lvert W \rvert \]

This guarantees proper alignment between endowments and spatial weights when computing Moran’s I.

# (continuation) I need to check what 0 permutations actually means
if USE_SIMPLIFIED_COMMUNITY_SCORE == 'morans':
    lisa = Moran_Local(values, w, permutations=0)
    results = lisa.Is
elif USE_SIMPLIFIED_COMMUNITY_SCORE == 'gearys':
    local_geary = Geary_Local(w, permutations=0)
    local_geary = local_geary.fit(values)
    results = local_geary.localG
else:
    print("invalid use_simplified_community_score value")
    return None

# Convert to NumPy array for consistency
results = np.array(results, dtype=float)

# Convert tract-level results to CDF-normalized scores
tract_scores = self.zscore_to_cdf(results)  # shape (C,)

# Map tract-level scores to per-agent scores so callers receive an (N,) array
# (every agent in the same tract receives the same tract score).
agent_positions = np.asarray(agent_positions, dtype=np.intp)
if agent_positions.size == 0:
    # no agents present -> return empty array
    return np.asarray([], dtype=float)

# Direct mapping: assume agent_positions are valid tract indices.
agent_scores = tract_scores[agent_positions]
return agent_scores

This is where the actual Moran's/Geary's computation happens. Here we determine which spatial autocorrelation algorithm to use and then run it via the library. permutations=0 means the library runs only once; you can set a higher number of permutations and then calculate means, medians, standard deviations, etc., but we choose not to because every permutation slows the simulation down. We then make sure the data is in the right format and uses the correct normalization. Finally, when we 'map tract-level scores to per-agent scores', this ensures each agent receives the correct score for its tract.

The issue with this is that all agents in a tract receive the same Moran's score. The library runs on a node basis, not an agent basis, which means we have to manipulate the data to get the results we want. There are a few potential ways to convert tract-level Moran's to agent-level Moran's. One is to add a faux tract with the same distances as the real tract the agent is in, and set the faux tract's endowment to the agent's endowment; the issue is that this requires a loop over every agent, so with N agents you would need to run the library-Moran's N times, which is far too slow to be useful. Another way is to skip the library and do the calculations manually, which leaves more room for coding errors and would also be slower than the library due to its optimizations.

Presentation

Team

| Name | Seniority | Major | School | # Semesters | GitHub Handle |
|---|---|---|---|---|---|
| Matthew Lim | Junior | Computer Science | SCS | 3 | mlim70 |
| Justin Xu | Junior | Computer Science | SCS | 2 | JXU037 |
| Devam Mondal | Senior | Computer Science | SCS | 3 | Dodesimo |
| Nithish Sabapathy | Senior | Computer Science | SCS | 2 | nithish101 |
| Ian Baracskay | Senior | Computer Engineering | ECE | 1 | ianBaracskay |
| Jason Tran | Junior | Computer Science | SCS | 1 | JTran86 |