<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Zhibo Wang &#8211; VISION AND IMAGE PROCESSING (VIP) RESEARCH GROUP</title>
	<atom:link href="https://vip.uwaterloo.ca/author/zhibowang/feed/" rel="self" type="application/rss+xml" />
	<link>https://vip.uwaterloo.ca</link>
	<description>The University of Waterloo&#039;s Vision and Image Processing Lab</description>
	<lastBuildDate>Fri, 27 Mar 2026 15:36:34 +0000</lastBuildDate>
	<language>en-CA</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.7</generator>

<image>
	<url>https://vip.uwaterloo.ca/wp-content/uploads/2023/04/cropped-favicon-32x32.png</url>
	<title>Zhibo Wang &#8211; VISION AND IMAGE PROCESSING (VIP) RESEARCH GROUP</title>
	<link>https://vip.uwaterloo.ca</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Time-lapse microscopy of microbial populations</title>
		<link>https://vip.uwaterloo.ca/time-lapse-microscopy-of-microbial-populations/</link>
		
		<dc:creator><![CDATA[Zhibo Wang]]></dc:creator>
		<pubDate>Fri, 27 Mar 2026 15:36:00 +0000</pubDate>
				<category><![CDATA[Seminars]]></category>
		<guid isPermaLink="false">https://vip.uwaterloo.ca/?p=4471</guid>

					<description><![CDATA[Prof. Brian Ingalls March 27th, 2026 &#8211; 12:00-1:00 pm, EC4-2101A Traditionally, most studies of microbes involve bulk population measurements of, for example, well-mixed populations grown in culture tubes. However, spatial distribution is known to play an important role in microbial ecology. Our group has developed a pipeline to characterize ecological interactions among microbial populations from time-lapse [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>Prof. Brian Ingalls</p>



<span id="more-4471"></span>



<p>March 27th, 2026 &#8211; 12:00-1:00 pm, EC4-2101A</p>



<p>Traditionally, most studies of microbes involve bulk population measurements of, for example, well-mixed populations grown in culture tubes. However, spatial distribution is known to play an important role in microbial ecology. Our group has developed a pipeline to characterize ecological interactions among microbial populations from time-lapse microscopy at single-cell resolution. We collect images of mixed populations of ~1000 cells constrained to two dimensions, at 3-5 minute intervals. Image processing (segmentation and object tracking) is implemented with CellProfiler and our custom software package TrackRefiner. We then use the processed image data to calibrate agent-based models of cellular activity. These models allow us to assess hypotheses regarding cellular behaviour and build toward predictive model-based design of interventions in natural and engineered microbial systems.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Ziyao Shang</title>
		<link>https://vip.uwaterloo.ca/ziyao-shang-2/</link>
		
		<dc:creator><![CDATA[Zhibo Wang]]></dc:creator>
		<pubDate>Tue, 24 Mar 2026 19:05:46 +0000</pubDate>
				<category><![CDATA[Alexander Wong]]></category>
		<category><![CDATA[Current Students]]></category>
		<category><![CDATA[Ph.D.]]></category>
		<category><![CDATA[Sirisha Rambhatla]]></category>
		<guid isPermaLink="false">https://vip.uwaterloo.ca/?p=4453</guid>

					<description><![CDATA[Ziyao is a PhD student in Systems Design Engineering, co-supervised by Prof. Sirisha Rambhatla and Prof. Alex Wong. Ziyao is a member of the Critical ML Lab.]]></description>
										<content:encoded><![CDATA[
<p>Ziyao is a PhD student in Systems Design Engineering, co-supervised by Prof. Sirisha Rambhatla and Prof. Alex Wong. Ziyao is a member of the <a href="https://sirisharambhatla.com/criticalml/">Critical ML Lab</a>.</p>


<div class="lazyblock-supervisors-1LcwVU wp-block-lazyblock-supervisors"><link rel='stylesheet' href='https://fonts.googleapis.com/css?family=Source+Serif+Pro'>
  <div style='margin-bottom: 0.6rem; font-family: Source Serif Pro, Georgia, Times New Roman, serif; font-size: 3rem; font-weight: bold;'>Supervisors</div><a href="https://vip.uwaterloo.ca/a-wong/">Alexander Wong</a>, Sirisha Rambhatla</div>

<div class="lazyblock-publications-2gpksO wp-block-lazyblock-publications"><meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.2.3/dist/css/bootstrap.min.css" rel="stylesheet">
  <script src="https://cdn.jsdelivr.net/npm/bootstrap@5.2.3/dist/js/bootstrap.bundle.min.js"></script>
  <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">
<link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Source+Serif+Pro">

  <!-- Load external CSS styles -->
  <link rel="stylesheet" href="../stylesbootstrap.css">

<style>

#peoplePublications {
    font-family: "Source Serif Pro", "Georgia", "Times New Roman", "serif";
    font-weight: bold;
    font-size: 3rem;
    text-align: start;
    margin-bottom: 0.6em;
}

#peoplePublications ~ span {
    font-family: "Source Serif Pro", "Georgia", "Times New Roman", "serif";
    font-weight: bold;
    font-size: 1.75rem;
    text-align: start;
    margin-bottom: 0.5em;
}

#nav {
    text-align: start;
    font-family: "Source Serif Pro", "Georgia", "Times New Roman", "serif";
    margin-bottom: 0.5em;
    margin-left: 0;
    padding-left: 0;
}

#nav a {
    text-decoration-line: underline;
}

#nav a:hover {
    text-decoration-line: none;
}

#mainContent {
    max-width: 100%;
}

#pubDataJournals {
    font-family: "Source Serif Pro", "Georgia", "Times New Roman", "serif";
    padding-left: 0;
    font-size: 1.75rem;
    white-space: pre-wrap;
}

#pubDataConference {
    font-family: "Source Serif Pro", "Georgia", "Times New Roman", "serif";
    padding-left: 0;
    font-size: 1.75rem;
    white-space: pre-wrap;
}
</style>

  <!--Main Content-->
  <div class="container mt-5" id="mainContent">
  
   <div class="row">
      <div class="col ps-0" id="peoplePublications">Publications</div>
      <div id="nav">
        <a href="#journalArticles">Journal Articles</a>
        <span> / </span>
        <a href="#conferencePapers">Conference Papers</a>
      </div>
      <span id="journalArticles" class="ps-0">Journal Articles</span>
      <p id="pubDataJournals">
        <!-- journal data from JS here -->
      </p>
      <span id="conferencePapers" class="ps-0">Conference Papers</span>
      <div id="nav">
        <a href="#peoplePublications">Top</a>
      </div>
      <p id="pubDataConference">
        <!-- conference paper data from JS here -->
      </p>
    </div>
  </div>

<script>
    const apiID = "https://ecserv2.uwaterloo.ca/researchmicro/research/reverseauthor.php?scopus_id=";
    const api = "https://ecserv2.uwaterloo.ca/researchmicro/research/publications.php?user=";
    const openAccess = "https://bg.api.oa.works/find?id=";
    let userID;
    getNexus(57216587254);

    async function getNexus(scopusID)
    {
        let userInfo = await fetch(apiID+scopusID);
        let userInfoText = await userInfo.text();
        if(userInfoText == "Sorry, you do not have a Scopus ID assigned")
        {
          document.getElementById('peoplePublications').style.display = "none";
          document.querySelectorAll('[id="nav"]')[0].style.display = "none";
          document.querySelectorAll('[id="nav"]')[1].style.display = "none";
          document.getElementById('journalArticles').style.display = "none";
          document.getElementById('conferencePapers').style.display = "none";
          document.getElementById('pubDataJournals').style.display = "none";
          document.getElementById('pubDataConference').style.display = "none";
        }
        else
        {
          userID = JSON.parse(userInfoText).rows.nexus;
          displayPublications();
        }
    }

    async function getOA(searchQuery)
    {
        let openInfo = await fetch(openAccess + searchQuery);
        let openInfoText = await openInfo.text();
        return JSON.parse(openInfoText).url;
    }

    async function getPublications(file) {
        let publicationData = await fetch(file);
        let pubText = await publicationData.text();
        pubText = pubText.replace("=", ":"); //correcting API issue with = instead of :
        return JSON.parse(pubText);
    }

    function generateLink(id, title)
    {
        id.onclick = null; // clear the inline handler so repeat clicks are ignored
        title = title.replaceAll(/ /g, '%20');
        id.innerHTML = "loading...";
        getOA(title).then(
            function(value)
            {
                if(value == null)
                {
                    id.innerHTML = "Search UWaterloo Library";
                    id.href = 'https://ocul-wtl.primo.exlibrisgroup.com/discovery/search?query=any,contains,' + title + '&tab=OCULDiscoveryNetwork&search_scope=OCULDiscoveryNetwork&vid=01OCUL_WTL:WTL_DEFAULT&lang=en&offset=0';
                    id.target = "_blank";
                }
                else
                {
                    id.href = value;
                    id.target = "_blank";
                    id.innerHTML = "Open";
                }
            },
            function(error)
            {
                id.href = "#";
                id.innerHTML = "Not found";
            });
    }

    function isConference(publication)
    {
        return publication.volume == 0 || publication.pub_name.includes("Conference") || publication.pub_name.includes("Proceedings") || publication.pub_name.includes("Lecture Notes") || publication.pub_name.includes("Symposium");
    }

   function displayPublications() {
	    getPublications(api+userID).then(
            function(value) {
                const size = value.rows.length;
                let pubListJournals = "";
                let pubListConference = "";
                for(var i = 0; i < size; i++)
                            {
                                let publication = "";
                                let authors = value.rows[i].list_names_of_authors.split(", ");
                                let lastIndex = authors.length - 1;
                                authors[lastIndex] = authors[lastIndex].slice(4, authors[lastIndex].length - 1);
                                let possibleSupervisors = ["Clausi D.", "Fieguth P.W.", "Fieguth P.", "Wong A.", "Zelek J.", "Xu L.", "Scott A.", "Rambhatla S.", "Lee J.", "Chen Y.", "Shafiee M.J."];
                                if(authors.some(r=>possibleSupervisors.includes(r)))
                                {
                                for(var j = 0; j <= lastIndex; j++)
                                {  
                                    let authorLink = "";
                                    let authorsLC = authors[j].toLowerCase();
                                    if(j == lastIndex)
                                    {
                                        if(authorsLC.includes("."))
                                        {  
                                            authorLink += authorsLC.charAt(authorsLC.indexOf(".") - 1);
                                            authorLink += "-";
                                            authorLink += authorsLC.slice(0, authorsLC.indexOf(" "));
                                        }
                                        else
                                        {
                                            authorLink += authorsLC.charAt(authorsLC.length - 1);
                                            authorLink += "-";
                                            authorLink += authorsLC.slice(0, authorsLC.indexOf(" "));
                                        }
                                        
                                    }
                                    else
                                    {
                                        authorLink += authorsLC.charAt(authorsLC.indexOf(".") - 1);
                                        authorLink += "-";
                                        authorLink += authorsLC.slice(0, authorsLC.indexOf(" "));
                                        
                                    }
                                    authorLink = 'https://vip.uwaterloo.ca/' + authorLink;
                                    if(j != lastIndex)
                                    {
                                        publication += `<a href='${authorLink}' target='_blank'>${authors[j]}</a>` + ", ";
                                    }
                                    else 
                                    {
                                        publication += "and " + `<a href='${authorLink}' target='_blank'>${authors[j]}</a>`;
                                    }
                                }

                                publication += ', "';
                                
                                publication += value.rows[i].title;
                                
                                publication += '", ';
                                publication += value.rows[i].pub_name;
                                if (!isConference(value.rows[i]))
                                {
                                    publication += ", vol. ";
                                    publication += value.rows[i].volume;
                                    publication += ", ";
                                }
                                if (value.rows[i].page_range != "" && !isConference(value.rows[i]))
                                {
                                    publication += "pp. ";
                                    publication += value.rows[i].page_range;
                                    publication += ", ";
                                }
                                else if(isConference(value.rows[i]))
                                {
                                    publication += ", ";
                                }
                                publication += value.rows[i].year;
                                publication += ". ";
                                publication += `<a href="#" onclick="generateLink(this, '${value.rows[i].title}');event.preventDefault();">Get it here.</a>`;
                                
                                publication += "\n\n";
                                if (isConference(value.rows[i]))
                                {
                                    pubListConference += publication;
                                }
                                else
                                {
                                      pubListJournals += publication;
                                }
                                }
                            }
                document.getElementById('pubDataJournals').innerHTML = pubListJournals;
                document.getElementById('pubDataConference').innerHTML = pubListConference;
                if(pubListConference == "")
                {
                   document.getElementById('conferencePapers').style.display = "none";
                }
                if(pubListJournals == "")
                {
                  document.getElementById('journalArticles').style.display = "none";
                }
            },
            function(error) {document.getElementById('pubDataJournals').innerHTML = "Error retrieving data.";}
        )
    }

   
</script></div>

<div class="lazyblock-research-interests-Z7NtjL wp-block-lazyblock-research-interests"><link rel='stylesheet' href='https://fonts.googleapis.com/css?family=Source+Serif+Pro'>
  <div style='margin-bottom: 0.6rem; font-family: Source Serif Pro, Georgia, Times New Roman, serif; font-size: 3rem; font-weight: bold;'>Research interests</div>
Biomedical Imaging, Image Segmentation/Classification</div>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Data-Driven Modeling of Physical Systems: From Materials to Human Movement and Interaction</title>
		<link>https://vip.uwaterloo.ca/data-driven-modeling-of-physical-systems-from-materials-to-human-movement-and-interaction/</link>
		
		<dc:creator><![CDATA[Zhibo Wang]]></dc:creator>
		<pubDate>Fri, 20 Mar 2026 14:42:36 +0000</pubDate>
				<category><![CDATA[Seminars]]></category>
		<guid isPermaLink="false">https://vip.uwaterloo.ca/?p=4430</guid>

					<description><![CDATA[Prof. Arash Arami March 20th, 2026 – 12:00-1:00pm, EC4-2101A Understanding complex physical systems—from engineered materials to human movement—requires models that capture structure, variability, and cross-scale interaction. In this talk, I will present our group’s work on data-driven modeling across manufacturing, biomechanics, and human–robot interaction. I will first highlight the use of generative and image-based models [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>Prof. Arash Arami</p>



<span id="more-4430"></span>



<p>March 20th, 2026 – 12:00-1:00pm, EC4-2101A</p>



<p>Understanding complex physical systems—from engineered materials to human movement—requires models that capture structure, variability, and cross-scale interaction. In this talk, I will present our group’s work on data-driven modeling across manufacturing, biomechanics, and human–robot interaction. I will first highlight the use of generative and image-based models to synthesize realistic microstructures and predict material properties, enabling exploration and optimization. I will then present approaches for modeling and predicting human gait, including detection and prediction of pathological behaviors such as freezing of gait in Parkinson’s disease. Finally, I will describe learning-based methods for real-time movement estimation and human–robot interaction in wearable robotics, illustrating how these models support adaptive and personalized control.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Computer Vision and Machine Learning for Electric Hydrofoiling Vessels</title>
		<link>https://vip.uwaterloo.ca/computer-vision-and-machine-learning-for-electric-hydrofoiling-vessels/</link>
		
		<dc:creator><![CDATA[Zhibo Wang]]></dc:creator>
		<pubDate>Fri, 13 Mar 2026 14:11:07 +0000</pubDate>
				<category><![CDATA[Seminars]]></category>
		<guid isPermaLink="false">https://vip.uwaterloo.ca/?p=4426</guid>

					<description><![CDATA[April Blaylock March 13th, 2026 – 12:00-1:00pm, EC4-2101A ENVGO is a Waterloo-based startup developing electric hydrofoiling boats that “fly” above the water to dramatically reduce drag and enable efficient, zero-emission marine transportation. In this talk, ENVGO staff will discuss how computer vision and machine learning are being integrated into their development. Presenter April Blaylock (UW [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>April Blaylock</p>



<span id="more-4426"></span>



<p>March 13th, 2026 – 12:00-1:00pm, EC4-2101A</p>



<p>ENVGO is a Waterloo-based startup developing electric hydrofoiling boats that “fly” above the water to dramatically reduce drag and enable efficient, zero-emission marine transportation. In this talk, ENVGO staff will discuss how computer vision and machine learning are being integrated into their development. Presenter April Blaylock (UW Mechanical Engineering B04, M07) is a co-founder who leads the AI, computer vision, and autonomy development.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Introduction of Geomate</title>
		<link>https://vip.uwaterloo.ca/introduction-of-geomate/</link>
		
		<dc:creator><![CDATA[Zhibo Wang]]></dc:creator>
		<pubDate>Fri, 06 Mar 2026 14:53:29 +0000</pubDate>
				<category><![CDATA[Seminars]]></category>
		<guid isPermaLink="false">https://vip.uwaterloo.ca/?p=4345</guid>

					<description><![CDATA[Lily de Loe and Nastaran Saberi March 6th, 2026 – 12:00-1:00pm, EC4-2101A GeoMate is an AI-driven mapping company transforming how cities and autonomous systems see the world. Our vision pipeline automatically extracts roads, lanes, and urban features from aerial and satellite imagery to create simulation-ready HD maps and dynamic mobility environments. These maps power applications [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>Lily de Loe and Nastaran Saberi</p>



<span id="more-4345"></span>



<p>March 6th, 2026 – 12:00-1:00pm, EC4-2101A</p>



<p>GeoMate is an AI-driven mapping company transforming how cities and autonomous systems see the world. Our vision pipeline automatically extracts roads, lanes, and urban features from aerial and satellite imagery to create simulation-ready HD maps and dynamic mobility environments. These maps power applications in autonomous driving, delivery robotics, and smart-city planning, enabling large-scale, accurate, and continuously updated digital twins of the real world. This presentation will introduce GeoMate&#8217;s problem space and demo our RealSimE platform, which delivers simulation-ready maps for autonomous driving and advanced driver assistance system (ADAS) validation. Additionally, we&#8217;ll outline real-world challenges and approaches to using remote sensing data for computer vision applications.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Is calcium from astronauts&#8217; bones deposited in arterial walls? A CT investigation.</title>
		<link>https://vip.uwaterloo.ca/is-calcium-from-astronauts-bones-deposited-in-arterial-walls-a-ct-investigation/</link>
		
		<dc:creator><![CDATA[Zhibo Wang]]></dc:creator>
		<pubDate>Fri, 27 Feb 2026 18:00:00 +0000</pubDate>
				<category><![CDATA[Seminars]]></category>
		<guid isPermaLink="false">https://vip.uwaterloo.ca/?p=4352</guid>

					<description><![CDATA[Prof. Richard Hughson February 27th, 2026 – 12:00-1:00pm, EC4-2101A Rapid bone loss in older adults is associated with calcification of the medial layer of large arteries. Does the same process happen in astronauts? Nine astronauts in the Vascular Calcium study from the University of Waterloo were imaged with coronary CT scans and high resolution peripheral [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>Prof. Richard Hughson</p>



<span id="more-4352"></span>



<p>February 27th, 2026 – 12:00-1:00pm, EC4-2101A</p>



<p>Rapid bone loss in older adults is associated with calcification of the medial layer of large arteries. Does the same process happen in astronauts? Nine astronauts in the Vascular Calcium study from the University of Waterloo were imaged with coronary CT scans and high-resolution peripheral quantitative computed tomography (HR-pQCT) of the wrist and ankle before and after ~6 months in space. Our group is in the early stages of analysis and looking for insights into the best methods to analyze these images.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>From Perception to Interaction: Physically Grounded Representations for Object and Scene Modeling</title>
		<link>https://vip.uwaterloo.ca/from-perception-to-interaction-physically-grounded-representations-for-object-and-scene-modeling/</link>
		
		<dc:creator><![CDATA[Zhibo Wang]]></dc:creator>
		<pubDate>Fri, 20 Feb 2026 18:00:00 +0000</pubDate>
				<category><![CDATA[Seminars]]></category>
		<guid isPermaLink="false">https://vip.uwaterloo.ca/?p=4354</guid>

					<description><![CDATA[Prof. Yuhao Chen February 20th, 2026 – 12:00-1:00pm, EC4-2101A Current computer vision algorithms excel at recognizing static, rigid objects under controlled conditions but often struggle when faced with occlusion, breakage, or topology changes—for example, tracking per-bite portion changes while eating a salad. These challenges expose a fundamental limitation: many visual systems operate on single-view appearance [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>Prof. Yuhao Chen</p>



<span id="more-4354"></span>



<p>February 20th, 2026 – 12:00-1:00pm, EC4-2101A</p>



<p>Current computer vision algorithms excel at recognizing static, rigid objects under controlled conditions but often struggle when faced with occlusion, breakage, or topology changes—for example, tracking per-bite portion changes while eating a salad. These challenges expose a fundamental limitation: many visual systems operate on single-view appearance without explicitly modeling geometry or physical scale, making their predictions unstable for reconstruction and unreliable for interaction.</p>



<p>In this talk, I present a progression of my work that expands visual representation across temporal and spatial dimensions toward physically grounded object and scene models. I begin with monocular and video-based methods, where dense temporal tracking improves short-term consistency but remains limited in spatial association and sparse temporal coherence. Moving beyond temporal cues alone, my work incorporates multi-view geometric reasoning, showing how aggregating geometry across viewpoints improves downstream visual tasks such as tracking. Registering observations into a shared metric scale provides a unifying reference that links sparse temporal measurements into a consistent geometric framework across frames and viewpoints, enabling more reliable analysis of temporal change at both object and scene levels. When objects undergo cutting or structural transformation, topology changes and interior exposure challenge surface-based assumptions. To address this, I develop interior-consistent 3D generation techniques that produce realistic interior visualizations while preserving geometric coherence during structural change. Up to this stage, the focus is on passively acquiring geometry and object trajectories grounded in physical space. I then extend these representations toward interaction, demonstrating how physically scaled geometric object modeling and large-scale synthetic supervision improve robotic grasp prediction and generalization. In parallel, I investigate learned latent representations in generative models, analyzing how implicitly structured priors can be examined and manipulated to better understand and control visual structure. This complementary line of work explores how structure emerges in data-driven representations alongside explicit geometric modeling.<br><br>Together, these contributions illustrate a geometry-centered expansion of visual representation—from single-view appearance to temporally consistent understanding, metric 3D geometry, topology-aware object modeling, and interaction—toward structured object and scene models that support reasoning and action in physical environments.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Tracing the Emergence of Symbol Grounding in (Multimodal) Language Models</title>
		<link>https://vip.uwaterloo.ca/tracing-the-emergence-of-symbol-grounding-in-multimodal-language-models/</link>
		
		<dc:creator><![CDATA[Zhibo Wang]]></dc:creator>
		<pubDate>Fri, 13 Feb 2026 18:00:00 +0000</pubDate>
				<category><![CDATA[Seminars]]></category>
		<guid isPermaLink="false">https://vip.uwaterloo.ca/?p=4356</guid>

					<description><![CDATA[Prof. Freda Shi February 13th, 2026 – 12:00-1:00pm, EC4-2101A Do language models acquire symbol grounding in Harnad’s (1990) sense, that is, non-arbitrary, causally meaningful links between symbols and referents? To answer this question, we first introduce a controlled evaluation framework that assigns each concept two distinct tokens: one appearing in non-verbal scene descriptions and another [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>Prof. Freda Shi</p>



<span id="more-4356"></span>



<p>February 13th, 2026 – 12:00-1:00pm, EC4-2101A</p>



<p>Do language models acquire symbol grounding in Harnad’s (1990) sense, that is, non-arbitrary, causally meaningful links between symbols and referents? To answer this question, we first introduce a controlled evaluation framework that assigns each concept two distinct tokens: one appearing in non-verbal scene descriptions and another in linguistic utterances&#8212;this &#8220;pseudo multimodal&#8221; setup prevents trivial identity mappings and enables direct tests of grounding. Behaviorally, we find that models trained from scratch show consistent surprisal reduction when the linguistic form is preceded by its matching scene token, relative to matched controls, and co-occurrence statistics cannot explain this effect. Mechanistically, saliency flow and tuned-lens analyses converge on the finding that grounding concentrates in middle-layer computations and is implemented through a gather-and-aggregate (G&amp;A) mechanism: earlier heads gather information from scene tokens, while later heads aggregate it to support the prediction of linguistic forms. The phenomenon is replicated in visual dialogue data and across architectures with explicit memory (including Transformers and state-space models), but not in unidirectional LSTMs. Together, these results provide behavioral and mechanistic evidence that symbol grounding can emerge in autoregressive LMs, while delineating the architectural conditions under which it arises. Time permitting, I will introduce a recent toolkit developed in our lab to analyze general-purpose vision-language models. </p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Ice Hockey Puck Detection and Tracking from Broadcast Footage</title>
		<link>https://vip.uwaterloo.ca/ice-hockey-puck-detection-and-tracking-from-broadcast-footage/</link>
		
		<dc:creator><![CDATA[Zhibo Wang]]></dc:creator>
		<pubDate>Fri, 06 Feb 2026 18:00:00 +0000</pubDate>
				<category><![CDATA[Seminars]]></category>
		<guid isPermaLink="false">https://vip.uwaterloo.ca/?p=4358</guid>

					<description><![CDATA[Liam Salass February 6, 2026 – 12:00-1:00pm, EC4-2101A Accurately tracking the hockey puck in broadcast video remains a challenging problem due to its small size, extreme motion, frequent occlusions, and perspective distortion. In this&#160;seminar, I present a unified approach to puck detection and tracking that moves beyond appearance-based methods by incorporating contextual cues, stable rink [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>Liam Salass</p>



<span id="more-4358"></span>



<p>February 6, 2026 – 12:00-1:00pm, EC4-2101A</p>



<p>Accurately tracking the hockey puck in broadcast video remains a challenging problem due to its small size, extreme motion, frequent occlusions, and perspective distortion. In this&nbsp;seminar, I present a unified approach to puck detection and tracking that moves beyond appearance-based methods by incorporating contextual cues, stable rink geometry, and physics-aware motion modelling.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Visit to Accelerator Center</title>
		<link>https://vip.uwaterloo.ca/visit-to-accelerator-center/</link>
		
		<dc:creator><![CDATA[Zhibo Wang]]></dc:creator>
		<pubDate>Fri, 30 Jan 2026 19:00:00 +0000</pubDate>
				<category><![CDATA[Seminars]]></category>
		<guid isPermaLink="false">https://vip.uwaterloo.ca/?p=4360</guid>

					<description><![CDATA[AC staff January 30, 2026 – 1:00-2:00pm, Accelerator Center We will visit the Accelerator Center (“AC”), a startup incubator on north campus. AC staff will go through the startup ecosystem, their role in it, and the programs that they run. The AC staff will also provide a short walking tour of the space to give everyone [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>AC staff</p>



<span id="more-4360"></span>



<p>January 30, 2026 – 1:00-2:00pm, Accelerator Center</p>



<p>We will visit the Accelerator Center (“AC”), a startup incubator on north campus. AC staff will go through the startup ecosystem, their role in it, and the programs that they run. They will also provide a short walking tour of the space to give everyone an idea of what they offer.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
