
Bio: Xintong Wang is a postdoctoral fellow in the Harvard EconCS group hosted by David Parkes. She received her Ph.D. in April 2021 from the CSE Department at the University of Michigan, advised by Michael Wellman. Her research centers on developing computational approaches to model complex agent behaviors and to design better market-based algorithmic systems, drawing on tools from machine learning, game theory, and optimization, with applications ranging from platform economies to financial systems. During her Ph.D., she worked as a research intern at Microsoft Research NYC (mentored by David Pennock) and J.P. Morgan AI Research (mentored by Tucker Balch). Prior to Michigan, she received her Bachelor's degree from Washington University in St. Louis in 2015.

Talk Title: Market Manipulation: An Adversarial Learning Framework for Detection and Evasion

Talk Abstract: I will talk about our recently proposed adversarial learning framework, which captures the evolving game between a regulator who develops tools to detect market manipulation and a manipulator who obfuscates actions to evade detection. The framework has three main components: (1) a generator that learns to adapt a sequence of known manipulation activities to resemble the patterns of a normal (benign) trading agent while preserving the manipulation intent; (2) a discriminator that differentiates the adversarially adapted manipulation actions from normal trading activities; and (3) an agent-based model that provides feedback on the effectiveness and profitability of the adapted outputs. We conduct experiments on simulated trading actions associated with a manipulator and a benign agent, respectively. We show examples of adapted manipulation order streams that mimic the specified benign trading patterns and appear qualitatively different from the original manipulation strategy encoded in the simulator. These results demonstrate the possibility of automatically generating a diverse set of (unseen) manipulation strategies that can facilitate the training of more robust detection algorithms.
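The three-component loop described above can be sketched schematically. The toy code below is purely illustrative and is not the paper's actual model: the class names, the scalar encoding of an "order stream," the fixed detection threshold, and the profit proxy are all hypothetical stand-ins chosen to make the generator–discriminator–simulator interaction concrete.

```python
# Illustrative sketch (NOT the actual framework): a generator adapts a
# manipulation order stream toward a benign pattern, a discriminator flags
# deviations from that pattern, and an agent-based-model (ABM) stand-in
# scores profitability of the adapted stream.

class Generator:
    """Adapts a known manipulation stream toward benign patterns."""
    def __init__(self):
        self.obfuscation = 0.0  # degree of blending toward the benign profile

    def adapt(self, stream):
        # Blend each (scalar-encoded) action toward the benign mean of 0.5.
        return [(1 - self.obfuscation) * a + self.obfuscation * 0.5
                for a in stream]

    def update(self, detected):
        # If detected, obfuscate more; in the real framework, ABM feedback
        # would also constrain this so manipulation intent is preserved.
        if detected:
            self.obfuscation = min(1.0, self.obfuscation + 0.1)


class Discriminator:
    """Separates adapted manipulation streams from benign trading."""
    def __init__(self, threshold=0.3):
        self.threshold = threshold

    def detect(self, stream):
        # Flag streams whose mean action deviates from the benign mean (0.5).
        mean = sum(stream) / len(stream)
        return abs(mean - 0.5) > self.threshold


def abm_profit(stream):
    """ABM stand-in: a crude profit proxy for the adapted stream."""
    return sum(stream)


manipulation = [1.0] * 10  # aggressive manipulation orders, encoded as 1.0
gen, disc = Generator(), Discriminator()

for step in range(20):
    adapted = gen.adapt(manipulation)
    detected = disc.detect(adapted)
    profit = abm_profit(adapted)  # feedback on remaining profitability
    gen.update(detected)

# After a few rounds the adapted stream evades this fixed detector,
# at the cost of some simulated profit.
evaded = not disc.detect(gen.adapt(manipulation))
```

In the actual framework both sides learn (the discriminator is retrained on the generator's outputs), which is what makes the generated, unseen manipulation strategies useful for training more robust detectors; the fixed-threshold detector here only illustrates the evasion direction of that game.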
