Dynamic algorithms address dynamically evolving datasets: they maintain solutions to problems of interest, subject to small updates to the input, much faster than recomputing these solutions from scratch after each update. For example, the dynamic matching problem asks us to maintain an approximate maximum matching under edge updates (additions and deletions). For this well-studied problem we know numerous dynamic algorithms, the fastest of which are randomized. Unfortunately, until recently, all these fast randomized algorithms assumed that the updates to the input are generated in advance, rather than adaptively, based on the algorithm's random coin tosses. This assumption, referred to in the literature as the oblivious adversary assumption, rules out the black-box use of these algorithms for speeding up other (dynamic or static) algorithms. In this talk, I will present my recent framework for obtaining (essentially) the same randomized bounds previously known to be obtainable against an oblivious adversary, but this time against an adaptive adversary.
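To make the dynamic matching problem concrete, here is a minimal sketch (not the framework from the talk) of the textbook deterministic baseline: greedily maintaining a maximal matching, which is always a 2-approximation of a maximum matching, with each update handled in time proportional to the degree of the affected vertices. The class name and method names are illustrative choices, not from the source.

```python
# Sketch: maintain a maximal matching under edge insertions/deletions.
# A maximal matching is a 2-approximation of a maximum matching; each
# update here costs O(degree), far cheaper than recomputing from scratch.
from collections import defaultdict

class DynamicMatching:
    def __init__(self):
        self.adj = defaultdict(set)  # current graph adjacency
        self.mate = {}               # vertex -> its matched partner

    def _try_match(self, u):
        # Match u to any currently free neighbor, if one exists.
        if u in self.mate:
            return
        for w in self.adj[u]:
            if w not in self.mate:
                self.mate[u] = w
                self.mate[w] = u
                return

    def insert(self, u, v):
        self.adj[u].add(v)
        self.adj[v].add(u)
        # If both endpoints are free, add the new edge to the matching.
        if u not in self.mate and v not in self.mate:
            self.mate[u] = v
            self.mate[v] = u

    def delete(self, u, v):
        self.adj[u].discard(v)
        self.adj[v].discard(u)
        if self.mate.get(u) == v:  # a matched edge was removed
            del self.mate[u]
            del self.mate[v]
            self._try_match(u)     # greedily re-match both endpoints
            self._try_match(v)

    def matching(self):
        return {frozenset((u, w)) for u, w in self.mate.items()}
```

Note that this deterministic baseline is immune to adaptive adversaries but far from the fastest known update times; the fast randomized algorithms the abstract refers to are precisely the ones whose guarantees break when the update sequence may depend on the algorithm's coin tosses.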