Making packages, OLGs (again?), and Time Averaging Taxation Paper
Taking CS courses at CMU (especially 15-112) ingrained in me a few principles of OOP that are very relevant when it comes to building packages. While they may seem like boring and redundant rules at times (such as using getter functions to read an object’s attributes rather than accessing them directly), there are obvious long-term benefits to following these practices. I cringe internally whenever I encounter something that goes against them (probably a subconscious response from losing many homework points back in the day). I’ve realized more and more how hard it is to uphold that standard in the workforce, whether under time constraints or out of laziness while debugging, but also how much going against it deeply demotivates me. So I’ve made a new commitment to write code I’m actually satisfied with, no matter how long it takes.
Along those lines, I wanted to reflect on my style of taking on projects. I pride myself on being creative enough to come up with new projects, but I have a tendency to get caught up in the initial ideation. As a result, a lot of projects fall by the wayside because I never slow down: it feels like there is no time to commit to one thing and work through all the grueling “un-fun” parts, because before I can, the next fun idea comes to mind and it is more satisfying to think about that than to push through the rest. This is a problem I’ve been pretty conscious of over the last year especially, and I’ve made quite a bit of progress on it (part of the problem is also a desire to reach some perfect end state with every project I take on). It is also something I want to focus on in the upcoming months before I have to shift toward PhD apps. I’ve made some action items to stay true to it (this blog is related to quite a few of them) and I’ll be sure to update the blog with my progress throughout the summer as well.
Now, we can talk about some stuff I have going on and some other exciting things I’ve done recently. On the package-building front, now that my classes at NYU are over I have a bit more time to delve into the projects and ideas I’ve kept on the back burner for so long. A lot of these, like I’ve alluded to before, involve building packages or just writing code around different econ concepts I’ve been introduced to since starting at the Fed, such as structural spatial models or, what I intend to work on first, OLG and job search models. These two classes of models were the main points of discussion in Macro I at NYU, and I find that I often grasp the material best when I can translate the theory into code. I don’t want to spend an incredible amount of time on this, so the plan is to create a package-like environment: arbitrary models of both types plus some supporting analysis code that produces useful insights. I’m talking at a high level right now because I still need to pin down the specifics, but I have begun coding the OLG package and have already encountered an interesting problem.
Basically, I want my codebase to work with OLG models that don’t have obvious mathematical solutions. I don’t want it to be a calculator that just spits out the solutions to problems I already know how to solve. In trying to make it more generalizable, though, I’ve run into a problem that comes from staying true to an environment where individual agents actually interact with each other: I need a mechanism or routine in the code by which agents undergo this interaction that is also mathematically equivalent to the cases I already know how to solve analytically. My initial reaction was to consider the setup of a simple OLG model with only households who want to trade consumption bonds (state contingent or not, it doesn’t really matter for this discussion). In this case, agents who want to buy consumption bonds make a bid (the highest price they are willing to buy at) and agents who want to sell make an ask (the lowest price they are willing to sell at). From this point, there are several paradigms we can adopt to do the matching, which is also a feature the codebase can expose for users to choose.
Before I get into a few of the matching algorithms, let me establish some language: say there are \(n\) agents, bids \(\{x_1, x_2, \ldots, x_k\}\), and asks \(\{y_1, y_2, \ldots, y_j\}\). A match between any bid \(x\) and ask \(y\) is only possible when \(y < x\), i.e., when the highest price the buyer is willing to pay exceeds the lowest price the seller is willing to accept. Here are a few methods I thought of; each has pros and cons that I’ve only briefly considered, and market design people would probably know a lot more than me. Keep in mind that when beliefs about the state-space probabilities are homogeneous across agents and/or bond prices are not state contingent, all of these algorithms should yield the same result, since everyone believes the prices of bonds to be the same.
- Most “Consumer Surplus”: Ultimately we still have to make decisions about matches between individuals whose bid and ask do not perfectly align. Do we defer to the seller, to the buyer, or to some convex combination of their two offers? Once we decide on a paradigm for setting the final price, we can define consumer surplus as \(\sum_i (x_i - p_i)\) over all matches made, indexed by \(i\), where the price is \(p_i = f(x_i, y_i)\) for some pricing function \(f\). Maximizing consumer surplus is then a matter of reconciling this function \(f\) with some matching algorithm. For example, if \(f\) simply defers to the ask price, then it is optimal to match the highest bids with the lowest asks (at least, that’s what working out a few examples suggests). There’s a rough sketch of this after the list.
- Most “Producer Surplus”: The same as above, but we look at the maximization problem from the seller’s perspective, i.e., maximize \(\sum_i (p_i - y_i)\).
- Lottery: A lottery system is simple; at its core, it is just a random matching between any feasible bid-ask pair. That being said, there are multiple ways to enact this procedure in practice.
  - We can index each of the buyers and sellers, uniformly at random pick one from each group, and make the match if their offers are compatible (\(y < x\)).
  - I can imagine this method creating a bit more deadweight loss than the others described here, because more people might end up unmatched.
- Fixed Market Price: One other method is to have a single function that takes in all the bids and asks, \(f(x_1, \ldots, x_k, y_1, \ldots, y_j)\), and spits out one price. Then everyone trades the bond at that price.
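Since I’m still pinning down the specifics of the package, here is only a minimal sketch (in Python) of what these matching routines might look like. The function names, the choice of deferring to the ask price when a match is made, the shuffle-and-zip version of the lottery, and the midpoint rule for the fixed market price are all placeholder assumptions of mine for illustration, not settled design decisions.

```python
import random


def surplus_matching(bids, asks):
    """Greedily pair the highest bids with the lowest asks.

    Placeholder pricing rule: the price defers to the ask (p_i = y_i),
    so total consumer surplus is the sum of (bid - ask) over all matches.
    """
    matches = []
    for bid, ask in zip(sorted(bids, reverse=True), sorted(asks)):
        if ask < bid:                        # feasibility: y < x
            matches.append((bid, ask, ask))  # (bid, ask, price paid)
    return matches


def lottery_matching(bids, asks, rng=None):
    """One way to do the random pairing: shuffle both sides, zip, keep feasible pairs."""
    rng = rng or random.Random()
    bids, asks = list(bids), list(asks)
    rng.shuffle(bids)
    rng.shuffle(asks)
    return [(bid, ask, ask) for bid, ask in zip(bids, asks) if ask < bid]


def fixed_market_price(bids, asks):
    """One price for everyone; here, the midpoint of the mean bid and mean ask."""
    price = 0.5 * (sum(bids) / len(bids) + sum(asks) / len(asks))
    buyers = [x for x in bids if x >= price]   # still willing to buy at price
    sellers = [y for y in asks if y <= price]  # still willing to sell at price
    trades = min(len(buyers), len(sellers))    # short side of the market trades
    return price, trades


if __name__ == "__main__":
    bids = [1.10, 1.05, 0.98, 0.90]
    asks = [0.95, 1.00, 1.02, 1.20]
    print(surplus_matching(bids, asks))                   # two feasible matches
    print(lottery_matching(bids, asks, random.Random(0)))
    print(fixed_market_price(bids, asks))
```

The greedy sort in `surplus_matching` is the “match the highest bids with the lowest asks” idea from the first bullet; swapping out the pricing rule inside it is exactly where the choice of \(f\) would enter, and that is the knob I imagine exposing to users.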
Ultimately, I think this translation of theory into implementation speaks to the complications of taking a model to the real world, and to how we need to pull from multiple disciplines (both vanilla macro and market design in this case) to do it.
Finally, in the theme of stuff that Sargent/Ljungqvist have taught me: three weeks back I went to the Carnegie-Rochester-NYU Conference on Public Policy (a few CMU professors that I’m close with also showed up, which made it an interesting experience to attend a conference with past professors, the first time I have been on the same side of the lecture hall as them), and Ljungqvist presented a paper by him, Sargent, Holter, and Stepanchuk. Both the paper and Ljungqvist’s presentation were wholly inspirational to me: they gave me a clear picture of what the ideal paper I want to write looks like and where I want to get in terms of presenting my work. It would be a macro paper where I motivate the question with several key quotes/papers from the literature and then apply a new framework to a pervasive problem in society like taxes or social security, as the paper above did. Presentation-wise, you could really tell the difference between someone as experienced and as passionate about what they had created as Ljungqvist versus some of the younger economists earlier in their careers.
Anyways, I forgot to mention this last week, but now that I don’t have any more classes, posts will be weekly. They may not all be in-depth quantitative looks into some topic, because those take a while to work out, but I will always talk about something (a few rants about food coming up as well).