History of Logic
 Aristotle's Modal Syllogistic
 I am trying to find adequate (probably relational) semantics for the apodeictic (or necessity) fragment of Aristotle's modal syllogistic. My main inspirations have been the fog surrounding Aristotle's modal syllogistic and Justin Vlasits' so-called divisional semantics for Aristotle's assertoric syllogistic (unpublished). My task is to extend Vlasits' semantics to the apodeictic fragment of Aristotle's modal syllogistic and then to prove its adequacy with respect to McCall's axiomatization of the modal syllogistic.
This is what I have so far.
Among contemporary Aristotle scholars specializing in his logic, I find Marko Malink's work (2006, 2013) especially inspiring. A review and critique of Malink's semantical framework (as well as of two others) can be found in a nice 2010 article by Sara Uckelman & Spencer Johnston, which appeared in Volume 8 of the prestigious AiML series.
 The Algebraic Tradition

I am interested in the so-called 'algebraic tradition' in logic going back to the work of Leibniz, Boole, De Morgan, Peirce, Schröder and others. There are several projects here that I would like to explore in the next few years. One is the treatment of traditional (or Aristotelian) logic by Leibniz, Boole, De Morgan and others. The problem of the classification and comparison of Leibniz's many logics and those of Boole from his two treatises seems important. Specifically, it would be nice to have a precise characterization of the relative expressiveness of Leibniz's and Boole's logics. Is it true, as Wolfgang Lenzen has suggested, that there is nothing in Boole's logic that wasn't already in Leibniz's? No less interesting is the question of the relative strengths and weaknesses of the Peirce-Schröder logics and Frege's Begriffsschrift system.
Currently I'm reading a very interesting book by Calixto Badesa called The Birth of Model Theory (here is a helpful review of it by Jeremy Avigad). I haven't gotten very far into it, but reading about the systems of Boole, Peirce, and Schröder has prompted me to start exploring the primary sources and C.I. Lewis' wonderful A Survey of Symbolic Logic. The latter is a bit outdated, but it still helps in understanding the various systems described in Badesa's book. The high point of the book will be the treatment of Löwenheim's celebrated 1915 result.
Carnap
 Modal Logic

Although I am interested in all aspects of Carnap's work, currently I am preoccupied with his work on modality, specifically with his approach to first-order modal logic (1946). Carnap's system roughly corresponds to Lewis' S5, but there are some important differences. The first question it would be nice to answer is what those differences are exactly: where in the zoo of modal logics does Carnap's belong? Are there any differences between the 1946 and Meaning & Necessity systems, and if so, what are they? It is well-known that Barcan independently invented first-order modal logic. It would not be uninteresting to compare the logics of Carnap and Barcan, as well as their methodologies.
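For reference, the propositional core of S5 against which Carnap's system is usually compared can be axiomatized as follows (a standard presentation, not Carnap's own):

```latex
% Propositional S5: classical logic plus K, T, and 5, closed under
% modus ponens and necessitation.
\begin{align*}
\text{(K)} &\quad \Box(\varphi \to \psi) \to (\Box\varphi \to \Box\psi) \\
\text{(T)} &\quad \Box\varphi \to \varphi \\
\text{(5)} &\quad \Diamond\varphi \to \Box\Diamond\varphi
\end{align*}
```

One often-cited difference worth chasing down: since Carnap reads □φ as "φ is L-true", his 1946 system validates formulas such as ◇p for atomic p, and so, unlike S5, it is not closed under uniform substitution.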
 BarHillel's Bible

In my current reading list the second place is given to Carnap's Logical Syntax of Language (I had to give priority to Poincaré's Science and Hypothesis). I was initially interested in Carnap's methodology of explication, so naturally I started reading the book for the bits on tolerance and on the permissibility of certain mechanisms in logics. But last year I got my hands on Wagner 2009 and since then have been trying to get through LSL. Wagner's introduction to the volume was helpful, but Carnap's notation has turned the reading of LSL into a deciphering process. Over the last few months I've been trying to come up with a coherent way of de-Frakturing Carnap's formulas; I have decided to use colors to distinguish the levels instead of typefaces. While I haven't found my translations useful, in the process of coming up with them I have become well-versed in Carnap's notation.
Currently I am struggling with the General Syntax part, but once I get through that, I will be done with my first reading of LSL, because I have read the last part twice before. Carnap does say in an appendix to his Introduction to Semantics that many of the things he defined syntactically in the General Syntax part would now need to be given semantical explications (the so-called L-concepts such as L-truth and so on). As you know, Bar-Hillel was obsessed with LSL (he would carry it around with him all the time, so his students started calling it 'Bar-Hillel's Bible'). I like Bar-Hillel, and he thought the world of the General Syntax part, so I use that as extra motivation to take Part IV very seriously.
Going back to the notation issue, I have tried to translate Carnap's recursive definitions into BNF, and largely it has gone smoothly. But there is one problem: Carnap has recursively defined syntactic categories, which I have translated into Extended BNF, but I'm not sure whether I've done it correctly. When I make my notes available here, you can check whether they make sense (comments will be much appreciated).
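To illustrate the kind of translation I have in mind, here is a toy example (my own invention, not one of Carnap's actual clauses): a recursive category like 'sentence' rendered in EBNF, with a small recursive function checking membership in the category.

```python
# Toy EBNF for a hypothetical recursive category (NOT Carnap's actual definitions):
#   sentence = atom | "neg", sentence | sentence, "and", sentence ;
#   atom     = "p" | "q" | "r" ;
# Sentences are represented as nested tuples: ("neg", s), ("and", s1, s2), or an atom.

ATOMS = {"p", "q", "r"}

def is_sentence(s):
    """Recursively check membership in the toy 'sentence' category."""
    if isinstance(s, str):
        return s in ATOMS
    if isinstance(s, tuple):
        if len(s) == 2 and s[0] == "neg":
            return is_sentence(s[1])
        if len(s) == 3 and s[0] == "and":
            return is_sentence(s[1]) and is_sentence(s[2])
    return False

print(is_sentence(("neg", ("and", "p", "q"))))  # True
print(is_sentence(("and", "p")))                # False
```

The point of the exercise is that each recursive clause of the definition becomes one alternative of the EBNF production, and one branch of the checker.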
 Bibliography
 I'm in the process of extending Arthur Benson's 1960 bibliography of Carnap's works (it can be found in the back of Carnap's Schilpp volume). I have already populated a (MySQL) database with all the information, and now I'm trying to figure out a neat way of presenting the data. Benson gave the entries in his bibliography unique IDs, which I have preserved (and called 'Benson Numbers'). Entries past 1961 don't have Benson Numbers, but they have Extended Benson Numbers. To be safe, I have assigned the first post-Benson entry I have been able to find for a given year (say 1962) the EBN 1962–5, the second 1962–10, and in general the nth entry of that year the EBN 1962–[5 × n]. The reason for this is that there might be a few items missing from my list that would need to be added without altering the ordering.
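The numbering scheme is easy to state programmatically; here is a minimal sketch (the function name is mine, not anything in the database):

```python
def extended_benson_number(year, n):
    """EBN for the nth known entry of a post-Benson year.

    The gaps of 5 leave room for later-discovered items to be slotted
    in between existing entries without renumbering anything.
    """
    return f"{year}–{5 * n}"

print(extended_benson_number(1962, 1))  # 1962–5
print(extended_benson_number(1962, 2))  # 1962–10
```

An item discovered later between the first and second 1962 entries could then receive, say, 1962–7, preserving the ordering of everything already assigned.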
Informatics
 Logic & Programming languages
 I learned how to program before I learned how to prove, so when I started studying logic four years ago I got interested in the question of what exactly the difference is between logics and programming languages. At the time I was taking a class on the structure and interpretation of computer programs with Brian Harvey, so when I asked him that question he told me about this thing called lambda calculus, which he said was the mathematical basis of LISP (the programming language of instruction in those pre-Python days was Scheme, a dialect of LISP). I studied some lambda calculus using Michael Gordon's very lucid Programming Language Theory and Its Implementation. Studying lambda calculus didn't really help answer the question, but in that book I came across a formal system called Hoare Logic, which I thought would help me answer it. I went through some of Hoare's papers ("An axiomatic basis for computer programming", "Programs are predicates", "The mathematics of programming") that had very promising titles, but I still couldn't see the exact connection between logics and programming languages.
For a while I stopped exploring the question, but toward the end of a modal logic course with Wesley Holliday in the Fall of 2012, when I heard of propositional dynamic logic, my interest in the question was rekindled. PDL itself provided a partial answer: it makes very clear what sorts of features are sufficient for turning a modal logic into a logic of programs. Only recently, however, have I learned of the Curry-Howard correspondence, according to which programs and proofs are two sides of the same coin. There is a lot of fascinating stuff going on there, especially in connection with intuitionistic logic. Philip Wadler has a couple of helpful articles (e.g. "Proofs are Programs") where he goes through the basics of the Curry-Howard correspondence, demonstrating the intimate connection between Gentzen's natural deduction system and Church's typed lambda calculus.
In the Spring of 2012 Per Martin-Löf gave the prestigious twenty-fourth Alfred Tarski Lectures at UC Berkeley. I was inspired by the clarity of his exposition (my notes from the lectures should be accessible from here soon), but not knowing much about intuitionistic logic I was pretty lost. The weekend after his lectures, in the same volume where I had found Hoare's "Programs are predicates", I found an article by Martin-Löf called "Constructive mathematics and computer programming". It was just as clear as his Tarski Lectures! I learned from that article of the importance of intuitionistic logic. The reason the connection between programming languages and logics is not obvious, according to Martin-Löf, is the implicit commitment to classical logic. In other words, it's hard to see the connection between classical logic and programming languages, but it's not difficult to see the connection between intuitionistic logic and programming languages. There is even a book that teaches how to program in what's called Martin-Löf's Type Theory. That system is now at the heart of a very exciting research project in the foundations of mathematics called Homotopy Type Theory and Univalent Foundations (check out: The HoTT Book). Here are some very helpful resources (click here for more):
► An nLab article on Martin-Löf's Type Theory.
► Nordström, Petersson, Smith (1990) Programming in Martin-Löf's Type Theory.
► Awodey (2010) Constructive Type Theory and Homotopy (Video).
► Voevodsky (2010) Univalent Foundations of Mathematics (Video).
► Awodey (2013) Structuralism, invariance, and univalence.
► Awodey (2010) Type theory and homotopy.
 Computer Viruses & Cellular Automata