Birational rowmotion on a rectangle over a noncommutative ring

We extend the periodicity of birational rowmotion for rectangular posets to the case when the base field is replaced by a noncommutative ring (under appropriate conditions). This resolves a conjecture from 2014. The proof uses a novel approach and is fully self-contained. Consider labellings of a finite poset $P$ by $\left|P\right| + 2$ elements of a ring $\mathbb{K}$: one label associated with each poset element and two constant labels for the added top and bottom elements in $\hat{P}$. *Birational rowmotion* is a partial map on such labellings. It was originally defined by Einstein and Propp for $\mathbb{K}=\mathbb{R}$ as a lifting (via detropicalization) of *piecewise-linear rowmotion*, a map on the order polytope $\mathcal{O}(P) := \{\text{order-preserving } f: P \to[0,1]\}$. The latter, in turn, extends the well-studied rowmotion map on the set of order ideals (or more properly, the set of order filters) of $P$, which correspond to the vertices of $\mathcal{O}(P)$. Dynamical properties of these combinatorial maps sometimes (but not always) extend to the birational level, while results proven at the birational level always imply their combinatorial counterparts. Allowing $\mathbb{K}$ to be noncommutative, we generalize the birational level even further, and some properties are in fact lost at this step. In 2014, the authors gave the first proof of periodicity for birational rowmotion on rectangular posets (when $P$ is a product of two chains) for $\mathbb{K}$ a field, and conjectured that it survives (in an appropriately twisted form) in the noncommutative case. In this paper, we prove this noncommutative periodicity and a concomitant antipodal reciprocity formula. We end with some conjectures about periodicity for other posets, and the question of whether our results can be extended to (noncommutative) semirings.


Introduction
The goal of this paper is to extend the periodicity of birational rowmotion for rectangular posets to the case when the base field is replaced by a noncommutative ring (under appropriate conditions). This resolves a conjecture from 2014. The proof uses a novel approach (even in the commutative case) and is fully self-contained.
Let P be a finite poset, and let $\hat{P}$ be the same poset with two extra elements added: one global minimum (denoted 0) and one global maximum (denoted 1). For the time being, let K be a field. A K-labeling of P means a map from $\hat{P}$ to K; we view it as a way of labeling each element of $\hat{P}$ by an element of K. Birational rowmotion, as studied conventionally, is a rational map R on such labelings (i.e., a rational map $R : \mathbb{K}^{\hat{P}} \dashrightarrow \mathbb{K}^{\hat{P}}$). It was introduced by Einstein and Propp [EinPro13] for K = R, generalizing (via the tropical limit 1 ) the well-studied combinatorial rowmotion map on order ideals of P [BrSchr74, StWi11, ProRob13, ThoWil19].
Birational rowmotion can be defined as a composition of "toggles": For each $v \in P$, we define the v-toggle as the rational map $T_v : \mathbb{K}^{\hat{P}} \dashrightarrow \mathbb{K}^{\hat{P}}$ that modifies a K-labeling f by changing the label f(v) to 2

$$\Big(\sum_{u \in \hat{P};\, u \lessdot v} f(u)\Big) \cdot f(v)^{-1} \cdot \Big(\sum_{u \in \hat{P};\, u \gtrdot v} f(u)^{-1}\Big)^{-1},$$

1 See [Kirill00, Section 4.1] for what we mean by the "tropical limit" here, and [KirBer95] for one of the earliest examples of detropicalization (i.e., the generalization of a combinatorial map to a rational one). 2 The notations ⋖ and ⋗ mean "covered by" and "covers", respectively (see Sections 1 and 3 for details).
while leaving all the other labels of f unchanged. Now, birational rowmotion R is the composition of all the v-toggles, where v runs over the poset P from top to bottom. (That is, we pick a linear extension $(v_1, v_2, \ldots, v_n)$ of P, and set $R = T_{v_1} \circ T_{v_2} \circ \cdots \circ T_{v_n}$.) Dynamical properties at the combinatorial level sometimes extend to higher levels, while results proven at the birational level always imply their combinatorial counterparts. In particular, while combinatorial rowmotion always has finite order (since it is an invertible map on a finite set), there is no reason to expect periodicity at all at the higher levels. Indeed, for many nice posets, birational rowmotion has infinite order, including for the Boolean algebra of order 3 (or those in [Roby15, Fig. 6]), and there are only a few infinite classes where it appears to have finite order (mostly posets associated with representation theory, e.g., root or minuscule posets). In these cases the order of birational rowmotion is generally the same as for combinatorial rowmotion, e.g., p + q for P = [p] × [q].
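To make this concrete, here is a small self-contained sketch (our own illustration, not code from the paper): the commutative toggles on the rectangle [2] × [3] over Q, with labels stored as exact rationals. Iterating rowmotion p + q = 5 times returns the initial labeling, matching the order just mentioned.

```python
from fractions import Fraction
import random

p, q = 2, 3
P = [(i, j) for i in range(1, p + 1) for j in range(1, q + 1)]

def up(v):    # elements of the extended poset covering v (1 = added top)
    i, j = v
    ups = [u for u in [(i + 1, j), (i, j + 1)] if u in P]
    return ups or [1]

def down(v):  # elements covered by v (0 = added bottom)
    i, j = v
    downs = [u for u in [(i - 1, j), (i, j - 1)] if u in P]
    return downs or [0]

def toggle(f, v):
    # commutative toggle: (sum below) / (f(v) * sum of reciprocals above)
    g = dict(f)
    g[v] = sum(f[u] for u in down(v)) / (f[v] * sum(1 / f[u] for u in up(v)))
    return g

def rowmotion(f):
    # toggle from top to bottom; any linear extension yields the same map
    for v in sorted(P, key=lambda v: -(v[0] + v[1])):
        f = toggle(f, v)
    return f

random.seed(0)
f = {v: Fraction(random.randint(1, 9)) for v in P}   # positive => no division by zero
f[0] = Fraction(random.randint(1, 9))                # label at the added bottom
f[1] = Fraction(random.randint(1, 9))                # label at the added top

g = f
for _ in range(p + q):
    g = rowmotion(g)
assert g == f   # period p + q on the rectangle (commutative case)
```

Choosing all labels positive guarantees that every denominator is nonzero, so the partial map never hits an undefined value in this run.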
In 2014, the authors gave the first proof of periodicity of birational rowmotion for rectangular posets (i.e., when P is a product of two chains) and K a field [GriRob14]. The main idea of this proof was to embed the space of labelings into an appropriate Grassmannian (where in each "sufficiently generic" K-labeling, the labels can be expressed as ratios of certain minors of a matrix) and use particular Plücker relations to derive the result. There were several serious technical hurdles to overcome.
The definition of birational rowmotion relies entirely on addition, multiplication and inverses in K. Thus, it is natural to extend it to the case when K is a ring (not necessarily commutative), or even just a semiring. (At this level, birational rowmotion is no longer a rational map, just a partial map.) However, there is no guarantee that the properties of birational rowmotion survive at this level for every poset; and indeed, sometimes they do not (see, e.g., Example 13.9). However, in 2014, the authors experimentally observed that the periodicity for rectangular posets appears to hold even in this noncommutative setting, as long as it is appropriately modified: After p + q iterations of birational rowmotion, the labels are not returned to their original states, but rather to certain "twisted variants" thereof (resembling, but not the same as, conjugates). See Example 3.17 to get the sense of this.
Strikingly, this noncommutative generalization has resisted all approaches that have previously succeeded in the commutative case. The determinantal computations involved in the proof in [GriRob14] can be extended to the noncommutative setting using the quasideterminants of Gelfand and Retakh, but it seems impossible to make a rigorous proof out of it (lacking, e.g., any useful notion of Zariski topology in this setting, it is not clear what it means for a K-labeling to be "generic"). The alternative proof of commutative periodicity found by Musiker and Roby [MusRob17] (via a lattice-path formula for iterates of birational rowmotion) could not be generalized either. Thus the noncommutative case remained an open problem. 3 At some point, Glick and Grinberg noticed that the Y-variables in the type-AA Zamolodchikov periodicity theorem of Volkov [Volk06] could be written as ratios of labels under iterated birational rowmotion [Roby15, § 4.4]; this allows the periodicity in one setting to be derived from that in the other (with some work). However, for noncommutative K, Zamolodchikov periodicity fails even in small examples such as r = r′ = 2 (no matter in which order we multiply the factors), while noncommutative birational rowmotion continues to exhibit periodicity. This approach is therefore unavailable in the noncommutative case as well.
In this paper, we prove the periodicity of birational rowmotion and a concomitant antipodal reciprocity formula over an arbitrary noncommutative ring. The proof proceeds from first principles, by studying certain values $A^v_\ell$ (and a companion family of such values) and their products along paths in the rectangle. At the core of the proof is a "conversion lemma" (Lemma 9.2), which provides an identity between a certain sum of products of the one family and a certain sum of products of the other family for the same ℓ; this equality does not actually depend on the concept of rowmotion and might be of interest on its own. Another important step is the reduction of the reciprocity claim to the labels on the "lower boundary" of the rectangle (i.e., to the labels at the elements of the form (i, 1) and (1, j)). This reduction requires subtraction, which is why we are only addressing the case of a ring, not of a semiring; the latter remains open.
A few words are in order about the relation between our birational rowmotion and a parallel construction. Combinatorial rowmotion seems first to have been defined not on the set J(P) of order ideals of P, but rather on the set A(P) of antichains of P [BrSchr74]. The standard bijection between J(P) and A(P) (by taking maximal elements of I ∈ J(P) or saturating down from an antichain) makes it easy to go between the two maps and to see that they have the same periodicity. However, some dynamical properties (e.g., homomesy) that depend on the sets themselves are not so easily translated. Just as Einstein and Propp lifted combinatorial rowmotion on J(P) to a birational map and we continued to the noncommutative context, Joseph and Roby did a parallel lifting on the antichain side: from antichain rowmotion to piecewise-linear rowmotion on the chain polytope C(P), to birational antichain rowmotion, and finally to noncommutative antichain rowmotion [JosRob20, JosRob21]. In particular they lifted "transfer maps" (originally defined by Stanley to go between O(P) and C(P) [Stan86]) from the piecewise-linear to the birational and noncommutative realms. These serve as equivariant bijections at each level, thus showing that periodicity at each level is equivalent for the order-ideal and antichain liftings. But they were unable to find a new proof of periodicity for the piecewise-linear and higher levels, relying instead on the periodicity results for birational order-ideal rowmotion to deduce it for birational antichain rowmotion. They also lifted a useful invariant, the Stanley-Thomas word, which cyclically rotates with antichain rowmotion at each level. At the combinatorial level, this gives an equivariant bijection that proves periodicity [ProRob13, § 3.3.2]; however, it is no longer a bijection at the higher levels. Moreover, some identities hold in all skew fields yet fail in some noncommutative rings (such as the identity $x \cdot (yx)^{-1} \cdot y = 1$, valid whenever $yx$ is invertible in a skew field).
For this reason, while natural from an algebraic point of view, the noncommutative setting is only recently and slowly getting explored.
Our paper completes the story in the case of a ring: Via the transfer maps mentioned above, the periodicity of noncommutative birational order-ideal rowmotion entails the periodicity of noncommutative birational antichain rowmotion.
The paper is structured in a fairly straightforward way: In the first sections (Sections 1 to 3), we introduce our noncommutative setup and define birational rowmotion in it. These include technicalities about partial maps and the definition of noncommutative toggles. In Section 4, we state our main results. In the sections that follow, we build an arsenal of lemmas to prove these results; the proofs are completed in Section 11. (The structure of the proof is outlined at the end of Section 4.) In Sections 12 and 13, we discuss avenues for further work: a possible generalization to semirings and conjectured periodicity claims for other posets. In the final Section 14, we apply our techniques to arbitrary posets (not just rectangles), obtaining two identities.
A 12-page survey of the results of this paper (with the main steps of the proof outlined) can be found in the extended abstract [GriRob23].

Remark on the level of detail
This paper comes in two versions: a regular one and a more detailed one. The regular version is optimized for readability, leaving out the more straightforward parts and technical arguments. The more detailed version has many of them expanded. This is the regular version of the paper. The more detailed one can be obtained by replacing \excludecomment{verlong} \includecomment{vershort} by \includecomment{verlong} \excludecomment{vershort} in the preamble of the LaTeX source code and then compiling to PDF. It is also available as an ancillary file on the arXiv page of this paper.

Acknowledgments
We are greatly indebted to the Mathematisches Forschungsinstitut Oberwolfach, which hosted us for three weeks during Summer 2021. Much of this paper was conceived during that stay. We thank Gerhard Huisken and Andrea Schillinger in particular for their flexibility in the scheduling of the visit.
We are also grateful to Banff International Research Station for hosting a hybrid workshop on dynamical algebraic combinatorics in November 2021 where these results were first presented.
We further acknowledge our appreciation of Michael Joseph, Tim Campion, Max Glick, Maxim Kontsevich, Gregg Musiker, Pace Nielsen, James Propp, Pasha Pylyavskyy, Bruce Sagan, Roland Speicher, David Speyer, Hugh Thomas, and Jurij Volcic, for useful advice and conversations. We thank two referees for helpful corrections and advice.
Computations using the SageMath computer algebra system [S + 09] provided essential data for us to conjecture some of the results.

Linear extensions of posets
This section collects a few standard notions concerning posets and their linear extensions, needed to define the main characters of our paper. Readers familiar with the subject may wish to skip forward to Section 2 or Section 3. We start by defining general notations identical with those in [GriRob14], to which we refer the reader for commentary and comparison to other references.
Definition 1.2. Let P be a poset, and u, v ∈ P .
(a) We will use the symbols $\leq$, $<$, $\geq$, and $>$ to denote the lesser-or-equal relation, the lesser relation, the greater-or-equal relation and the greater relation, respectively, of the poset P. (Thus, for example, "u < v" means "u is smaller than v with respect to the partial order on P".) The elements u and v of P are said to be incomparable if we have neither $u \leq v$ nor $u \geq v$.
(c) We write u ⋖ v if we have u < v and there is no w ∈ P such that u < w < v. One often says that "u is covered by v" to signify that u ⋖ v.
(d) We write u ⋗ v if we have u > v and there is no w ∈ P such that u > w > v.
(Thus, u ⋗ v holds if and only if v ⋖ u.) One often says that "u covers v" to signify that u ⋗ v.
(e) An element u of P is called maximal if every w ∈ P satisfying $w \geq u$ satisfies w = u. In other words, an element u of P is called maximal if there is no w ∈ P such that w > u.
(f) An element u of P is called minimal if every w ∈ P satisfying $w \leq u$ satisfies w = u. In other words, an element u of P is called minimal if there is no w ∈ P such that w < u.
These notations may become ambiguous when an element belongs to several different posets simultaneously. In such cases, we will disambiguate them by adding the words "in P " (where P is the poset which we want to use). 4 Convention 1.3. From now on, for the rest of the paper, we fix a finite poset P . Most of our results will concern the case when P has a rather specific form (viz., a rectangular poset, i.e., a Cartesian product of two finite chains), but we do not assume this straightaway.
Definition 1.4. A linear extension of P will mean a list $(v_1, v_2, \ldots, v_m)$ of the elements of P such that • each element of P occurs exactly once in this list, and • any $i, j \in \{1, 2, \ldots, m\}$ satisfying $v_i < v_j$ in P satisfy $i < j$. A linear extension of P is also known as a topological sorting of P. We will use the following well-known fact: Theorem 1.5. There exists a linear extension of P.
Definition 1.6. The set of all linear extensions of P will be called L (P ). Thus, L (P ) = ∅ (by Theorem 1.5).
The reader can easily verify the following proposition: Proposition 1.7. Let $(v_1, v_2, \ldots, v_m)$ be a linear extension of P, and let $i \in \{1, 2, \ldots, m-1\}$ be such that the elements $v_i$ and $v_{i+1}$ of P are incomparable. Then, $(v_1, v_2, \ldots, v_{i-1}, v_{i+1}, v_i, v_{i+2}, \ldots, v_m)$ (this is the tuple obtained from the tuple $(v_1, v_2, \ldots, v_m)$ by interchanging the adjacent entries $v_i$ and $v_{i+1}$) is a linear extension of P as well.
We will also use the following folklore result: 5 Proposition 1.8. Let ∼ denote the equivalence relation on L(P) generated by the following requirement: For any linear extension $(v_1, v_2, \ldots, v_m)$ of P and any $i \in \{1, 2, \ldots, m-1\}$ such that the elements $v_i$ and $v_{i+1}$ of P are incomparable, we set $(v_1, v_2, \ldots, v_m) \sim (v_1, v_2, \ldots, v_{i-1}, v_{i+1}, v_i, v_{i+2}, \ldots, v_m)$. Then, any two elements of L(P) are equivalent under the relation ∼. [Etienn84] and [Gyoja86] define linear extensions of P as bijections β : {1, 2, . . . , n} → P (where n = |P|) whose inverse map $β^{-1}$ is order-preserving. This is equivalent to our definition (indeed, if β : {1, 2, . . . , n} → P is a linear extension of P in their sense, then the list (β(1), β(2), . . . , β(n)) is a linear extension of P in our sense).
(In other words, Proposition 1.8 says that there is always a way to transform a given linear extension into another by successively swapping adjacent incomparable entries.) Another well-known fact says that any nonempty finite poset has a minimal element and a maximal element. In other words: Proposition 1.9. Assume that P ≠ ∅. Then: (a) The poset P has a minimal element.
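Proposition 1.8 is easy to confirm computationally for small posets. The following sketch (our own illustration, with a small hand-picked poset) lists all linear extensions and checks that the graph whose edges are adjacent swaps of incomparable entries is connected:

```python
from itertools import permutations

# A small poset on {1,2,3,4} given by its strict order relation:
# 1 < 3, 1 < 4, 2 < 4 (so 1,2 and 2,3 and 3,4 are incomparable pairs).
less = {(1, 3), (1, 4), (2, 4)}

def is_linext(seq):
    # no later entry may be smaller than an earlier one
    return all((b, a) not in less
               for i, a in enumerate(seq) for b in seq[i + 1:])

exts = [seq for seq in permutations([1, 2, 3, 4]) if is_linext(seq)]

def neighbors(seq):
    # all linear extensions obtained by one adjacent incomparable swap
    for i in range(len(seq) - 1):
        a, b = seq[i], seq[i + 1]
        if (a, b) not in less and (b, a) not in less:   # incomparable
            yield seq[:i] + (b, a) + seq[i + 2:]

# breadth-less graph search from one linear extension
seen = {exts[0]}
stack = [exts[0]]
while stack:
    for nb in neighbors(stack.pop()):
        if nb not in seen:
            seen.add(nb)
            stack.append(nb)

assert seen == set(exts)   # any two linear extensions are connected by swaps
```

This poset has five linear extensions, and all of them are reachable from any single one by such swaps, as Proposition 1.8 predicts.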
(b) The poset P has a maximal element.

Inverses in rings
Convention 2.1. From now on, for the rest of this paper, we fix a ring K. This ring is not required to be commutative, but must have a unity and be associative.
For example, K can be Z or Q or C or a polynomial ring or a matrix ring over any of these. In almost all previous work on birational rowmotion (with the exception of [JosRob20] and [JosRob21]), only commutative rings (and, occasionally, semirings) were considered; by removing the commutativity assumption, we are invalidating many of the methods used in prior research. We suspect that the level of generality can be increased even further, replacing our ring K by a semiring (i.e., a "ring without subtraction"); however, this poses new difficulties which we will not surmount in the present work. (See Section 12 for more about this.) Even as we do not assume our ring K to be a division ring, we will nevertheless take multiplicative inverses of elements of K on many occasions. These inverses do not always exist, but when they do exist, they are unique; thus, we introduce a notation for them: Definition 2.2. Let a be an element of K.
(a) An inverse of a means an element b ∈ K such that ab = ba = 1. This inverse is unique when it exists, and will be denoted by $\overline{a}$. (A more standard notation for it is $a^{-1}$, but we prefer the notation $\overline{a}$ since it helps keep our formulas short.) (b) We say that the element a of K is invertible if it has an inverse.
The following well-known properties of inverses will often be used without mention: Proposition 2.3. (a) If a is an invertible element of K, then its inverse $\overline{a}$ is invertible as well, and its inverse is $\overline{\overline{a}} = a$.
(b) If a and b are two invertible elements of K, then their product ab is invertible as well, and its inverse is $\overline{ab} = \overline{b} \cdot \overline{a}$.
(c) If $a_1, a_2, \ldots, a_m$ are several invertible elements of K, then their product $a_1 a_2 \cdots a_m$ is invertible as well, and its inverse is $\overline{a_1 a_2 \cdots a_m} = \overline{a_m} \cdot \overline{a_{m-1}} \cdots \overline{a_1}$.
The converse of Proposition 2.3 (b) does not necessarily hold: A product ab of two elements a and b of K can be invertible even when neither a nor b is.
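A standard illustration of this (our own; not necessarily the one the paper's footnote has in mind) lives in the ring of additive maps from integer sequences to themselves, where the two shift operators compose to the identity in one order but not in the other:

```python
# Additive maps from integer sequences (modeled as functions n -> value) to
# themselves form a noncommutative ring under pointwise addition and composition.
def lshift(s):            # (lshift s)(n) = s(n + 1)
    return lambda n: s(n + 1)

def rshift(s):            # (rshift s)(0) = 0, (rshift s)(n) = s(n - 1)
    return lambda n: 0 if n == 0 else s(n - 1)

s = lambda n: n * n + 1   # a sample sequence: 1, 2, 5, 10, ...

# lshift o rshift is the identity map, so this product is invertible (it is 1);
assert all(lshift(rshift(s))(n) == s(n) for n in range(10))
# but rshift o lshift kills the 0th entry, so it is not the identity.  Hence
# neither factor is invertible (if rshift had an inverse, lshift would equal it,
# forcing rshift o lshift = 1 as well).
assert rshift(lshift(s))(0) == 0 != s(0)
```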
The next property of inverses is less well-known: 8 Proposition 2.4. Let a and b be two elements of K such that a + b is invertible. Then: (a) We have $a \cdot \overline{a+b} \cdot b = b \cdot \overline{a+b} \cdot a$. (b) If both a and b are invertible, then $\overline{a} + \overline{b}$ is invertible as well and its inverse is $\overline{\overline{a} + \overline{b}} = a \cdot \overline{a+b} \cdot b$. Proof. (a) We have $a \cdot \overline{a+b} \cdot (a+b) = a = (a+b) \cdot \overline{a+b} \cdot a$, that is, $a \cdot \overline{a+b} \cdot a + a \cdot \overline{a+b} \cdot b = a \cdot \overline{a+b} \cdot a + b \cdot \overline{a+b} \cdot a$. Subtracting $a \cdot \overline{a+b} \cdot a$ from both sides of this equality, we obtain $a \cdot \overline{a+b} \cdot b = b \cdot \overline{a+b} \cdot a$. This proves Proposition 2.4 (a).
(b) Assume that both a and b are invertible. Set $x := \overline{a} + \overline{b}$ and $y := a \cdot \overline{a+b} \cdot b$. Then, $a \cdot x = 1 + a\overline{b} = (a+b) \cdot \overline{b}$, so that $x = \overline{a} \cdot (a+b) \cdot \overline{b}$, and therefore $x \cdot y = \overline{a} \cdot (a+b) \cdot \overline{b} \cdot b \cdot \overline{a+b} \cdot a = 1$ (where we have rewritten y as $b \cdot \overline{a+b} \cdot a$ using part (a)). A similar argument (starting with $b \cdot x = b\overline{a} + 1 = (a+b) \cdot \overline{a}$) shows that $y \cdot x = 1$, so that y is an inverse of x. Hence, x is invertible and its inverse is $\overline{x} = y$. This is precisely the claim of Proposition 2.4 (b).
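Both parts of Proposition 2.4 can be checked numerically in a concrete noncommutative ring, e.g. 2 × 2 matrices over Q. The sketch below (our own illustration; $\overline{a}$ is computed as the usual matrix inverse) verifies the identities on a pair of non-commuting matrices:

```python
from fractions import Fraction as Fr

# 2x2 matrices over Q, stored as nested tuples -- a noncommutative ring.
def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def add(A, B):
    return tuple(tuple(A[i][j] + B[i][j] for j in range(2)) for i in range(2))

def inv(A):
    (p, q), (r, s) = A
    det = p * s - q * r
    assert det != 0
    return ((s / det, -q / det), (-r / det, p / det))

a = ((Fr(2), Fr(1)), (Fr(1), Fr(1)))   # det = 1, invertible
b = ((Fr(1), Fr(3)), (Fr(0), Fr(2)))   # det = 2, invertible
assert mul(a, b) != mul(b, a)          # a and b do not commute

c = inv(add(a, b))                     # (a + b)^{-1}; here det(a+b) = 5

# Proposition 2.4 (a): a (a+b)^{-1} b = b (a+b)^{-1} a
assert mul(mul(a, c), b) == mul(mul(b, c), a)
# Proposition 2.4 (b): (a^{-1} + b^{-1})^{-1} = a (a+b)^{-1} b
assert inv(add(inv(a), inv(b))) == mul(mul(a, c), b)
```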

Noncommutative birational rowmotion
In this section, we introduce the basic objects whose nature we will investigate: labelings of a finite poset P by elements of a ring, and a partial map between them called "birational rowmotion". These labelings generalize the field-valued labelings studied in [GriRob14], which in turn generalize the piecewise-linear labelings of [EinPro13], which in turn generalize the order ideals of P . Many of the definitions that follow will imitate analogous definitions made (in somewhat lesser generality) in [GriRob14].
3.1. The extended poset $\hat{P}$

Definition 3.1. We define a poset $\hat{P}$ as follows: As a set, let $\hat{P}$ be the disjoint union of the set P with the two-element set {0, 1}. The smaller-or-equal relation $\leq$ on $\hat{P}$ will be given by $(a \leq b) \iff ((a \in P \text{ and } b \in P \text{ and } a \leq b \text{ in } P) \text{ or } a = 0 \text{ or } b = 1)$.
Here and in the following, we regard the canonical injection of the set P into the disjoint union $\hat{P}$ as an inclusion; thus, P becomes a subposet of $\hat{P}$.
Example 3.2. Let us represent posets by their Hasse diagrams. Then:

K-labelings
Let us now define the type of object on which our maps will act: Definition 3.3. A K-labeling of P will mean a map $f : \hat{P} \to \mathbb{K}$. Thus, $\mathbb{K}^{\hat{P}}$ is the set of all K-labelings of P. If f is a K-labeling of P and v is an element of $\hat{P}$, then f(v) will be called the label of f at v. Example 3.4. Let P be the poset {1, 2} × {1, 2} (a Cartesian product of two chains, ordered entrywise). This poset will later be called the "2 × 2-rectangle" in Definition 4.2. (We omit the Hasse diagrams of P and of the extended poset $\hat{P}$ here.) We recall that a K-labeling of P is a map $f : \hat{P} \to \mathbb{K}$. We can visualize such a K-labeling by replacing, in the Hasse diagram of $\hat{P}$, each element $v \in \hat{P}$ by the label f(v). For example, the Z-labeling of P that sends 0, (1, 1), (1, 2), (2, 1), (2, 2), and 1 to 12, 5, 7, −2, 10, and 14, respectively, can be visualized in this way; for example, its label at (1, 2) is 7.

Partial maps
We will next define the notion of a partial map, to formalize the idea of an operation whose result may be undefined, such as division on Q (since division by zero is undefined). We will use ⊥ as a symbol for such undefined values: Convention 3.5. We fix an object called ⊥. In the following, we tacitly assume that none of the sets we will consider contains this object ⊥ (unless otherwise specified).
The reader can think of ⊥ as a "division-by-zero error" (more precisely, a "divisionby-a-non-invertible-element error", since 0 is often not the only non-invertible element of K).
Definition 3.6. Let X and Y be two sets. A partial map from X to Y means a map from X to Y ⊔ {⊥}.
If f is a partial map from X to Y, then f can be canonically extended to a map from $X \sqcup \{\bot\}$ to $Y \sqcup \{\bot\}$ by setting $f(\bot) := \bot$. We always consider f to be extended in this way.
If f is a partial map from X to Y , then the set {x ∈ X | f (x) = ⊥} will be called the domain of definition of f .
We view the element ⊥ as an "undefined output" - i.e., we think of a partial map f from X to Y as a "map" from X to Y that is defined only on some elements of X (namely, on those whose image under this map is not ⊥). Thus, for example, in Q, division is a partial map because division by 0 is undefined: the map sending each $x \in \mathbb{Q}$ to $1/x$ for $x \neq 0$ and to ⊥ for x = 0 is a partial map from Q to Q.
Partial maps can be composed much like usual maps: Definition 3.8.
(a) Let X, Y and Z be three sets. Let f be a partial map from Y to Z. Let g be a partial map from X to Y .
Then $f \circ g$ denotes the partial map from X to Z that sends each $x \in X$ to $f(g(x))$ if $g(x) \neq \bot$, and to ⊥ if $g(x) = \bot$. (Following our convention that f(⊥) is understood to be ⊥, we could simplify the right hand side to just f(g(x)), but we nevertheless subdivided it into two cases just to stress the different branches in our "control flow".) This partial map $f \circ g$ is called the composition of f and g.
(b) This notion of composition lets us define a category whose objects are sets and whose morphisms are partial maps. (The identity maps in this category are the obvious ones: i.e., the maps $\mathrm{id}_X : X \dashrightarrow X$ sending each $x \in X$ to itself.) (c) Thus, if X is any set, and if f is any partial map from X to X, then we can define the powers $f^0, f^1, f^2, \ldots$ of f (as partial maps from X to X). Convention 3.9. Let X and Y be two sets. We will write "$f : X \dashrightarrow Y$" for "f is a partial map from X to Y" (just as maps from X to Y are denoted "f : X → Y").
A warning is worth making: While we are using the symbol $\dashrightarrow$ for partial maps here, the same symbol has been used for rational maps in [GriRob14]. The two uses serve similar purposes (they both model "maps defined only on those inputs for which the relevant denominators are invertible"), but they have some technical differences. Rational maps are defined only when K is an infinite field 9, but are well-behaved in many ways that partial maps are not. (For example, a rational map is uniquely determined if its values on a Zariski-dense subset of its domain are known, but no such claims can be made for partial maps.) Thus, by working with partial maps instead of rational maps, we are freeing ourselves from technical assumptions on K, but at the same time forcing ourselves to be explicit about the domains on which our partial maps are defined.
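In code, the composition rule of Definition 3.8 (a) is exactly the familiar "undefined propagates" pattern. The following small Python sketch (our illustration, with ⊥ modeled as None) mirrors it:

```python
# Partial maps X -> Y modeled as Python functions that may return None ("bot").
def compose(f, g):
    # (f o g)(x) = bot if g(x) = bot, and f(g(x)) otherwise
    def h(x):
        y = g(x)
        return None if y is None else f(y)
    return h

def reciprocal(x):           # a partial map Q -> Q: undefined at 0
    return None if x == 0 else 1 / x

shift = lambda x: x - 1      # an honest (total) map

f = compose(reciprocal, shift)   # x |-> 1/(x - 1), undefined at x = 1
assert f(3) == 0.5
assert f(1) is None              # the "bot" value propagates
```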

Toggles
Recall that K P denotes the set of K-labelings of a poset P (that is, the set of all maps P → K). Next, we define (noncommutative) toggles: certain (fairly simple) partial maps on this set.
Definition 3.10. Let $v \in P$. We define a partial map $T_v : \mathbb{K}^{\hat{P}} \dashrightarrow \mathbb{K}^{\hat{P}}$ as follows: If $f \in \mathbb{K}^{\hat{P}}$ is any K-labeling of P, then the K-labeling $T_v f \in \mathbb{K}^{\hat{P}}$ is given by

$$(T_v f)(w) = \begin{cases} \Big(\sum\limits_{u \in \hat{P};\, u \lessdot v} f(u)\Big) \cdot \overline{f(v)} \cdot \overline{\sum\limits_{u \in \hat{P};\, u \gtrdot v} \overline{f(u)}}, & \text{if } w = v; \\ f(w), & \text{if } w \neq v \end{cases} \qquad (2)$$

for all $w \in \hat{P}$. Here, we agree that if any part of the expression on the right hand side of (2) fails to be defined (i.e., if any of the inverses appearing in it does not exist), then $T_v f := \bot$. This partial map $T_v$ is called the v-toggle or the toggle at v.

9 It stands to reason that a notion of "rational map" should exist for a sufficiently wide class of infinite skew fields as well, but we have not encountered a satisfactory theory of such maps in the literature. See https://mathoverflow.net/questions/362724/ for a discussion of how this theory might start. It appears unlikely, however, that such "noncommutative rational maps" exist in the generality that we are working in (viz., arbitrary rings).
Thus, the partial map $T_v$ is a "local" transformation: it only changes the label at the element v (unless its result is ⊥). Our definition declares $T_v f$ to be ⊥ outright whenever any required inverse fails to exist. It may appear more natural to leave only the value $(T_v f)(v)$ undefined, while letting all other values $(T_v f)(w)$ equal the respective values f(w). Our choice to "panic and crash", however, will be more convenient for some of our proofs.
The v-toggle $T_v$ is called a "noncommutative order toggle" in [JosRob20, Definition 5.6]. When the ring K is commutative, this v-toggle $T_v$ is an "involution" in the sense that each $f \in \mathbb{K}^{\hat{P}}$ satisfying $T_v f \neq \bot$ satisfies $T_v(T_v f) = f$. For noncommutative K, this is usually not the case; an "inverse" partial map 10 can be obtained by flipping the order of the factors on the right hand side of (2). (This "inverse" appears in [JosRob20] under the name "noncommutative order elggot".) The following proposition is trivially obtained by rewriting (2); we are merely stating it for easier reference in proofs: Proposition 3.12. Let $v \in P$. For every $f \in \mathbb{K}^{\hat{P}}$ satisfying $T_v f \neq \bot$, we have $(T_v f)(v) \cdot \sum\limits_{u \in \hat{P};\, u \gtrdot v} \overline{f(u)} = \Big(\sum\limits_{u \in \hat{P};\, u \lessdot v} f(u)\Big) \cdot \overline{f(v)}$. Furthermore, the following "locality principle" (part of [JosRob20, Proposition 5.8]) is easy to check: 11 Proposition 3.13. Let $v \in P$ and $w \in P$ be such that $v \neq w$, and such that neither $v \lessdot w$ nor $w \lessdot v$. Then, $T_v \circ T_w = T_w \circ T_v$. Proof of Proposition 3.13. In the case when K is commutative, this is essentially [GriRob16, Proposition 14], except that we are now more careful about well-definedness (since only invertible elements have inverses). Yet, the proof given in [GriRob16] can easily be adapted to the general (noncommutative) case. The details can be found in the detailed version of this paper (but the reader should have an easy time reconstructing them).
As a particular case of Proposition 3.13, we have the following: Corollary 3.14. Let v and w be two elements of P which are incomparable. Then, $T_v \circ T_w = T_w \circ T_v$. Combining Corollary 3.14 with Proposition 1.8, we obtain the following: Corollary 3.15. Let $(v_1, v_2, \ldots, v_m)$ be a linear extension of P. Then the partial map $T_{v_1} \circ T_{v_2} \circ \cdots \circ T_{v_m} : \mathbb{K}^{\hat{P}} \dashrightarrow \mathbb{K}^{\hat{P}}$ is independent of the choice of the linear extension $(v_1, v_2, \ldots, v_m)$.

Birational rowmotion
Recall that P is a finite poset. Corollary 3.15 lets us make the following definition.
Definition 3.16. Birational rowmotion (or, more precisely, the birational rowmotion of P) is defined as the partial map $R := T_{v_1} \circ T_{v_2} \circ \cdots \circ T_{v_m} : \mathbb{K}^{\hat{P}} \dashrightarrow \mathbb{K}^{\hat{P}}$, where $(v_1, v_2, \ldots, v_m)$ is a linear extension of P. This partial map is well-defined, because 11 In the following, equalities between partial maps are understood in the strongest possible sense: Two partial maps $F : X \dashrightarrow Y$ and $G : X \dashrightarrow Y$ satisfy F = G if and only if each x ∈ X satisfies F(x) = G(x). This entails, in particular, that F(x) = ⊥ holds if and only if G(x) = ⊥. Thus, F = G is a stronger requirement than merely saying that "F(x) = G(x) whenever neither F(x) nor G(x) is ⊥".
• Theorem 1.5 shows that a linear extension of P exists, and • Corollary 3.15 shows that the partial map T v 1 • T v 2 • · · · • T vm is independent of the choice of the linear extension (v 1 , v 2 , . . . , v m ).
This partial map will be denoted by R.
Birational rowmotion is called "birational NOR-motion" (and denoted NOR) in the paper [JosRob20, Definition 5.9]. When K is commutative, it agrees with the standard concept of birational rowmotion as studied in [EinPro13] and [GriRob14].
Example 3.17. Let us demonstrate the effect of birational toggles and birational rowmotion. Namely, for this example, we let P be the poset P = {1, 2} × {1, 2} introduced in Example 3.4.
In order to disencumber our formulas, we agree to write g (i, j) for g ((i, j)) when g is a K-labeling of P and (i, j) is an element of P .
As in Example 3.4, we visualize a K-labeling f of $\hat{P}$ by replacing, in the Hasse diagram of $\hat{P}$, each element $v \in \hat{P}$ by the label f(v). Let f be a K-labeling sending 0, (1, 1), (1, 2), (2, 1), (2, 2), and 1 to a, w, y, x, z, and b, respectively (for some elements a, b, x, y, z, w of K). (As before, we draw (2, 1) on the western corner and (1, 2) on the eastern corner.) Now, recall the definition of birational rowmotion R on our poset P. Since the list ((1, 1), (2, 1), (1, 2), (2, 2)) is a linear extension of P, we have $R = T_{(1,1)} \circ T_{(2,1)} \circ T_{(1,2)} \circ T_{(2,2)}$. Let us track how this transforms our labeling f: We first apply $T_{(2,2)}$. Indeed, the only label that changes under $T_{(2,2)}$ is the one at (2, 2), and this label becomes $(x + y) \cdot \overline{z} \cdot b$. (We assume that z and b are indeed invertible; otherwise, $T_{(2,2)} f$ would be ⊥ and would remain ⊥ after any further toggles. Likewise, as we apply further toggles, we assume that everything else we need to invert is invertible.) Having applied $T_{(2,2)}$, we next apply $T_{(2,1)}$, which changes the label at (2, 1) to $w \cdot \overline{x} \cdot (x+y) \cdot \overline{z} \cdot b$. Next, we apply $T_{(1,2)}$, which changes the label at (1, 2) to $w \cdot \overline{y} \cdot (x+y) \cdot \overline{z} \cdot b$. Finally, we apply $T_{(1,1)}$, resulting in Rf. The unwieldy expression $\overline{w} \cdot \overline{\overline{w\overline{x}(x+y)\overline{z}b} + \overline{w\overline{y}(x+y)\overline{z}b}}$ in the label at (1, 1) can be simplified to $\overline{z} \cdot b$ (using standard laws such as $\overline{pq} = \overline{q} \cdot \overline{p}$ and distributivity), so this label rewrites as $a \cdot \overline{z} \cdot b$. By repeating this procedure (or just substituting the labels of Rf obtained as variables), we can compute $R^2 f$, $R^3 f$ etc. (We omit the resulting displays; in them, the label at (2, 1) for both $R^3 f$ and $R^4 f$ can be obtained from the respective label at (1, 2) by interchanging x with y, thanks to an obvious symmetry between (1, 2) and (2, 1).)
The above might suggest that the labels get progressively more complicated as we apply R over and over. For a general poset P, this is indeed the case. However, for our poset P = {1, 2} × {1, 2}, a surprising periodicity-like pattern emerges. Indeed, the expressions for $R^2 f$, $R^3 f$, $R^4 f$ can be simplified, and the labels of $R^4 f$ turn out to be closely related to those of f: For each $v \in \hat{P}$, we have $(R^4 f)(v) = a\overline{b} \cdot f(v) \cdot \overline{a}b$. (This holds for v = 0 and v = 1 as well, as one can easily check.) Note that if ab = ba, then this entails that $(R^4 f)(v) = c \cdot f(v) \cdot \overline{c}$ for $c = a\overline{b}$, so that each label of $R^4 f$ is conjugate to the corresponding label of f. In Theorem 4.7, we will generalize this phenomenon to arbitrary "rectangular" posets, i.e., posets of the form {1, 2, . . . , p} × {1, 2, . . . , q} with entrywise order. The "period" in this situation will be p + q.
Our P = {1, 2} × {1, 2} example also exhibits a reciprocity-like phenomenon. Indeed, our above expressions for $Rf$, $R^2 f$, $R^3 f$ reveal that $(Rf)(1,1) = a \cdot \overline{f(2,2)} \cdot b$ and $(R^2 f)(2,1) = a \cdot \overline{f(1,2)} \cdot b$ and $(R^2 f)(1,2) = a \cdot \overline{f(2,1)} \cdot b$ and $(R^3 f)(2,2) = a \cdot \overline{f(1,1)} \cdot b$. These equalities relate the label of $R^{i+j-1} f$ at an element (i, j) with the label of f at the element (3 − i, 3 − j) (which is, visually speaking, the "antipode" of the former element (i, j) on the Hasse diagram of P). To be specific, they say that $(R^{i+j-1} f)(i, j) = a \cdot \overline{f(3-i, 3-j)} \cdot b$ for any $(i, j) \in P$. This too can be generalized to arbitrary rectangles (Theorem 4.8).
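The phenomena of this example can be verified mechanically. The sketch below (our own illustration, not code from the paper) implements the noncommutative toggles on the 2 × 2-rectangle in the ring of 2 × 2 matrices over Q, and checks on random labelings both the twisted periodicity $(R^4 f)(v) = a\overline{b} \cdot f(v) \cdot \overline{a}b$ and the reciprocity $(R^{i+j-1} f)(i,j) = a \cdot \overline{f(3-i, 3-j)} \cdot b$:

```python
from fractions import Fraction as Fr
import random

# 2x2 matrices over Q (as nested tuples): a noncommutative ring.
def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def add(A, B):
    return tuple(tuple(A[i][j] + B[i][j] for j in range(2)) for i in range(2))

def inv(A):
    (p, q), (r, s) = A
    det = p * s - q * r
    if det == 0:
        return None                     # plays the role of "bot"
    return ((s / det, -q / det), (-r / det, p / det))

# the extended poset for P = {1,2} x {1,2}; 0 and 1 are the added bottom/top
covers = {0: [(1, 1)], (1, 1): [(1, 2), (2, 1)],
          (1, 2): [(2, 2)], (2, 1): [(2, 2)], (2, 2): [1], 1: []}
below = {v: [u for u in covers if v in covers[u]] for v in covers}
P = [(1, 1), (1, 2), (2, 1), (2, 2)]

def toggle(f, v):
    # noncommutative toggle: (sum below) * f(v)^{-1} * (sum of inverses above)^{-1}
    if f is None:
        return None
    s_below = f[below[v][0]]
    for u in below[v][1:]:
        s_below = add(s_below, f[u])
    fv_inv = inv(f[v])
    terms = [inv(f[u]) for u in covers[v]]
    if fv_inv is None or any(t is None for t in terms):
        return None
    s_above = terms[0]
    for t in terms[1:]:
        s_above = add(s_above, t)
    s_above_inv = inv(s_above)
    if s_above_inv is None:
        return None
    g = dict(f)
    g[v] = mul(mul(s_below, fv_inv), s_above_inv)
    return g

def rowmotion(f):                       # toggle from top to bottom
    for v in [(2, 2), (2, 1), (1, 2), (1, 1)]:
        f = toggle(f, v)
    return f

def rand_mat():
    return tuple(tuple(Fr(random.randint(-5, 5)) for _ in range(2))
                 for _ in range(2))

while True:                             # retry until all needed inverses exist
    a, b, w, x, y, z = (rand_mat() for _ in range(6))
    f = {0: a, (1, 1): w, (1, 2): y, (2, 1): x, (2, 2): z, 1: b}
    its = [f]
    for _ in range(4):
        its.append(rowmotion(its[-1]))  # its[k] = R^k f (or None)
    if its[4] is not None and inv(a) is not None and inv(b) is not None:
        break

# twisted periodicity: (R^4 f)(v) = a b^{-1} f(v) a^{-1} b
twist = lambda M: mul(mul(mul(a, inv(b)), M), mul(inv(a), b))
assert all(its[4][v] == twist(f[v]) for v in P)

# antipodal reciprocity: (R^{i+j-1} f)(i, j) = a * f(3-i, 3-j)^{-1} * b
for (i, j) in P:
    assert its[i + j - 1][(i, j)] == mul(mul(a, inv(f[(3 - i, 3 - j)])), b)
```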
This example shows that birational rowmotion behaves unexpectedly well for some posets. There are also some more serious motivations to study it: Birational rowmotion for commutative K generalizes Schützenberger's classical "promotion" map on semistandard tableaux (see [GriRob14,Remark 11.6]), and is closely related to the Zamolodchikov periodicity conjecture in type AA (see [Roby15,§4.4]). The case of a noncommutative ring K appears more baroque, but we expect it to find a combinatorial meaning sooner or later.
Before we formalize and prove the above phenomena, we first consider some general properties of R. We begin with an implicit description of birational rowmotion that does not involve toggles (but is essentially a restatement of Definition 3.16): Proof. This is merely the noncommutative analogue of [GriRob16,Proposition 19], and the proof in [GriRob16] can be used with straightforward modifications.
The following near-trivial fact completes the picture: Proof. None of the toggles T v , when applied to a K-labeling, changes the label of 0 or the label of 1. Hence, the same is true for the partial map R (since R is a composition of such toggles T v ).

Well-definedness lemmas
We next show some simple lemmas which say that certain inverses exist under the assumption that R^ℓ f is well-defined for some values of ℓ. These lemmas are easy and unexciting, but are necessary in order to rigorously prove the more substantial results that will follow. We recommend that the reader skip the proofs, at least on a first reading.
Lemma 3.21. Let f ∈ K P and k, ℓ ∈ N satisfy k ≤ ℓ and R^ℓ f ≠ ⊥. Then, R^k f ≠ ⊥.
Proof. We have P ≠ ∅. Thus, the poset P has a maximal element y (by Proposition 1.9 (b)). This y then satisfies 1 ⋗ y in P.
Proof. The poset P has a minimal element x (by Proposition 1.9 (a)).
From R^2 f ≠ ⊥, we obtain Rf ≠ ⊥ (by Lemma 3.21); thus, Rf ∈ K P. Hence, Lemma 3.23 yields that f(1) is invertible. Furthermore, Lemma 3.22 (applied to Rf and x instead of f and v) yields that (Rf)(x) is invertible. Recall again that Rf ≠ ⊥. Hence, Proposition 3.18 (applied to v = x) yields an equality for the label at x. The only u ∈ P satisfying u ⋖ x is the element 0 of P (since x is a minimal element of P). Solving this equality for f(0), we see that the right hand side is a product of three invertible elements (indeed, the two factors ∑_{u⋗x} (Rf)(u) and f(x) are invertible because their inverses appear in (3), and we already know that the factor (Rf)(x) is invertible), and thus itself invertible. Hence, the left hand side is invertible. In other words, f(0) is invertible.
Lemma 3.25. Let v ∈ P . Assume that v is not a minimal element of P . Then, there exists at least one element w ∈ P satisfying v ⋗ w.
Proof. Apply Proposition 1.9 (b) to the subposet P <v := {u ∈ P | u < v} of P . Details are left as an exercise.
Proof. Lemma 3.25 shows that there exists at least one element w ∈ P satisfying v ⋗ w. Consider this w. Proposition 3.18 (applied to w instead of v) yields an equality for the label at w. In particular, (Rf)(u) is well-defined for each u ∈ P satisfying u ⋗ w. Applying this to u = v (since v ⋗ w) yields the claim. Proof. If v = 0, then the claim follows from our assumption about f(0) (since Proposition 3.19 yields (Rf)(0) = f(0)). If v = 1, then it instead follows from Lemma 3.23 (since Proposition 3.19 yields (Rf)(1) = f(1)). Thus, we assume from now on that v is neither 0 nor 1. Hence, v ∈ P.
If v is not a minimal element of P, then the claim follows from Lemma 3.26. Hence, we assume from now on that v is a minimal element of P. Therefore, the only u ∈ P satisfying u ⋖ v is the element 0. Thus, the right hand side of the resulting equality is a product of three invertible elements (since f(0) is invertible, and since f(v) and ∑_{u⋗v} (Rf)(u) are invertible), and thus itself is invertible.
Thus, the left hand side is invertible as well. In other words, (Rf ) (v) is invertible.

The p × q-rectangle
As promised, we now state the phenomena observed in Example 3.17 in greater generality (and afterwards prove them). First we define the posets on which these phenomena manifest: Definition 4.1. For p ∈ Z, we let [p] denote the totally ordered set {1, 2, . . . , p} (with its usual total order: 1 < 2 < · · · < p). This set is empty if p ≤ 0.
Whenever we speak of the p × q-rectangle [p] × [q], we implicitly assume that p and q are two positive integers.
The p × q-rectangle has been denoted by Rect(p, q) in [GriRob14]. Its Hasse diagram is shown in (4). Convention 4.4. In the following, the Hasse diagram of a p × q-rectangle will always be drawn as in (4). That is, the elements (i, j) of [p] × [q] will be aligned in a rectangular grid, with the x-axis going southeast to northwest and the y-axis going southwest to northeast. Thus, for instance, the northwestern neighbor of an element (i, j) is always (i + 1, j). Two elements s and t of P will be called adjacent if they satisfy s ⋗ t or t ⋗ s.
The poset [p] × [q] has a unique minimal element, (1, 1), and a unique maximal element, (p, q). Its covering relation can be characterized by the following easy remark (which will be used without explicit mention). If f is a function defined on P or on P, and if (i, j) is any element of P, then we will write f(i, j) for f((i, j)).
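The remark in question can be spelled out and machine-checked. The sketch below (our code, with hypothetical helper names) verifies that (i′, j′) ⋖ (i, j) holds in [p] × [q] exactly when (i, j) is obtained from (i′, j′) by increasing one coordinate by 1, and that (1, 1) and (p, q) are the only minimal and maximal elements.

```python
from itertools import product

def less(s, t):
    """Componentwise strict order of the rectangle poset."""
    return s[0] <= t[0] and s[1] <= t[1] and s != t

def check_covers(p, q):
    elems = list(product(range(1, p + 1), range(1, q + 1)))
    for s, t in product(elems, repeat=2):
        # t covers s: s < t with nothing strictly between
        covers = less(s, t) and not any(less(s, x) and less(x, t) for x in elems)
        one_step = (t[0] - s[0], t[1] - s[1]) in [(1, 0), (0, 1)]
        assert covers == one_step
    assert [v for v in elems if not any(less(u, v) for u in elems)] == [(1, 1)]
    assert [v for v in elems if not any(less(v, u) for u in elems)] == [(p, q)]
    return True
```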

Periodicity
The following theorem (conjectured by the first author in 2014) generalizes the periodicity-like phenomenon seen in Example 3.17: Theorem 4.7 (Periodicity theorem for the p × q-rectangle). Let P = [p] × [q], and let f ∈ K P be a K-labeling such that R^{p+q} f ≠ ⊥. Set a = f(0) and b = f(1). Then, a and b are invertible, and for any x ∈ P we have the equality (5). If the ring K is commutative, then (5) simplifies to (R^{p+q} f)(x) = f(x); thus, the claim of Theorem 4.7 can be rewritten as R^{p+q} f = f, generalizing the main part of [GriRob14, Theorem 11.5] (which itself generalizes similar properties of rowmotion operators on other levels). Unlike in [GriRob14, Theorem 11.5], we cannot honestly claim that R^{p+q} = id even when K is commutative, since the partial map R^{p+q} takes the value ⊥ on some K-labelings f (while id does not).

Reciprocity
Theorem 4.7 shows that the "periodicity phenomenon" we have observed on [2] × [2] in Example 3.17 was not a coincidence. The "reciprocity phenomenon" is similarly the p = q = 2 case of a general fact, stated as Theorem 4.8. Theorem 4.8 directly generalizes the analogous theorem [GriRob14, Theorem 11.7] in the commutative setting.

The structure of the proofs
Theorems 4.8 and 4.7 are the main results of this paper, and most of it will be devoted to their proofs. We first summarize the large-scale structure of these proofs: 1. In Section 5, we show that twisted periodicity (Theorem 4.7) follows from reciprocity (Theorem 4.8). Thus, proving the latter will suffice.
2. In Section 6, we introduce some notations. Some of these notations (a, b and x_ℓ) are mere abbreviations for labels of f and of its images under birational rowmotion, while others (the quantities A^v_ℓ and Ā^v_ℓ) stand for certain derived quantities and will play a more active role. We also define "paths" on the poset P, and introduce a few of their basic features.
3. In Section 7, we prove a few simple results. The most important of these results are Proposition 7.3 (which reveals how birational rowmotion relates the quantities A^v_{ℓ−1} to the barred quantities Ā^v_ℓ) and Theorem 7.6 (which allows us to recover the original labels x_ℓ from either of these two families).

4. In Section 8, we prove Theorem 4.8 in the case when (i, j) = (1, 1). This proof warrants its own section both because it is conceptually easier than the general case, and because it requires some "well-definedness" technicalities that are (surprisingly) not needed in any other cases.
5. In Section 9, we saddle up the main workhorse of our proof: a lemma (Lemma 9.2) that connects certain A^{u→v}_ℓ quantities with certain Ā^{u→v}_ℓ quantities with the same ℓ. We prove this using a variant of paths, which we call "path-jump-paths" and which allow us to interpolate between the two kinds of quantities.

6. In Section 10, we combine the previous results with this lemma to prove Theorem 4.8 in the case when j = 1.
7. In Section 11, we finally complete the proof of Theorem 4.8 in the general case. This requires almost no new ideas, just an induction that extends Theorem 4.8 from four "adjacent" elements of P (labeled u, m, s, t in diagram (47)) to the fifth element v.

Twisted periodicity follows from reciprocity
Our first step towards the proofs of twisted periodicity (Theorem 4.7) and reciprocity (Theorem 4.8) is to show that the latter implies the former. Proof of Theorem 4.7 using Theorem 4.8. Assume that Theorem 4.8 has been proved. Let p, q, P, f, a and b be as in Theorem 4.7. Let x ∈ P. From p ≥ 1 and q ≥ 1, we obtain p + q ≥ 2. Hence, from R^{p+q} f ≠ ⊥, we obtain R^2 f ≠ ⊥ (by Lemma 3.21). Therefore, Lemma 3.24 yields that a and b are invertible (since a = f(0) and b = f(1)).
Since x = (i, j), we can rewrite this as the claimed equality. Thus twisted periodicity (Theorem 4.7) is proved, assuming reciprocity (Theorem 4.8) holds.
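In the commutative case, where reciprocity takes the form (R^{i+j−1} f)(i, j) = ab / f(x∼), the two applications of reciprocity behind this step can be displayed explicitly (a sketch of ours; the noncommutative version must additionally keep track of the positions of a and b):

```latex
% Commutative sketch: applying reciprocity twice yields periodicity.
% Here x = (i,j) and x^{\sim} = (p+1-i,\, q+1-j) is its antipode.
\begin{align*}
  \bigl(R^{p+q+1-i-j} f\bigr)(x^{\sim})
     &= \frac{ab}{f(x)}
     &&\text{(reciprocity at $x^{\sim}$, since $(p+1-i)+(q+1-j)-1 = p+q+1-i-j$),}\\
  \bigl(R^{i+j-1} g\bigr)(x)
     &= \frac{ab}{g(x^{\sim})}
     &&\text{(reciprocity at $x$, applied to $g := R^{p+q+1-i-j} f$,}\\
     &&&\text{\quad which has the same boundary labels $a$, $b$ since $R$ fixes them),}\\
  \bigl(R^{p+q} f\bigr)(x)
     &= \frac{ab}{\,ab / f(x)\,} = f(x)
     &&\text{(combining the two).}
\end{align*}
```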

Proof of reciprocity: notations
It now suffices to prove Theorem 4.8, which will be the ultimate goal of the next few sections. First we introduce some notations that will be used throughout these sections. Fix two positive integers p and q. Assume that P = [p]×[q]. Let f ∈ K P be a K-labeling of P . Set a := f (0) and b := f (1) .
For any x = (i, j) ∈ P, we define an element x∼ ∈ P by x∼ := (p + 1 − i, q + 1 − j). We call this element x∼ the antipode of x. Thus, the desired equality (6) can be rewritten accordingly for x = (i, j).

Grinberg and Roby on Noncommutative Birational Rowmotion,
For any x ∈ P and ℓ ∈ N, we write x_ℓ for the label (R^ℓ f)(x), which is well-defined whenever R^ℓ f ≠ ⊥. This compact notation will make upcoming formulas more readable.
In particular, for each ℓ ∈ N (with R^ℓ f ≠ ⊥), we have 0_ℓ = a and similarly 1_ℓ = b.
We can further rewrite the equality (8) in this shorthand. Hence, our desired Theorem 4.8 takes the following form: Proposition 3.18 yields that for each v ∈ P, we have the equality (12). (In both sums, u ranges over P; from now on, this will always be understood if not otherwise specified.) Applying the equality (12) to R^ℓ f instead of f, we obtain a corresponding equality for each v ∈ P and ℓ ∈ N satisfying R^{ℓ+1} f ≠ ⊥ (since R(R^ℓ f) = R^{ℓ+1} f). Using (9), we can rewrite this as the equality (13), which holds for each v ∈ P and ℓ ∈ N satisfying R^{ℓ+1} f ≠ ⊥. Next, we formally define the paths that will play a key role in the proof. A path means a sequence (v_0, v_1, . . . , v_k) of elements of P satisfying v_0 ⋗ v_1 ⋗ · · · ⋗ v_k. We denote this path by (v_0 ⋗ v_1 ⋗ · · · ⋗ v_k), and we will call it a path from v_0 to v_k (or, for short, a path v_0 → v_k). The vertices of this path are defined to be the elements v_0, v_1, . . . , v_k. We say that this path starts at v_0 and ends at v_k.
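Paths in this sense are exactly the saturated decreasing chains of the poset, and for the rectangle they can be enumerated directly. A small sketch (our code): the paths from (p, q) to (1, 1) are the lattice paths with p + q − 2 covering steps, so there are binom(p + q − 2, p − 1) of them.

```python
from math import comb

def down_covers(v):
    """Elements of the rectangle covered by v = (i, j) (coordinates stay >= 1)."""
    i, j = v
    return [u for u in ((i - 1, j), (i, j - 1)) if u[0] >= 1 and u[1] >= 1]

def paths(s, t):
    """All paths (v0 > v1 > ... > vk) from s to t, each step a covering step."""
    if s == t:
        return [[s]]
    return [[s] + rest for u in down_covers(s) for rest in paths(u, t)]

# lattice-path count for the 3 x 4 rectangle: choose which 2 of the 5 steps lower i
assert len(paths((3, 4), (1, 1))) == comb(3 + 4 - 2, 3 - 1)
```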
For any path p = (v 0 ⋗ v 1 ⋗ · · · ⋗ v k ) and any ℓ ∈ N, we set (assuming that the factors on the right hand sides are well-defined).
These elements A^v_ℓ and Ā^v_ℓ are not always well-defined. For A^v_ℓ to be well-defined, we need to have R^ℓ f ≠ ⊥, and we need the element ∑_{u⋖v} u_ℓ to be invertible. For Ā^v_ℓ to be well-defined, we need to have R^ℓ f ≠ ⊥, and we need the elements u_ℓ (for u ⋗ v) and ∑_{u⋗v} u_ℓ^{−1} and v_ℓ to be invertible.

We have
Furthermore, these quantities satisfy some simple identities for any ℓ ∈ N. The letter ℓ will always stand for a nonnegative integer (but will not be fixed).
Remark 6.2. The elements A^v_ℓ and Ā^v_ℓ (for v ∈ P and ℓ ∈ N) are not entirely new. They are closely connected with the down-transfer operator ∇ and the up-transfer operator ∆ studied in [JosRob20, Definition 5.11]; to be specific, we have A^v_ℓ = ∇(R^ℓ f)(v) and Ā^v_ℓ = ∆Θ(R^ℓ f)(v) using the notations of [JosRob20, Definition 5.11]. These operators ∇ and ∆ have a long history, going back to Stanley's "transfer map" φ between the order polytope and the chain polytope of a poset (see [Stan86, Definition 3.1]). The down-transfer operator ∇ does indeed restrict to φ when K is an appropriate tropical semiring. For this reason, we have been informally referring to A^v_ℓ and Ā^v_ℓ as the down-slack and the up-slack of v at time ℓ (harkening back to the notion of slack from linear optimization). Arguably, the behavior of these operators when K is the tropical semiring is not very indicative of the general case.
When K is commutative, our A v 0 have also implicitly appeared in [MusRob17]:

Proof of reciprocity: simple lemmas
Throughout this section, we use the notations introduced in Section 6. Let us prove some relations between the elements we have introduced. We begin with a well-definedness result: Lemma 7.1. Let ℓ ∈ N be such that ℓ ≥ 1 and R^ℓ f ≠ ⊥. Assume furthermore that a is invertible. Let v ∈ P. Then: (a) The element v_ℓ is well-defined and invertible. Proof. From R^ℓ f ≠ ⊥, we obtain R^{ℓ−1} f ≠ ⊥. Hence, Corollary 3.20 yields that (R^{ℓ−1} f)(0) = f(0) = a, which is invertible by assumption.
If v = 0, then this follows from part (a), because (10) yields that v_{ℓ−1} = a = v_ℓ in this case. An analogous argument works if v = 1. Thus, we WLOG assume that v ∈ P. In this case, v_ℓ is clearly well-defined, and is invertible by Lemma 3.22 (applied to R^{ℓ−1} f instead of f).
Solving the equality (17) for the first factor on its right hand side, we obtain an expression for (∑_{u⋖v} u_{ℓ−1})^{−1}. The right hand side of this equality is a product of three invertible elements; thus, both sides are invertible. Therefore, the element (∑_{u⋖v} u_{ℓ−1})^{−1} is well-defined, hence invertible (since an inverse is always invertible). Finally, A^v_{ℓ−1} is defined to be the product v_{ℓ−1} · (∑_{u⋖v} u_{ℓ−1})^{−1}, and thus is well-defined and invertible because both of its factors are. Here, we assume that all the terms in the respective equalities are well-defined.
Proof. Since s ≠ t, every path from s to t must contain an element covered by s as its second vertex. Fix an element u ∈ P satisfying s ⋗ u. If (v_0 ⋗ v_1 ⋗ · · · ⋗ v_k) is a path from s to t satisfying v_1 = u, then (v_1 ⋗ v_2 ⋗ · · · ⋗ v_k) is a path from u to t. Hence, we have found a map from the set of all paths from s to t whose second vertex is u to the set of all paths from u to t. This map is a bijection (since any path from u to t can be uniquely extended to a path from s to t by inserting the vertex s at the front). We can use this bijection to substitute (v_1 ⋗ v_2 ⋗ · · · ⋗ v_k) for p in a sum that ranges over all paths p from u to t. In particular, we obtain the equality (22). Now, forget that we fixed u. We thus have proved (22) for each u ∈ P satisfying s ⋗ u.

The definition of A^{s→t}_ℓ lets us split the sum over all paths from s to t according to their second vertex v_1 (because any path (v_0 ⋗ v_1 ⋗ · · · ⋗ v_k) from s to t has a well-defined second vertex v_1, and this second vertex v_1 satisfies s ⋗ v_1). This proves (18). The same argument (but with each A symbol replaced by an Ā symbol) proves (20). Moreover, a similar argument (but now classifying paths from s to t according to their second-to-last vertex instead of their second vertex) establishes (19) and (21). Thus, Proposition 7.2 is proven.
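At the level of plain path counts, the two classifications used in this proof (by the second vertex and by the second-to-last vertex) can be sanity-checked by brute force; the weighted sums of Proposition 7.2 rest on exactly these bijections. A sketch (our code, restricted to paths inside P):

```python
def down_covers(v):
    i, j = v
    return [u for u in ((i - 1, j), (i, j - 1)) if u[0] >= 1 and u[1] >= 1]

def paths(s, t):
    """All paths (v0 > v1 > ... > vk) from s to t via covering steps."""
    if s == t:
        return [[s]]
    return [[s] + rest for u in down_covers(s) for rest in paths(u, t)]

def check_splitting(p, q):
    elems = [(i, j) for i in range(1, p + 1) for j in range(1, q + 1)]
    for s in elems:
        for t in elems:
            if s == t:
                continue
            # classify by the second vertex: an element covered by s
            assert len(paths(s, t)) == sum(len(paths(u, t)) for u in down_covers(s))
            # classify by the second-to-last vertex: an element of P covering t
            up = [v for v in ((t[0] + 1, t[1]), (t[0], t[1] + 1)) if v[0] <= p and v[1] <= q]
            assert len(paths(s, t)) == sum(len(paths(s, w)) for w in up)
    return True
```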

The next proposition uses the products Ā^v_ℓ and A^v_{ℓ−1} to rewrite the equality (13) (which is essentially the definition of birational rowmotion) in a slick way:

Proposition 7.3 (Transition equation in A-Ā-form). Let v ∈ P and ℓ ≥ 1 be such that R^ℓ f ≠ ⊥. Assume that a is invertible. Then, Ā^v_ℓ = A^v_{ℓ−1}. Proof. If v is 0 or 1, then the equality Ā^v_ℓ = A^v_{ℓ−1} holds because both of its sides are 1 (by (14)). Thus, we assume WLOG that v ∈ P.
Lemma 7.1 (a) yields that v_ℓ is well-defined and invertible, while Lemma 7.1 (c,d) yield that Ā^v_ℓ and A^v_{ℓ−1} are well-defined. Since A^v_{ℓ−1} is defined as v_{ℓ−1} · (∑_{u⋖v} u_{ℓ−1})^{−1}, this entails that ∑_{u⋖v} u_{ℓ−1} is invertible.

But the left hand side of this equality is
The next theorem gives ways to recover the labels u_ℓ = (R^ℓ f)(u) from some of the sums defined in (15) and (16). Theorem 7.6 (path formulas for the rectangle). Let ℓ ∈ N. Assume that a is invertible. Then: (a) If R^ℓ f ≠ ⊥ and ℓ ≥ 1, then each u ∈ P satisfies the corresponding path formula. (c) If R^ℓ f ≠ ⊥ and ℓ ≥ 1, then each u ∈ P satisfies the corresponding path formula. Proof of Theorem 7.6. (a) Assume that R^ℓ f ≠ ⊥ and ℓ ≥ 1. Then, Lemma 7.1 (d) yields that the element Ā^v_ℓ is well-defined and invertible for each v ∈ P. Hence, the element Ā^p_ℓ is well-defined for each path p. Therefore, the element Ā^{1→u}_ℓ is well-defined for each u ∈ P.
Next, we will prove the equality (25): Ā^{1→u}_ℓ = b u_ℓ for each u ∈ P. (The u_ℓ on the right hand side here is well-defined, since Lemma 7.1 (a) (applied to v = u) shows that u_ℓ is well-defined and invertible.) Proof of (25). We utilize downwards induction on u. This is a version of strong induction in which we fix an element v ∈ P and assume (as the induction hypothesis) that (25) holds for all u ∈ P satisfying u > v. We will then prove that (25) also holds for u = v.
Since the poset P is finite, this will entail that (25) holds for all u ∈ P .
(The condition ℓ ≥ 1 in Theorem 7.6 (a) and (c) is meant to ensure that the relevant A-quantities are well-defined.) Let v ∈ P. Assume (as the induction hypothesis) that (25) holds for all u ∈ P satisfying u > v. In other words, we have Ā^{1→u}_ℓ = b u_ℓ for each u ∈ P satisfying u > v; in particular, this holds whenever u ⋗ v. Note also that the only path from 1 to 1 is the trivial path (1). Hence, Ā^{1→1}_ℓ = b (since 1_ℓ = b). However, 1 ≠ v (since 1 ∉ P and v ∈ P). Thus, (21) (applied to s = 1 and t = v) yields an expression for Ā^{1→v}_ℓ, which simplifies to b v_ℓ. In other words, (25) holds for u = v. This completes the induction step. Thus, we have proved (25) by induction.
(b) This proof is rather similar to that of part (a), but uses upwards induction instead of downwards induction (and applies (18) instead of (21)).
(c) Let u ∈ P. Recall that (p, q) is the unique maximal element of P. Therefore, each path from 1 to u begins with the step 1 ⋗ (p, q). Thus, the claim follows. Remark 7.7. Corollary 7.5, Proposition 7.2 and parts (a) and (b) of Theorem 7.6 hold more generally if P is replaced by any finite poset (not necessarily a rectangle). The proofs we gave above work in that generality. Parts (c) and (d) of Theorem 7.6 can be similarly generalized as long as the poset P has a global maximum (for part (c)) and a global minimum (for part (d)); all we need to do is to replace (p, q) by the global maximum and (1, 1) by the global minimum. We will have no need for this generality, though.
8. Proof of reciprocity: the case (i, j) = (1, 1)

Now, we are mostly ready to prove that Theorem 4.8 holds in the case when (i, j) = (1, 1). For reasons both technical and pedagogical, it is useful for us to dispose of this case now in order to have less work to do later. First, we prove Theorem 4.8 for (i, j) = (1, 1) under the extra assumption that a is invertible: Lemma 8.1. Assume that a is invertible. Then, the case (i, j) = (1, 1) of Theorem 4.8 holds. Proof. We use the notations from Section 6. Thus, (R^ℓ f)(1, 1) = (1, 1)_ℓ, and Theorem 7.6 (d) (applied to ℓ − 1 and (p, q) instead of ℓ and u) yields an equality involving A^{(p,q)→(1,1)}_{ℓ−1}. Solving this equation for A^{(p,q)→(1,1)}_{ℓ−1}, we obtain an explicit expression (since a is invertible). Note also that R(R^{ℓ−1} f) = R^ℓ f ≠ ⊥, and thus (R^{ℓ−1} f)(p, q) is invertible (by Lemma 3.22, applied to R^{ℓ−1} f and (p, q) instead of f and v). Now, the claim follows by Theorem 7.6 (c) (applied to u = (1, 1)). This proves Lemma 8.1.
Unfortunately, our proof of Lemma 8.1 made use of the requirement that a be invertible, since A^{(p,q)→(1,1)}_ℓ and A^{(p,q)→(1,1)}_{ℓ−1} would not be well-defined otherwise. In order to remove this requirement, we make use of a trick, in which we "temporarily" set the label f(0) to 1 and then argue that this has a predictable effect on (Rf)(1, 1). This trick relies on the following: Lemma 8.2. Let P be an arbitrary finite poset (not necessarily [p] × [q]). Let f, g ∈ K P be two K-labelings such that Rf ≠ ⊥. Assume that f and g agree at every element other than 0 (this is the assumption (29)). Assume furthermore that g(0) = 1. Set a = f(0). Then: (a) We have Rg ≠ ⊥.

(b)
If v ∈ P is not a minimal element of P , then (Rf ) (v) = (Rg) (v).

Proof of Lemma 8.2 (sketched).
Our assumption (29) shows that the labels of f equal the corresponding labels of g at all elements other than at 0. Only the labels at 0 can differ. Compute the labelings Rf and Rg recursively, as we did in Example 3.17, making sure to pick a linear extension of P that starts with all minimal elements of P (so that the toggles at these minimal elements all happen at the very end of our computation). The computation for Rf proceeds identically with the computation for Rg until we "interact with" the different labels at 0; that is, until the labels f(0) and g(0) make an appearance in the sums ∑_{u⋖v} f(u) and ∑_{u⋖v} g(u), respectively (because all other labels of f equal the corresponding labels of g). However, this "interaction" only happens when we toggle at a minimal element of P (since v has to be minimal in order for f(0) to be an addend of the sum ∑_{u⋖v} f(u)). Furthermore, when we do toggle at a minimal element v of P, the relevant sums ∑_{u⋖v} f(u) and ∑_{u⋖v} g(u) simplify to f(0) = a and g(0) = 1, respectively (because 0 is the only element u satisfying u ⋖ v). Therefore, the labels of Rf and Rg at v end up differing by a factor of a (more precisely, the value of Rf at v ends up being a times the label of Rg at v). This proves Lemma 8.2.
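In the commutative case this bookkeeping can be observed numerically. The sketch below (our code, using the standard commutative toggle, specialized to the rectangle, whose unique minimal element is (1, 1)) replaces the label at 0 by 1 and confirms that exactly the label of Rf at the minimal element picks up the factor a.

```python
from fractions import Fraction
from itertools import product
import random

def rowmotion(f, p, q, a, b):
    """Birational rowmotion on [p] x [q] over Q, toggling from top to bottom."""
    g = dict(f)
    for v in sorted(g, key=lambda v: -(v[0] + v[1])):
        i, j = v
        ups = [g[u] for u in [(i + 1, j), (i, j + 1)] if u in g]
        downs = [g[u] for u in [(i - 1, j), (i, j - 1)] if u in g]
        up_inv_sum = sum((1 / u for u in ups), Fraction(0)) if ups else 1 / b
        down_sum = sum(downs, Fraction(0)) if downs else a
        g[v] = down_sum / (up_inv_sum * g[v])
    return g

def check_lemma82(p, q, seed=1):
    rng = random.Random(seed)
    rnd = lambda: Fraction(rng.randint(1, 9), rng.randint(1, 9))
    a, b = rnd(), rnd()
    f = {v: rnd() for v in product(range(1, p + 1), range(1, q + 1))}
    Rf = rowmotion(f, p, q, a, b)
    Rg = rowmotion(f, p, q, Fraction(1), b)  # same labeling, but 0-hat now carries 1
    for v in f:
        if v == (1, 1):                      # the unique minimal element of the rectangle
            assert Rf[v] == a * Rg[v]
        else:
            assert Rf[v] == Rg[v]
    return True
```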
Let us now get rid of the "a is invertible" requirement in Lemma 8.1: Lemma 8.3. The case (i, j) = (1, 1) of Theorem 4.8 holds. Proof. If R^2 f ≠ ⊥, then Lemma 3.24 yields that a and b are invertible (since a = f(0) and b = f(1)), and therefore our claim follows directly from Lemma 8.1. For this reason, we WLOG assume that R^2 f = ⊥. If we had ℓ ≥ 2, then we would thus conclude that R^ℓ f = ⊥ as well (by Lemma 3.21), which would contradict R^ℓ f ≠ ⊥. Hence, we must have ℓ < 2, so that ℓ = 1. Therefore, R^{ℓ−1} = R^{1−1} = R^0 = id and consequently (R^{ℓ−1} f)(p, q) = f(p, q). Also, R^ℓ = R (since ℓ = 1). Hence, R = R^ℓ, so that Rf = R^ℓ f ≠ ⊥. Now, let g ∈ K P be the K-labeling that is obtained from f by replacing the label f(0) by 1. Thus, f and g agree at all elements other than 0, and we have g(0) = 1. Then, Lemma 8.2 (a) yields Rg ≠ ⊥. In other words, R^1 g ≠ ⊥.
In view of R^ℓ = R and (R^{ℓ−1} f)(p, q) = f(p, q), we can rewrite this as the desired equality. Thus, Lemma 8.3 is proven.
This settles the easiest case of Theorem 4.8, namely the case (i, j) = (1, 1). To get a grip on the general case, we need more lemmas.

The conversion lemma
We continue using the notations from Section 6.
Lemma 9.1 (Four neighbors lemma). Let u, v, w, d be four adjacent elements of P that are arranged as follows on the Hasse diagram of P (that is, d = (i, j), v = (i + 1, j), w = (i, j + 1) and u = (i + 1, j + 1) for some i ∈ [p − 1] and some j ∈ [q − 1]). Assume that a is invertible. Let ℓ ≥ 1 be such that R^{ℓ+1} f ≠ ⊥. Then, the two identities below hold. Proof. We have R^{ℓ+1} f = R(R^ℓ f) ≠ ⊥, and thus R^ℓ f ≠ ⊥. Hence, Lemma 7.1 (a) yields that v_ℓ is invertible. Similarly, w_ℓ and u_ℓ and d_ℓ are invertible. Also, Lemma 7.1 (d) (applied to d instead of v) yields that the element Ā^d_ℓ is well-defined and invertible. Moreover, Lemma 7.1 (c) (applied to u and ℓ + 1 instead of v and ℓ) yields that the element A^u_ℓ is well-defined and invertible. The elements s ∈ P that satisfy s ⋗ d are v and w. Hence, ∑_{s⋗d} s_ℓ = v_ℓ + w_ℓ (where, of course, the sum ranges over s ∈ P). Now, the definition of Ā^d_ℓ yields an expression involving this sum. The elements s ∈ P that satisfy s ⋖ u are v and w. Since the corresponding expression is well-defined, the element v_ℓ + w_ℓ of K must be invertible. Also, we already know that v_ℓ and w_ℓ are invertible. Hence, Proposition 2.4 (b) (applied to v_ℓ and w_ℓ instead of a and b) yields a further invertibility and an identity; comparing the resulting expressions proves part (a). (b) This can be proved by the same argument that we used to prove part (a) (with the roles of v and w interchanged).
We recall our conventions for drawing the p × q-rectangle P = [p] × [q]. In light of these conventions, we shall refer to the set {(k, q) | k ∈ [p]} as the northeastern edge of P , and to the set {(i, 1) | i ∈ [p]} as the southwestern edge of P .
The next lemma is crucial, as it allows us to "convert" between A's and Ā's without changing the subscript ℓ.
Assume that a is invertible. Let ℓ ≥ 1 be such that R^{ℓ+1} f ≠ ⊥. Then we have:

Here is an illustration for this lemma. In the case when K is commutative, Lemma 9.2 was independently discovered by Johnson and Liu [JohLiu22]. More precisely, [JohLiu22, Lemma 4.1] extends it from sums over paths (such as A^{u→d}_ℓ and Ā^{u′→d′}_ℓ) to sums over k-tuples of non-intersecting paths. It is unclear whether this extension can still be made when K is not commutative (what order should the A^v_ℓ's along different paths be multiplied in?), but the use of determinants likely precludes any noncommutative generalization of the proof in [JohLiu22].
Proof of Lemma 9.2. Let ℓ ∈ N. We "interpolate" between the paths from u to d and the paths from u ′ to d ′ using what we call "path-jump-paths". To define these formally, we introduce some more basic notations.
The first coordinate of any x ∈ P will be denoted by first x. Thus, first (i, j) = i for any (i, j) ∈ P .
Furthermore, for any x = (i, j) ∈ P , we define the rank of x to be the positive integer i + j − 1. This rank will be denoted by rank x.
We define a new binary relation ◮ on the set P as follows: If x and y are two elements of P , then the relation x ◮ y means "rank x = rank y + 1 and first x > first y". In other words, the relation x ◮ y means that if x = (i, j) , then y = (i − k, j + k − 1) for some k > 0.
Visually speaking, it means that y is one step southeast and a (nonnegative) number of steps east of x (on the Hasse diagram).
We define a path-jump-path to be a tuple p = (v_0, v_1, . . . , v_k) of elements of P along with a chosen number i ∈ {0, 1, . . . , k − 1} such that the chain of relations v_0 ⋗ v_1 ⋗ · · · ⋗ v_i ◮ v_{i+1} ⋗ v_{i+2} ⋗ · · · ⋗ v_k holds. We denote this path-jump-path simply by (v_0 ⋗ v_1 ⋗ · · · ⋗ v_i ◮ v_{i+1} ⋗ · · · ⋗ v_k), and we say that this path-jump-path p has jump at i. The elements v_0, v_1, . . . , v_k are called the vertices of this path-jump-path. The pairs (v_j, v_{j+1}) of consecutive vertices are called the steps of this path-jump-path. Such a step (v_j, v_{j+1}) is said to be a ⋗-step if j ≠ i, and it is said to be a ◮-step if j = i.
Here is an example of a path-jump-path, where the red edge is the ◮-step: (Note that two vertices x and y can satisfy x ◮ y and x ⋗ y simultaneously. Thus, it can happen that several path-jump-paths with jumps at different i's contain the same vertices. We nevertheless do not consider these path-jump-paths to be identical, because we understand a path-jump-path like (33) to "remember" not only its vertices v_0, v_1, . . . , v_k but also the value of i.) A path-jump-path from u to d′ will mean a path-jump-path that starts at u (that is, v_0 = u) and ends at d′ (that is, v_k = d′). We note that if two elements x and y of P satisfy x ⋗ y or x ◮ y, then rank y = rank x − 1.
As a consequence of this fact, successive entries v_{j−1} and v_j in a path-jump-path satisfy rank v_j = rank v_{j−1} − 1. In other words, the ranks of the vertices of a path-jump-path decrease by 1 at each step.
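This rank observation can be checked by brute force. The sketch below (our code, hypothetical helper names) generates all path-jump-paths between two given elements of a rectangle and confirms that each has exactly rank u − rank d′ + 1 vertices; pairs of path-jump-paths with the same vertices but different jump positions are kept distinct, as in the definition.

```python
def rank(x):
    return x[0] + x[1] - 1

def cover(x, y):
    """x covers y in the rectangle: one coordinate drops by exactly 1."""
    return (x[0] - y[0], x[1] - y[1]) in [(1, 0), (0, 1)]

def jump(x, y):
    """x |> y : the rank drops by 1 and the first coordinate strictly drops."""
    return rank(x) == rank(y) + 1 and x[0] > y[0]

def path_jump_paths(u, d, p, q):
    """All path-jump-paths from u to d: exactly one |>-step (at the remembered
    position i), all other steps covering steps.  Returns (vertices, i) pairs."""
    elems = [(a, b) for a in range(1, p + 1) for b in range(1, q + 1)]
    found = []
    def extend(prefix, jump_pos):
        x = prefix[-1]
        if x == d and jump_pos is not None:
            found.append((tuple(prefix), jump_pos))
        for y in elems:
            if cover(x, y):
                extend(prefix + [y], jump_pos)
            if jump_pos is None and jump(x, y):   # spend the single jump here
                extend(prefix + [y], len(prefix) - 1)
    extend([u], None)
    return found

pjps = path_jump_paths((3, 3), (1, 1), 3, 3)
r = rank((3, 3)) - rank((1, 1))
assert pjps and all(len(vs) == r + 1 for vs, i in pjps)  # always r + 1 vertices
```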
Hence, the difference in ranks between the first and final entries of a path-jump-path is one less than its number of entries; this is the observation (35). Let r := rank u − rank d′. Thus, any path-jump-path from u to d′ must contain exactly r + 1 vertices (by (35)). In other words, any path-jump-path from u to d′ must have the form (v_0, v_1, . . . , v_r) with v_0 = u and v_r = d′. We have R(R^ℓ f) = R^{ℓ+1} f ≠ ⊥ = R(⊥), and thus R^ℓ f ≠ ⊥. Hence, Lemma 7.1 (a) yields that v_ℓ is well-defined and invertible for each v ∈ P. Also, Lemma 7.1 (d) yields that Ā^v_ℓ is well-defined and invertible for each v ∈ P. Moreover, Lemma 7.1 (c) (applied to ℓ + 1 instead of ℓ) yields that A^v_ℓ is well-defined and invertible for each v ∈ P. In this proof, we will not consider any K-labelings other than R^ℓ f. Thus, the only labels we will be using are the labels v_ℓ = (R^ℓ f)(v) for v ∈ P. Thus, we agree to use the following shorthand notation: If v ∈ P, then the elements v_ℓ, A^v_ℓ and Ā^v_ℓ of K will be denoted simply by v, A^v and Ā^v, respectively. In other words, we shall omit subscripts when these subscripts are ℓ. For instance, the product A^u_ℓ u_ℓ u′_ℓ will thus be abbreviated as A^u u u′.
For any path-jump-path p that contains r + 1 vertices, we set the quantity E_p. Now we claim the following (again omitting subscripts that are ℓ): Claim 3: For each j ∈ {0, 1, . . . , r − 2}, we have ∑_{p a path-jump-path from u to d′ with jump at j} E_p = ∑_{p a path-jump-path from u to d′ with jump at j+1} E_p.

Before we prove these three claims, let us explain how Lemma 9.2 will follow from them: by Claim 3 (applied to j = 0, then to j = 1, and so on), the sum of E_p over all path-jump-paths p from u to d′ with jump at 0 equals the corresponding sum with jump at 1, which in turn equals the corresponding sum with jump at 2, and so on, up to the corresponding sum with jump at r − 1. Combined with Claims 1 and 2, this yields the lemma. Hence, Lemma 9.2 will follow once Claims 1, 2 and 3 have been proved. Let us now prove these three claims: Proof of Claim 1. We know that d lies on the southwestern edge of P. Hence, the only s satisfying s ⋖ d is d′ (since d ⋗ d′). Therefore, ∑_{s⋖d} s_ℓ = d′_ℓ. Since we omit subscripts (when these subscripts are ℓ), we can rewrite this as ∑_{s⋖d} s = d′. We know that any path-jump-path from u to d′ must have the form (v_0, v_1, . . . , v_r). If such a path-jump-path has jump at r − 1, then it must have the form (v_0 ⋗ v_1 ⋗ · · · ⋗ v_{r−1} ◮ v_r); that is, its last step (v_{r−1}, v_r) is a ◮-step. However, since it ends at d′, we must have v_r = d′ and thus v_{r−1} ◮ v_r = d′. This entails v_{r−1} = d (since the only g ∈ P satisfying g ◮ d′ is d). In other words, the last step of this path-jump-path is (d, d′).
We have thus shown that if a path-jump-path from u to d′ has jump at r − 1, then its last step is (d, d′). Hence, any path-jump-path from u to d′ with jump at r − 1 must have the form (v_0 ⋗ v_1 ⋗ · · · ⋗ v_{r−1} ◮ d′), where (v_0 ⋗ v_1 ⋗ · · · ⋗ v_{r−1}) is a path from u to d. Conversely, any tuple of the latter form is a path-jump-path from u to d′ with jump at r − 1 (since d ◮ d′). Therefore, the sum of E_p over all path-jump-paths p from u to d′ with jump at r − 1 can be rewritten as a sum over all paths from u to d. This proves Claim 1.
Proof of Claim 2. This is analogous to the proof of Claim 1. This time, we need to argue that if a path-jump-path from u to d ′ has jump at 0, then its first step is (u, u ′ ) (since the only g ∈ P satisfying u ◮ g is u ′ ).
Proving Claim 3 is a bit trickier. As an auxiliary result, we first show the following: Claim 4: Let s and t be two elements of P. Then, the equality (37) holds; it equates a sum over all x ∈ P satisfying s ◮ x ⋗ t with a sum over all x ∈ P satisfying s ⋗ x ◮ t. Proof of Claim 4. First, we observe that an x ∈ P satisfying s ◮ x ⋗ t cannot exist unless rank t = rank s − 2 (because (34) yields that such an x must satisfy rank x = rank s − 1 and rank t = rank x − 1, whence rank t = rank x − 1 = (rank s − 1) − 1 = rank s − 2). Hence, the left hand side of the desired equality (37) is an empty sum unless rank t = rank s − 2. Similarly, the same can be said about the right hand side. Thus, (37) boils down to 0 = 0 unless rank t = rank s − 2. We therefore assume WLOG that rank t = rank s − 2. In other words, rank s − rank t = 2. In terms of the way that we draw our poset P, this means that the point s lies two rows above the point t.
Omitting the subscripts, we can rewrite this accordingly. The definition of A^s_ℓ yields A^s_ℓ = s_ℓ · (∑_{x⋖s} x_ℓ)^{−1}. Omitting the subscripts, we can rewrite this as A^s = s · (∑_{x⋖s} x)^{−1}. Write the elements s, t ∈ P in the forms s = (i, j) and t = (i′, j′). Then, rank s = i + j − 1 and rank t = i′ + j′ − 1. We are in one of the following three cases: Case 1: i′ < i − 1. Case 2: i′ = i − 1. Case 3: i′ > i − 1. Representative examples for these three cases are illustrated in the pictures (the bullets signify the positions of potential neighbors of s and t; some of these positions may fall outside of P, but this does not disturb our argument). In terms of the way we draw our poset P, the three cases can be reformulated as "the point s lies further west than t" (Case 1), "the point s lies due north of t" (Case 2) and "the point s lies further east than t" (Case 3). Note that two elements x, y ∈ P satisfy x ◮ y if and only if y lies one step south and some arbitrary distance east of x in our pictures. Let us first consider Case 1. In this case, the point s lies further west than t. Thus, s lies further west than any neighbor of t as well. Hence, each element x of P that satisfies x ⋗ t must satisfy s ◮ x automatically. Therefore, the summation sign ∑_{x∈P; s◮x⋗t} can be simplified to ∑_{x∈P; x⋗t}, and even further to ∑_{x⋗t} (because any x that satisfies x ⋗ t must belong to P automatically: indeed, the rank of any such x lies strictly between the ranks of t and s, and thus x cannot be 0 or 1). Recall again that the point s lies further west than t. Thus, any neighbor of s lies further west than t as well (since s lies two rows above t). Hence, each element x of P that satisfies s ⋗ x must satisfy x ◮ t automatically. Therefore, the summation sign ∑_{x∈P; s⋗x◮t} can be simplified to ∑_{x∈P; s⋗x} = ∑_{x∈P; x⋖s}, and even further to ∑_{x⋖s} (for the same reason as before). Comparing the resulting evaluations with (40), we obtain the claim in Case 1. Let us next consider Case 2. In this case, i′ = i − 1, and thus j′ = j − 1 (by the rank condition), so that t = (i − 1, j − 1). Let v := (i, j − 1) and w := (i − 1, j).
In our coordinate system, the four points $s$, $t$, $v$ and $w$ are arranged in a $1 \times 1$-square, which looks as in the picture (41). Hence, $v$ and $w$ belong to $P$ (since $s$ and $t$ belong to $P$), and furthermore, Lemma 9.1 (b) (applied to $s$ and $t$ instead of $u$ and $d$) yields an equality relating the labels of $s$, $t$, $v$ and $w$.

Footnote 22: Indeed, the rank of any such $x$ must lie between the ranks of $s$ and $t$, and thus $x$ cannot be $0$ or $1$.
Footnote 23: Indeed, the rank of any such $x$ must lie between the ranks of $s$ and $t$, and thus $x$ cannot be $0$ or $1$.
Since we are omitting subscripts, we can rewrite this accordingly. The picture (41) shows that we have $s ◮ w$ but not $s ◮ v$. Hence, there is only one element $x ∈ P$ that satisfies $s ◮ x ⋗ t$; namely, this element $x$ is $w$. Hence,
$$\sum_{\substack{x ∈ P;\\ s ◮ x ⋗ t}} s\,x\,A_t = s\,w\,A_t.$$
On the other hand, the picture (41) shows that we have $v ◮ t$ but not $w ◮ t$. Hence, there is only one element $x ∈ P$ that satisfies $s ⋗ x ◮ t$; namely, this element $x$ is $v$. Hence,
$$\sum_{\substack{x ∈ P;\\ s ⋗ x ◮ t}} s\,x\,A_t = s\,v\,A_t.$$
Comparing these two equalities with (42), we obtain
$$\sum_{\substack{x ∈ P;\\ s ◮ x ⋗ t}} s\,x\,A_t = \sum_{\substack{x ∈ P;\\ s ⋗ x ◮ t}} s\,x\,A_t.$$
Thus, Claim 4 is proved in Case 2.

Let us finally consider Case 3. In this case, we have $i' > i - 1$. Thus, $i' \geq i$ (since $i'$ and $i$ are integers), so that $i \leq i'$. Note that $i = \operatorname{first} s$ (since $s = (i, j)$) and $i' = \operatorname{first} t$ (since $t = (i', j')$).
There exists no $x ∈ P$ satisfying $s ◮ x ⋗ t$ (because if $x ∈ P$ satisfies $s ◮ x ⋗ t$, then $x ⋗ t = (i', j')$ entails $\operatorname{first} x \geq i' \geq i = \operatorname{first} s$, but this clearly contradicts $s ◮ x$). Hence, the sum $\sum_{x ∈ P;\; s ◮ x ⋗ t} s\,x\,A_t$ is empty. Thus,
$$\sum_{\substack{x ∈ P;\\ s ◮ x ⋗ t}} s\,x\,A_t = 0.$$
Furthermore, there exists no $x ∈ P$ satisfying $s ⋗ x ◮ t$ (because if $x ∈ P$ satisfies $s ⋗ x ◮ t$, then $(i, j) = s ⋗ x$ entails $\operatorname{first} x \leq i \leq i' = \operatorname{first} t$; but this clearly contradicts $x ◮ t$). Hence, the sum $\sum_{x ∈ P;\; s ⋗ x ◮ t} s\,x\,A_t$ is empty as well. Thus, both sides of (37) equal $0$, and Claim 4 is proved in Case 3.
We have now proved Claim 4 in all three cases.
We can now step to the proof of Claim 3:

Proof of Claim 3. Let $j ∈ \{0, 1, \ldots, r - 2\}$. We know that any path-jump-path from $u$ to $d'$ with jump at $j$ must have the form $(v_0 ⋗ v_1 ⋗ \cdots ⋗ v_j ◮ v_{j+1} ⋗ v_{j+2} ⋗ \cdots ⋗ v_r)$, whereas any path-jump-path from $u$ to $d'$ with jump at $j + 1$ must have the form $(v_0 ⋗ v_1 ⋗ \cdots ⋗ v_{j+1} ◮ v_{j+2} ⋗ v_{j+3} ⋗ \cdots ⋗ v_r)$. Summing the weights $E(p)$ of all path-jump-paths $p$ of the former kind, and applying Claim 4 (with $s = v_j$ and $t = v_{j+2}$) to the inner sum over the element between $v_j$ and $v_{j+2}$, we transform this sum into the corresponding sum over all path-jump-paths of the latter kind. In other words,
$$\sum_{\substack{p \text{ is a path-jump-path from } u \text{ to } d'\\ \text{with jump at } j}} E(p) \;=\; \sum_{\substack{p \text{ is a path-jump-path from } u \text{ to } d'\\ \text{with jump at } j + 1}} E(p).$$
Thus, Claim 3 is proven.
We have now proved all three Claims 1, 2 and 3. As we explained, this completes the proof of Lemma 9.2.

Remark 9.3. Parts of the above proof of Lemma 9.2 can be rewritten in a more abstract (although probably not shorter) manner, avoiding the notion of a "path-jump-path" and the nested sums that appeared in our proof of Claim 3.
To rewrite the proof, we need the notion of $P \times P$-matrices. A $P \times P$-matrix is a matrix whose rows and columns are indexed not by integers but by elements of $P$. (That is, it is a family of elements of $\mathbb{K}$ indexed by pairs $(i, j) ∈ P \times P$.) If $C$ is any $P \times P$-matrix, and if $i$ and $j$ are two elements of $P$, then the $(i, j)$-th entry of $C$ is denoted by $C_{i,j}$. Addition and multiplication are defined for $P \times P$-matrices in the same way as they are for usual matrices. That is, for any $P \times P$-matrices $C$ and $D$ and any $(i, j) ∈ P \times P$, we have
$$(C + D)_{i,j} = C_{i,j} + D_{i,j} \qquad \text{and} \qquad (CD)_{i,j} = \sum_{k ∈ P} C_{i,k} D_{k,j}.$$
For any statement $\mathcal{A}$, we let $[\mathcal{A}]$ be the Iverson bracket (i.e., truth value) of $\mathcal{A}$. Using this notation, we define two $P \times P$-matrices $A$ and $U$, whose entries (for all $x, y ∈ P$) are built from the Iverson brackets of the relations $x ⋗ y$ and $x ◮ y$ together with the labels of $x$ and $y$.
Here, the relation $x ◮ y$ is defined as in the above proof of Lemma 9.2, and we are again omitting the "$\ell$" subscripts, so that (for instance) "$xy$" actually means $x_\ell y_\ell$. Now, Claim 4 in our above proof of Lemma 9.2 can be rewritten in a nice and compact form as the equality $AU = UA$.
From this, we easily obtain
$$A U^k = U^k A \qquad \text{for each } k ∈ \mathbb{N}. \tag{43}$$
This equality essentially replaces Claim 3 in the above proof. Setting $k = \operatorname{rank} u - \operatorname{rank} d$ in (43), and comparing the $(u, d')$-entries of both sides, we quickly obtain $A^{u \to d} = A^{u' \to d'}$ (since $x ◮ d'$ holds only for $x = d$, and since $u ◮ x$ holds only for $x = u'$). This proves Lemma 9.2 again.
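To make the $P \times P$-matrix formalism concrete, here is a small illustration of our own (not from the paper): if we forget the ring-valued labels and keep only the Iverson brackets $[x ⋗ y]$, then the entries of the powers of the resulting 0-1 matrix count saturated chains in $P$, which is the combinatorial shell of the "comparing the $(u, d')$-entries of $U^k$" step. A minimal sketch for $P = [2] \times [2]$:

```python
from itertools import product

# The poset P = [2] x [2], ordered componentwise; x covers y iff x - y is a unit vector.
P = list(product([1, 2], repeat=2))

def covers(x, y):  # the covering relation x > y in [2] x [2]
    return (x[0] - y[0], x[1] - y[1]) in {(1, 0), (0, 1)}

# P x P matrix with entries U[x][y] = [x covers y]  (Iverson bracket only; labels dropped)
U = {x: {y: int(covers(x, y)) for y in P} for x in P}

def matmul(C, D):
    return {i: {j: sum(C[i][k] * D[k][j] for k in P) for j in P} for i in P}

U2 = matmul(U, U)
# (U^2)_{u,d} counts saturated chains u > x > d; the square [2] x [2] has exactly
# two maximal chains from (2, 2) down to (1, 1).
print(U2[(2, 2)][(1, 1)])  # -> 2
```

The dictionary-of-dictionaries layout is just the "rows and columns indexed by elements of $P$" convention made literal.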
Hence, for the rest of this proof, we WLOG assume that $i \neq 1$. Thus, $i \geq 2$, so that $\ell \geq i \geq 2$, and therefore $R^2 f \neq \bot$ (by Lemma 3.21, since $R^\ell f \neq \bot$). Hence, Lemma 3.24 yields that $a$ and $b$ are invertible (since $a = f(0)$ and $b = f(1)$).
In analogy to Lemma 10.1, we have the following:

Lemma 10.2. Let $j ∈ [q]$. Let $\ell ∈ \mathbb{N}$ satisfy $\ell \geq j$. Let $f ∈ \mathbb{K}^{\hat{P}}$ be a $\mathbb{K}$-labeling such that $R^\ell f \neq \bot$. Let $a = f(0)$ and $b = f(1)$. Then, using the notations from Section 6, we have
$$(1, j)_\ell = a \cdot \overline{(p,\, q + 1 - j)_{\ell - j}} \cdot b.$$

Proof. The two coordinates $u$ and $v$ of an element $(u, v) ∈ P$ play symmetric roles. Lemma 10.2 is just Lemma 10.1 with the roles of these two coordinates interchanged. Thus, the proof of Lemma 10.2 is analogous to the proof of Lemma 10.1.

Proof of reciprocity: the general case
Somewhat surprisingly, the general case of Theorem 4.8 follows by a fairly straightforward induction argument from Lemma 10.1: Proof of Theorem 4.8. We again use the notations from Section 6.
For any $(i, j) ∈ P$, we define $\operatorname{tilt}(i, j)$ to be the positive integer $i + 2j$. Our goal is to prove (11) for each $x = (i, j) ∈ P$ and each $\ell ∈ \mathbb{N}$ satisfying $\ell - i - j + 1 \geq 0$ and $R^\ell f \neq \bot$.
We will now prove this by strong induction on tilt x.
Induction step: Fix $N ∈ \mathbb{N}$. Assume (as the induction hypothesis) that (11) holds for each $x = (i, j) ∈ P$ satisfying $\operatorname{tilt} x < N$ and each $\ell ∈ \mathbb{N}$ satisfying $\ell - i - j + 1 \geq 0$ and $R^\ell f \neq \bot$.
We now fix an element $v = (i, j) ∈ P$ satisfying $\operatorname{tilt} v = N$ and an $\ell ∈ \mathbb{N}$ satisfying $\ell - i - j + 1 \geq 0$ and $R^\ell f \neq \bot$. Our goal is to prove that (11) holds for $x = v$. In other words, our goal is to prove that $v_\ell = a \cdot \overline{v^{\sim}_{\ell - i - j + 1}} \cdot b$. We have $N = \operatorname{tilt} v = i + 2j$ (since $v = (i, j)$). We are in one of the following six cases:

Case 1: We have $i = 1$.
Case 2: We have $j = 1$.
Case 3: We have $j = 2$ and $1 < i < p$.
Case 4: We have $j = 2$ and $i = p > 1$.
Case 5: We have $j > 2$ and $1 < i < p$.
Case 6: We have $j > 2$ and $i = p > 1$.
Let us first consider Case 1. In this case, we have i = 1.
From $i = 1$, we obtain $\ell - i - j + 1 = \ell - j$, so that $\ell - j \geq 0$. In other words, $\ell \geq j$. Hence, Lemma 10.2 yields
$$(1, j)_\ell = a \cdot \overline{(p,\, q + 1 - j)_{\ell - j}} \cdot b.$$
In view of $v = (1, j)$ and $v^{\sim} = (p,\, q + 1 - j)$ and $\ell - i - j + 1 = \ell - j$, we can rewrite this as $v_\ell = a \cdot \overline{v^{\sim}_{\ell - i - j + 1}} \cdot b$. Thus, the desired equality is proved in Case 1. Similarly (but using Lemma 10.1 instead of Lemma 10.2), we can obtain the same result (viz., $v_\ell = a \cdot \overline{v^{\sim}_{\ell - i - j + 1}} \cdot b$) in Case 2.

Next, let us analyze the four remaining cases: Cases 3, 4, 5 and 6. The most complex of these four cases is Case 5, so it is this case that we start with.
In this case, we have $j > 2$ and $1 < i < p$. Recall that $v = (i, j)$. Define the four further pairs
$$m := (i,\, j - 1), \qquad u := (i + 1,\, j - 1), \qquad s := (i,\, j - 2), \qquad t := (i - 1,\, j - 1).$$
The conditions $j > 2$ and $1 < i < p$ entail that all these four pairs $m$, $u$, $s$ and $t$ belong to $[p] \times [q] = P$. The picture (47) shows how the five elements $v$, $m$, $u$, $s$, $t$ of $P$ are aligned on the Hasse diagram of $P$. In particular, the two elements of $P$ that cover $m$ are $u$ and $v$, whereas the two elements of $P$ that are covered by $m$ are $s$ and $t$. The map $P \to P,\; x \mapsto x^{\sim}$ (which can be visualized as "reflecting" each point in $P$ around the center of the rectangle $[p] \times [q]$) "reverses" covering relations (i.e., if $x, y ∈ P$ satisfy $x ⋗ y$, then $x^{\sim} ⋖ y^{\sim}$). Hence, applying this map to the diagram (47) yields the analogous diagram for $v^{\sim}$, $m^{\sim}$, $u^{\sim}$, $s^{\sim}$, $t^{\sim}$. In particular, the two elements of $P$ that are covered by $m^{\sim}$ are $u^{\sim}$ and $v^{\sim}$, whereas the two elements of $P$ that cover $m^{\sim}$ are $s^{\sim}$ and $t^{\sim}$. From $\ell - i - j + 1 \geq 0$, we obtain $\ell \geq i + j - 1 > 1 + 2 - 1 = 2$ (since $i > 1$ and $j > 2$), so that $\ell \geq 2$.
Therefore, $\ell - 1 ∈ \mathbb{N}$ and $2 \leq \ell$. Hence, from $R^\ell f \neq \bot$, we obtain $R^2 f \neq \bot$ (by Lemma 3.21). Therefore, Lemma 3.24 yields that $a$ and $b$ are invertible (since $a = f(0)$ and $b = f(1)$).

Set $k := i + j - 2$. Then, $k \geq 0$ (since $i \geq 1$ and $j \geq 1$), so that $k ∈ \mathbb{N}$. Now, straightforward computations show that the four elements $m$, $u$, $s$ and $t$ of $P$ satisfy $\operatorname{tilt} m < N$, $\operatorname{tilt} u < N$, $\operatorname{tilt} s < N$ and $\operatorname{tilt} t < N$ (since $i + 2j = N$). Hence, using the induction hypothesis, it is easy to see that the five equalities (48)-(52) hold (see footnote 24 for details). We have $\ell - 1 ∈ \mathbb{N}$ and $R^\ell f \neq \bot$. Hence, the transition equation (13) (applied to $m$ and $\ell - 1$ instead of $v$ and $\ell$) yields
$$m_\ell = \Bigl(\sum_{x ⋖ m} x_{\ell - 1}\Bigr) \cdot \overline{m_{\ell - 1}} \cdot \overline{\sum_{x ⋗ m} \overline{x_\ell}}$$
(here we have renamed the summation indices $u$ from (13) as $x$, since the letter $u$ is already being used for something else in our current setting). Since the two elements of $P$ that are covered by $m$ are $s$ and $t$ (so that $\sum_{x ⋖ m} x_{\ell - 1} = s_{\ell - 1} + t_{\ell - 1}$), and since the two elements of $P$ that cover $m$ are $u$ and $v$ (so that $\sum_{x ⋗ m} \overline{x_\ell} = \overline{u_\ell} + \overline{v_\ell}$), this becomes
$$m_\ell = \bigl(s_{\ell - 1} + t_{\ell - 1}\bigr) \cdot \overline{m_{\ell - 1}} \cdot \overline{\overline{u_\ell} + \overline{v_\ell}}. \tag{53}$$
On the other hand, from $k = i + j - 2$, we obtain $\ell - k - 1 = \ell - i - j + 1 \geq 0$. Thus, $\ell - k - 1 ∈ \mathbb{N}$. Also, $0 \leq \ell - k \leq \ell$, so that $R^{\ell - k} f \neq \bot$ (by Lemma 3.21, since $R^\ell f \neq \bot$).

Footnote 24: In more detail: The induction hypothesis tells us that ...
Hence, the transition equation (13) (applied to $m^{\sim}$ and $\ell - k - 1$ instead of $v$ and $\ell$) yields
$$m^{\sim}_{\ell - k} = \bigl(u^{\sim}_{\ell - k - 1} + v^{\sim}_{\ell - k - 1}\bigr) \cdot \overline{m^{\sim}_{\ell - k - 1}} \cdot \overline{\overline{s^{\sim}_{\ell - k}} + \overline{t^{\sim}_{\ell - k}}} \tag{54}$$
(since the two elements of $P$ that are covered by $m^{\sim}$ are $u^{\sim}$ and $v^{\sim}$, whereas the two elements of $P$ that cover $m^{\sim}$ are $s^{\sim}$ and $t^{\sim}$). This entails that the elements $\overline{s^{\sim}_{\ell - k}} + \overline{t^{\sim}_{\ell - k}}$ and $m^{\sim}_{\ell - k - 1}$ of $\mathbb{K}$ are invertible (since their inverses appear on the right hand side of this equality). Hence, their product is invertible as well. Also, $\ell - k \geq 1$ (since $\ell - k - 1 \geq 0$) and $R^{\ell - k} f \neq \bot$. Hence, Lemma 7.1 (a) (applied to $\ell - k$ and $m^{\sim}$ instead of $\ell$ and $v$) shows that $m^{\sim}_{\ell - k}$ is well-defined and invertible. Taking reciprocals on both sides of (54), we obtain
$$\overline{m^{\sim}_{\ell - k}} = \bigl(\overline{s^{\sim}_{\ell - k}} + \overline{t^{\sim}_{\ell - k}}\bigr) \cdot m^{\sim}_{\ell - k - 1} \cdot \overline{u^{\sim}_{\ell - k - 1} + v^{\sim}_{\ell - k - 1}} \tag{55}$$
(by Proposition 2.3 (c)). Comparing (53) with (48), and using (55) together with the remaining induction-hypothesis equalities (again by Proposition 2.3 (c), since $a$ and $m^{\sim}_{\ell - k - 1}$ and $b$ are invertible), and then multiplying both sides of the resulting equality by $a$ on the left and by $b$ on the right (this is allowed, since $a$ and $b$ are invertible), we obtain
$$b \cdot \bigl(\overline{u_\ell} + \overline{v_\ell}\bigr) \cdot a = u^{\sim}_{\ell - k - 1} + v^{\sim}_{\ell - k - 1}.$$
Expanding the left hand side by distributivity, we rewrite this as
$$b \cdot \overline{u_\ell} \cdot a + b \cdot \overline{v_\ell} \cdot a = u^{\sim}_{\ell - k - 1} + v^{\sim}_{\ell - k - 1}. \tag{56}$$
However, (52) yields $\overline{u_\ell} = \overline{b} \cdot u^{\sim}_{\ell - k - 1} \cdot \overline{a}$ (by Proposition 2.3 (c)).
Multiplying both sides of this equality by $b$ from the left and by $a$ from the right, we can transform it into $b \cdot \overline{u_\ell} \cdot a = u^{\sim}_{\ell - k - 1}$. Subtracting this equality from (56), we obtain
$$b \cdot \overline{v_\ell} \cdot a = v^{\sim}_{\ell - k - 1}. \tag{57}$$
This equality expresses $v^{\sim}_{\ell - k - 1}$ as a product of three invertible elements (namely, $b$, $\overline{v_\ell}$ and $a$). Thus, $v^{\sim}_{\ell - k - 1}$ is itself invertible. Taking reciprocals on both sides of (57), we now obtain $\overline{v^{\sim}_{\ell - k - 1}} = \overline{a} \cdot v_\ell \cdot \overline{b}$ (by Proposition 2.3 (c)). Solving this for $v_\ell$, we find
$$v_\ell = a \cdot \overline{v^{\sim}_{\ell - k - 1}} \cdot b = a \cdot \overline{v^{\sim}_{\ell - i - j + 1}} \cdot b \qquad \text{(since } \ell - k - 1 = \ell - i - j + 1\text{)}.$$
Thus, the desired equality is proved in Case 5.
The arguments required to prove $v_\ell = a \cdot \overline{v^{\sim}_{\ell - i - j + 1}} \cdot b$ in the Cases 3, 4 and 6 are similar to the one we have used in Case 5, but simpler:

• In Case 3, we have $s ∉ P$, so the "neighborhood" of $m$ looks slightly different from the picture (47). This necessitates some changes to the proof; in particular, all addends that involve $s$ or $s^{\sim}$ in any way need to be removed, along with the equality (49).
• Case 6 is similar, but now we have $u ∉ P$ instead. (Subtraction is no longer required in this case.)
• In Case 4, we have both $s ∉ P$ and $u ∉ P$.
Thus, we have proved the equality $v_\ell = a \cdot \overline{v^{\sim}_{\ell - i - j + 1}} \cdot b$ in all six Cases 1, 2, 3, 4, 5 and 6. Hence, this equality always holds. In other words, (11) holds for $x = v$. This completes the induction step. Thus, (11) is proved by induction. In other words, Theorem 4.8 is proven.
As we have already seen (in Section 5), this entails that Theorem 4.7 is proven as well.

The case of a semiring
An attentive reader may have noticed that nowhere in the definitions of $v$-toggles and birational rowmotion does any subtraction sign appear. This means that all these definitions can be extended to the case when $\mathbb{K}$ is not a ring but a semiring.
A semiring is a set K equipped with a structure of an abelian semigroup (K, +) and the structure of a (not necessarily abelian) monoid (K, ·, 1) such that the distributive laws (a + b) c = ac + bc and a (b + c) = ab + ac are satisfied (where we use the shorthand notation xy for x · y). Some standard concepts defined for rings can be straightforwardly generalized to semirings; in particular, any nonempty finite family (a i ) i∈I of elements of a semiring K has a well-defined sum i∈I a i . Definition 2.2, too, applies verbatim to the case when K is a semiring instead of a ring. Thus, the definition of a v-toggle (Definition 3.10) and the definition of birational rowmotion (Definition 3.16) can be applied to a semiring K as well. We thus can wonder: Question 12.1. Do twisted periodicity (Theorem 4.7) and reciprocity (Theorem 4.8) still hold if K is not a ring but merely a semiring?
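As a quick illustration of the semiring setting, consider the tropical (max-plus) semiring, in which "sum" is $\max$, "product" is $+$, and every element is multiplicatively invertible (the "inverse" of $a$ is $-a$). Tropicalizing the toggle formula turns birational rowmotion into piecewise-linear rowmotion, as recalled in the introduction. The sketch below is our own illustration on the two-element chain $x < y$ (the toggle order and formula are spelled out in the comments as assumptions); it recovers the expected period $p + 1 = 3$:

```python
# Piecewise-linear rowmotion on the chain x < y, obtained by tropicalizing the
# birational toggle: ring sum -> max, product -> +, inverse -> negation.
# Both added hat labels are the tropical unit 0, and toggles run top to bottom:
#   new(v) = max(labels below v) - old(v) + min(updated labels above v),
# where the min comes from tropicalizing "inverse of a sum of inverses".

def pl_rowmotion_2chain(x, y):
    y_new = x - y + 0        # below y: only x; above y: the top hat, label 0
    x_new = 0 - x + y_new    # below x: the bottom hat (label 0); above x: y, already updated
    return x_new, y_new

state = (7, 2)
for _ in range(3):
    state = pl_rowmotion_2chain(*state)
print(state)  # -> (7, 2): period 3 = p + 1 for the chain with p = 2 elements
```

Integer inputs keep the arithmetic exact, so the periodicity check is not blurred by floating-point error.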
If we assume that $\mathbb{K}$ is commutative, then the answer to this question is positive, for fairly simple general reasons (see [GriRob16, Remark 10]). However, no such general reasoning helps for noncommutative $\mathbb{K}$. Indeed, there are subtraction-free identities involving inverses that hold for all rings but fail for some semirings. One example is the identity $a \cdot \overline{a + b} \cdot b = b \cdot \overline{a + b} \cdot a$ from Proposition 2.4 (a): David Speyer has constructed an example of a semiring $\mathbb{K}$ and two elements $a$ and $b$ of $\mathbb{K}$ such that $a + b$ is invertible (actually, $a + b = 1$ in his example), but this identity does not hold. See [Speyer21] for details.
Of course, this does not mean that the answer to Question 12.1 is negative; we are, in fact, inclined to suspect that the question has a positive answer. Our proofs of Lemma 10.1 and Lemma 10.2 apply in the semiring setting (i.e., when K is a semiring rather than a ring) without any need for changes; thus, Theorem 4.8 holds over any semiring K at least in the case when one of i and j is 1. Unfortunately, subtraction is used in the proof of Theorem 4.8, and we have so far been unable to excise it from the argument. (With a bit of thought, we can convince ourselves that subtraction is actually unnecessary if p = 2 or q = 2, so the first interesting case is obtained for P = [3] × [3].)

Other posets: conjectures and results
We now proceed to discuss the behavior of R on some other families of posets P . We no longer use the notations introduced in Section 6.

The ∆ and ∇ triangles
When $p = q$, the $p \times q$-rectangle $[p] \times [q]$ becomes a square. By cutting this square in half along its horizontal axis, we obtain two triangles:

Definition 13.1. Let $p$ be a positive integer. Define two subsets $\Delta(p)$ and $\nabla(p)$ of the $p \times p$-rectangle $[p] \times [p]$ by
$$\Delta(p) := \{(i, k) ∈ [p] \times [p] \mid i + k > p + 1\} \qquad \text{and} \qquad \nabla(p) := \{(i, k) ∈ [p] \times [p] \mid i + k < p + 1\}.$$
Each of these two subsets $\Delta(p)$ and $\nabla(p)$ inherits a poset structure from $[p] \times [p]$. In the following, we will consider $\Delta(p)$ and $\nabla(p)$ as posets using these structures.
The Hasse diagrams of these posets $\Delta(p)$ and $\nabla(p)$ look like triangles; if we draw $[p] \times [p]$ as agreed in Convention 4.4, then $\Delta(p)$ is the "upper half" of the square $[p] \times [p]$, whereas $\nabla(p)$ is the "lower half" of this square. Here, on the other hand, is the Hasse diagram of the poset $\nabla(4)$: [Hasse diagram of $\nabla(4)$, with minimal element $(1, 1)$, omitted].
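Under the reading of Definition 13.1 used above (namely $\Delta(p) = \{(i,k) : i + k > p + 1\}$ and $\nabla(p) = \{(i,k) : i + k < p + 1\}$, our reconstruction of the cut along the horizontal axis), the two halves can be enumerated directly; in particular, $\Delta(1)$ comes out empty, matching the remark below:

```python
from itertools import product

def delta(p):  # the "upper half" of the square [p] x [p] (our reconstruction)
    return [(i, k) for i, k in product(range(1, p + 1), repeat=2) if i + k > p + 1]

def nabla(p):  # the "lower half" of the square [p] x [p]
    return [(i, k) for i, k in product(range(1, p + 1), repeat=2) if i + k < p + 1]

print(delta(1), len(delta(4)), len(nabla(4)))  # -> [] 6 6
```

The two halves have equal size, as expected from the reflection $(i, k) \mapsto (p + 1 - i,\, p + 1 - k)$.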
Note that $\Delta(p) = \emptyset$ when $p = 1$. Computations with SageMath [S+09] for $p = 3$ have made us suspect a periodicity-like phenomenon similar to Theorem 4.7:

Conjecture 13.3 (periodicity conjecture for the $\Delta$-triangle). Let $p \geq 2$ be an integer. Assume that $P$ is the poset $\Delta(p)$. Let $f ∈ \mathbb{K}^{\hat{P}}$ be a $\mathbb{K}$-labeling such that $R^p f \neq \bot$.
Let $a = f(0)$ and $b = f(1)$. Let $x ∈ \hat{P}$. We define an element $x' ∈ \hat{P}$ as follows:
• If $x = 0$ or $x = 1$, then we set $x' := x$.
• Otherwise, we write x in the form x = (i, j), and we set x ′ := (j, i).
Then, $a$ and $b$ are invertible, and we have $(R^p f)(x) = a\overline{b} \cdot f(x') \cdot \overline{a}b$.
If true, this conjecture and its analogue for the $\nabla$-triangle $\nabla(p)$ would generalize [GriRob15, Theorem 65], where $\mathbb{K}$ is commutative.

The "right half" triangle
We can also cut the square $[p] \times [p]$ along its vertical axis:

Definition 13.5. Let $p$ be a positive integer. Define a subset $\operatorname{Tria}(p)$ of the $p \times p$-rectangle $[p] \times [p]$ by $\operatorname{Tria}(p) := \{(i, k) ∈ [p] \times [p] \mid i \geq k\}$. This subset inherits a poset structure from $[p] \times [p]$, and we will consider $\operatorname{Tria}(p)$ as a poset using this structure.
The inequality $i \geq k$ in Definition 13.5 could just as well be replaced by the reverse inequality $i \leq k$; the resulting poset would be isomorphic to $\operatorname{Tria}(p)$. But we have to agree on something. Now, we again suspect a periodicity-like phenomenon:

Conjecture 13.7 (periodicity conjecture for the "right half" triangle). Let $p$ be a positive integer. Assume that $P$ is the poset $\operatorname{Tria}(p)$. Let $f ∈ \mathbb{K}^{\hat{P}}$ be a $\mathbb{K}$-labeling such that $R^{2p} f \neq \bot$. Let $a = f(0)$ and $b = f(1)$. Let $x ∈ P$. Then, $a$ and $b$ are invertible, and we have $\bigl(R^{2p} f\bigr)(x) = a\overline{b} \cdot f(x) \cdot \overline{a}b$.
In a sense, we can "almost" prove Conjecture 13.7: Namely, the proof of its commutative case ([GriRob15, Theorem 58]) given in [GriRob15] can be adapted to the case of a general ring K, as long as the number 2 is invertible in K. The latter condition has all the earmarks of a technical assumption that should not matter for the validity of the result; unfortunately, however, we are not aware of a rigorous argument that would allow us to dispose of such an assumption in the noncommutative case.

Trapezoids
Nathan Williams's conjecture [GriRob15, Conjecture 75], too, seems to extend to the noncommutative setting:

Conjecture 13.8 (periodicity conjecture for the trapezoid). Let $p$ be an integer $> 1$. Let $s ∈ \mathbb{N}$. Assume that $P$ is the subposet $\{(i, k) ∈ [p] \times [p] \mid i + k > p + 1 \text{ and } i \geq k \text{ and } k \leq s\}$ of $[p] \times [p]$. Let $f ∈ \mathbb{K}^{\hat{P}}$ be a $\mathbb{K}$-labeling such that $R^p f \neq \bot$. Let $a = f(0)$ and $b = f(1)$. Let $x ∈ P$. Then, $a$ and $b$ are invertible, and $(R^p f)(x)$ is given by a twisted-periodicity formula analogous to that of Conjecture 13.3.

Again, this has been verified using SageMath for certain values of $p$ and $s$ and some randomly chosen $\mathbb{K}$-labelings with $\mathbb{K} = \mathbb{Q}^{3 \times 3}$. Even for commutative $\mathbb{K}$, a proof is yet to be found, although significant advances have been recently made (see [Johnso23, Chapter 4]).

Ill-behaved posets
The above results and conjectures may suggest that every finite poset $P$ for which birational rowmotion $R$ has finite order when $\mathbb{K}$ is commutative must also satisfy a similar (if slightly more complicated) property when $\mathbb{K}$ is noncommutative. In particular, one might expect that if some positive integer $m$ satisfies $R^m = \operatorname{id}$ (as rational maps) for all fields $\mathbb{K}$, then $R^m f = f$ should also hold for all noncommutative rings $\mathbb{K}$ and all $\mathbb{K}$-labelings $f ∈ \mathbb{K}^{\hat{P}}$ that satisfy $f(0) = f(1) = 1$ (the latter condition ensures, e.g., that the $a\overline{b}$ and $\overline{a}b$ factors in Theorem 4.7 can be removed). However, this expectation is foiled by the following example:

Example 13.9. Let $P$ be the four-element poset $\{p, q_1, q_2, q_3\}$ with order relation defined by setting $p < q_i$ for each $i ∈ \{1, 2, 3\}$. (Its Hasse diagram consists of the three elements $q_1$, $q_2$, $q_3$, each covering the single minimal element $p$.)
It is known (see [GriRob16, Example 18] or [GriRob16, Corollary 76]) that the birational rowmotion $R$ of this poset $P$ satisfies $R^6 = \operatorname{id}$ (as rational maps) if $\mathbb{K}$ is a field. In other words, if $\mathbb{K}$ is a field, and if $f ∈ \mathbb{K}^{\hat{P}}$ is a $\mathbb{K}$-labeling such that $R^6 f \neq \bot$, then $R^6 f = f$. But nothing like this holds when $\mathbb{K}$ is a noncommutative ring. For instance, if we let $\mathbb{K}$ be the matrix ring $\mathbb{Q}^{2 \times 2}$, and if we define a $\mathbb{K}$-labeling $f ∈ \mathbb{K}^{\hat{P}}$ by
$$f(0) = I_2 \text{ (the identity matrix in } \mathbb{K}\text{)}, \quad f(1) = I_2, \quad f(p) = I_2, \quad f(q_1) = I_2, \quad f(q_2) = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \quad f(q_3) = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix},$$
then $R^m f$ is distinct from $f$ (and also distinct from $\bot$) for all positive integers $m$.
(See the detailed version of this article for a proof.)

Example 13.10. Let $P$ be the four-element poset $\{p_1, p_2, q_1, q_2\}$ with order relation defined by setting $p_i < q_j$ for each $i, j$. It follows from [GriRob16, Proposition 74 (b) and Proposition 61] that the birational rowmotion $R$ of this poset $P$ satisfies $R^6 = \operatorname{id}$ (as rational maps) if $\mathbb{K}$ is a field. On the other hand, if $\mathbb{K}$ is the matrix ring $\mathbb{Q}^{2 \times 2}$, then we can easily find a $\mathbb{K}$-labeling $f$ of $P$ such that $R^m f \neq f$ for all $1 \leq m \leq 10\,000$ (and probably for all positive $m$, but we have not verified this formally), despite $f(0)$ and $f(1)$ both being the identity matrix $I_2$.
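The failure in Example 13.9 can be checked by direct computation. The sketch below is our own: it implements one step of birational rowmotion on the claw poset in the toggle form $\text{new}(v) = (\text{sum below}) \cdot \overline{\text{old}(v)} \cdot \overline{\text{sum of inverses above}}$ (our reading of the transition equation, with both hat labels equal to the identity), using exact rational $2 \times 2$ matrices, and contrasts the noncommutative labels of Example 13.9 with scalar (diagonal) labels, for which $R^6$ is indeed the identity:

```python
from fractions import Fraction

class Mat2:
    """Minimal exact 2x2 rational matrix: just +, *, inverse and equality."""
    def __init__(self, a, b, c, d):
        self.m = (Fraction(a), Fraction(b), Fraction(c), Fraction(d))
    def __add__(self, o):
        return Mat2(*(x + y for x, y in zip(self.m, o.m)))
    def __mul__(self, o):
        a, b, c, d = self.m
        e, f, g, h = o.m
        return Mat2(a*e + b*g, a*f + b*h, c*e + d*g, c*f + d*h)
    def inv(self):
        a, b, c, d = self.m
        det = a*d - b*c  # a vanishing det here would mean R f = bot
        return Mat2(d/det, -b/det, -c/det, a/det)
    def __eq__(self, o):
        return self.m == o.m

def rowmotion(p_lab, q_labs):
    """One step of birational rowmotion on the claw poset p < q1, q2, q3,
    toggling from top to bottom, with both hat labels equal to the identity."""
    new_q = [p_lab * q.inv() for q in q_labs]             # above each q_i sits only the top hat
    t = new_q[0].inv() + new_q[1].inv() + new_q[2].inv()  # sum of inverses of labels above p
    new_p = p_lab.inv() * t.inv()                         # below p sits only the bottom hat
    return new_p, new_q

I2 = Mat2(1, 0, 0, 1)

# The labeling of Example 13.9:
f = (I2, [I2, Mat2(1, 0, 0, -1), Mat2(1, 1, 0, 1)])
state = f
for _ in range(6):
    state = rowmotion(*state)
print(state[1] == f[1])  # -> False: R^6 f differs from f over Q^{2x2}

# Contrast: with scalar (diagonal) labels the ring acts commutatively on them,
# and six steps return to the start:
g = (Mat2(2, 0, 0, 2), [Mat2(3, 0, 0, 3), Mat2(5, 0, 0, 5), Mat2(7, 0, 0, 7)])
state = g
for _ in range(6):
    state = rowmotion(*state)
print(state[0] == g[0] and state[1] == g[1])  # -> True
```

All labels here stay upper triangular with nonzero diagonal, so every inverse that the iteration needs exists, matching the claim that $R^m f \neq \bot$ for all $m$.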

A note on general posets
We finish with some curiosities. While Theorem 4.8 is specific to rectangles, its $(i, j) = (1, 1)$ case can be generalized to arbitrary finite posets $P$ in the following form:

Proposition 14.1. Let $P$ be any finite poset. Let $f ∈ \mathbb{K}^{\hat{P}}$ be a labeling of $P$ such that $Rf \neq \bot$. Let $a = f(0)$ and $b = f(1)$. Then,
$$b \cdot \Bigl(\sum_{\substack{u ∈ \hat{P};\\ u ⋗ 0}} \overline{(Rf)(u)}\Bigr) \cdot a \;=\; \sum_{\substack{u ∈ \hat{P};\\ u ⋖ 1}} f(u), \tag{58}$$
assuming that the inverses $\overline{(Rf)(u)}$ on the left-hand side are well-defined.
Proof. Even though we are not requiring $P$ to be a rectangle, we shall use some of the notations introduced in Section 6. Specifically, we shall use the notation $x_\ell$ defined in (9), the notion of a "path", and the notations $A^v_\ell$, $\overline{A^v_\ell}$, $A^p_\ell$, $\overline{A^p_\ell}$, $A^{u \to v}_\ell$ and $\overline{A^{u \to v}_\ell}$ defined afterwards. Hence, the equality (58) (which we must prove) can be rewritten as
$$b \cdot \Bigl(\sum_{\substack{u ∈ \hat{P};\\ u ⋗ 0}} \overline{u_1}\Bigr) \cdot a = \sum_{\substack{u ∈ \hat{P};\\ u ⋖ 1}} u_0 \tag{59}$$
(since $u_1 = (Rf)(u)$ and $u_0 = f(u)$). We assume that the inverses $\overline{(Rf)(u)}$ on the left-hand side of (58) are well-defined (since the claim of Proposition 14.1 requires this). We furthermore WLOG assume that $P \neq \emptyset$ (since the claim is easily checked otherwise). Using these two assumptions, it is not hard to show that both $a$ and $b$ are invertible. (See the detailed version for a proof.)

In Remark 7.7, we have observed that Corollary 7.5, Proposition 7.2 and parts (a) and (b) of Theorem 7.6 hold for our poset $P$ (even though $P$ is not necessarily a rectangle). Now, Theorem 7.6 (a) (applied to $\ell = 1$) shows that each $u ∈ P$ satisfies $b \cdot \overline{u_1} = A^{1 \to u}_1$. This latter equality also holds for $u = 1$ (indeed, from $1_1 = b$, we obtain $b \cdot \overline{1_1} = b \cdot \overline{b} = 1$; but it is easy to prove that $A^{1 \to 1}_1 = 1$ as well, and thus we obtain $b \cdot \overline{1_1} = 1 = A^{1 \to 1}_1$). Therefore, it holds for all $u ∈ P \cup \{1\}$. Hence, in particular, it holds for all $u ∈ \hat{P}$ satisfying $u ⋗ 0$. Summing it over all such $u$, we obtain
$$b \cdot \sum_{\substack{u ∈ \hat{P};\\ u ⋗ 0}} \overline{u_1} = \sum_{\substack{u ∈ \hat{P};\\ u ⋗ 0}} A^{1 \to u}_1.$$
Multiplying both sides of this equality by $a$ on the right, we obtain
$$b \cdot \Bigl(\sum_{\substack{u ∈ \hat{P};\\ u ⋗ 0}} \overline{u_1}\Bigr) \cdot a = \Bigl(\sum_{\substack{u ∈ \hat{P};\\ u ⋗ 0}} A^{1 \to u}_1\Bigr) \cdot a. \tag{61}$$
However, Theorem 7.6 (b) (applied to $\ell = 0$) shows that each $u ∈ P$ satisfies $u_0 = A^{u \to 0}_0 \cdot a$. This equality also holds for $u = 0$ (since $0_0 = a$ equals $A^{0 \to 0}_0 \cdot a = 1 \cdot a = a$). Thus, it holds for all $u ∈ P \cup \{0\}$. In particular, it therefore holds for all $u ∈ \hat{P}$ satisfying $u ⋖ 1$. Summing it over all such $u$, we obtain
$$\sum_{\substack{u ∈ \hat{P};\\ u ⋖ 1}} u_0 = \Bigl(\sum_{\substack{u ∈ \hat{P};\\ u ⋖ 1}} A^{u \to 0}_0\Bigr) \cdot a.$$
Since the two sums $\sum_{u ⋗ 0} A^{1 \to u}_1$ and $\sum_{u ⋖ 1} A^{u \to 0}_0$ compute the same total path weight, comparing this with (61), we obtain
$$b \cdot \Bigl(\sum_{\substack{u ∈ \hat{P};\\ u ⋗ 0}} \overline{u_1}\Bigr) \cdot a = \sum_{\substack{u ∈ \hat{P};\\ u ⋖ 1}} u_0,$$
which is precisely (59). This proves Proposition 14.1.
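As a sanity check (our own, with arbitrarily chosen invertible labels), the identity (58) can be verified numerically on the claw poset $p < q_1, q_2, q_3$ over $\mathbb{K} = \mathbb{Q}^{2 \times 2}$, this time with nontrivial hat labels $a$ and $b$; the toggle order used for $R$ is again our reading of the transition equation:

```python
from fractions import Fraction

class Mat2:
    """Minimal exact 2x2 rational matrix (same scaffolding as before)."""
    def __init__(self, a, b, c, d):
        self.m = (Fraction(a), Fraction(b), Fraction(c), Fraction(d))
    def __add__(self, o):
        return Mat2(*(x + y for x, y in zip(self.m, o.m)))
    def __mul__(self, o):
        a, b, c, d = self.m
        e, f, g, h = o.m
        return Mat2(a*e + b*g, a*f + b*h, c*e + d*g, c*f + d*h)
    def inv(self):
        a, b, c, d = self.m
        det = a*d - b*c
        return Mat2(d/det, -b/det, -c/det, a/det)
    def __eq__(self, o):
        return self.m == o.m

# Labels on the claw poset p < q1, q2, q3, with hat labels a = f(0), b = f(1):
a, b = Mat2(2, 1, 0, 1), Mat2(1, 0, 3, 1)
fp = Mat2(1, 2, 0, 1)
fq = [Mat2(1, 0, 1, 1), Mat2(2, 0, 0, 1), Mat2(1, 1, 1, 2)]

# One step of birational rowmotion (toggling from the top down):
Rq = [fp * q.inv() * b for q in fq]          # above each q_i: only the top hat, label b
T = Rq[0].inv() + Rq[1].inv() + Rq[2].inv()  # sum of inverses of the labels above p
Rp = a * fp.inv() * T.inv()                  # below p: only the bottom hat, label a

# The two sides of (58): u ranges over the covers of 0 (just p) on the left,
# and over the elements covered by 1 (namely q1, q2, q3) on the right.
lhs = b * Rp.inv() * a
rhs = fq[0] + fq[1] + fq[2]
print(lhs == rhs)  # -> True
```

Unwinding the definitions shows why: $b \cdot \overline{(Rf)(p)} \cdot a$ telescopes to $f(q_1) + f(q_2) + f(q_3)$ exactly, with no commutativity used.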