# Publications

A fixed set of n agents share a random object: the distribution μ of the profile of utilities is IID across periods, but arbitrary across agents. We consider a class of online division rules that learn the realized utility profile, and know from μ only the individual expected utilities. They keep no record of past realized utilities, and do not know whether, or how many, new objects will appear in the future. We call such rules prior-independent.

A rule is fair if each agent, ex ante, expects at least 1/n-th of his utility for the object if it is a good, and at most 1/n-th of his disutility for it if it is a bad. Among fair prior-independent rules for dividing goods (bads), we identify those collecting the largest (smallest) total expected (dis)utility. There is exactly one fair rule for bads that is optimal in this sense; for goods, the set of optimal fair rules is one-dimensional. Both in the worst case and asymptotically, our optimal rules perform much better than the natural Proportional rule (for goods or for bads), and not much worse than the optimal fair prior-dependent rule that knows the full distribution μ in addition to the realized utilities.
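The Proportional rule used as a benchmark above allocates shares of the object in proportion to the agents' realized utilities. A minimal sketch (the function name and the two-agent example are ours, for illustration only):

```python
def proportional_shares(utilities):
    """Divide a single object in proportion to the agents' realized utilities."""
    total = sum(utilities)
    if total == 0:
        # degenerate case: nobody values the object; split it equally
        n = len(utilities)
        return [1.0 / n] * n
    return [u / total for u in utilities]

# agent 1 with utility 3 receives 3/4 of the object, agent 2 receives 1/4
shares = proportional_shares([3.0, 1.0])
```

The shares always sum to one, so agent i's realized utility under this rule is u_i²/Σ_j u_j; the paper's optimal fair rules improve on this benchmark in total expected utility.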

Using a simplified multistage bidding model with asymmetrically informed agents, De Meyer and Saley [17] demonstrated that the Brownian component in the evolution of stock-market prices may have an endogenous origin: random price fluctuations may be caused by the strategic randomization of “insiders.” The model reduces to a repeated game with incomplete information. This paper surveys the numerous studies inspired by the pioneering publication of De Meyer and Saley.

We compare the Egalitarian rule (aka Egalitarian Equivalent) and the Competitive rule (aka Competitive Equilibrium with Equal Incomes) for dividing bads (chores). Both rules are welfarist; in particular, the Competitive disutility profile(s) are the critical points of the Nash product of disutilities on the set of efficient feasible profiles. The Competitive rule is Envy-Free, Maskin Monotonic, and has better incentive properties than the Egalitarian rule. But, unlike the Egalitarian rule, it can be wildly multivalued, admits no selection continuous in the utility and endowment parameters, and is harder to compute. Thus in the division of bads, unlike that of goods, neither rule normatively dominates the other.
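The welfarist characterization of the Competitive rule quoted above can be stated compactly; the notation below is ours, not necessarily the paper's, with $F$ the set of feasible disutility profiles and $\mathrm{eff}(F)$ its efficient frontier:

```latex
% Competitive disutility profiles for dividing bads are the
% critical points of the Nash product of disutilities,
% restricted to the efficient frontier of the feasible set F:
d^{*} \text{ is a Competitive disutility profile}
\iff
d^{*} \text{ is a critical point of } \prod_{i=1}^{n} d_i
\text{ on } \mathrm{eff}(F).
```

For goods the analogous critical points maximize the Nash product; for bads the product need only be critical, which is one source of the multivaluedness noted above.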

The Gibbard–Satterthwaite theorem is a cornerstone of social choice theory, stating that an onto social choice function cannot be both strategy-proof and non-dictatorial if the number of alternatives is at least three. The Duggan–Schwartz theorem proves an analogue in the case of set-valued elections: if the function is onto with respect to singletons, and can be manipulated by neither an optimist nor a pessimist, it must have a weak dictator. However, the assumption that the function is onto with respect to singletons makes the Duggan–Schwartz theorem inapplicable to elections which necessarily select multiple winners. In this paper we make a start on this problem by considering rules which always elect exactly two winners (such as the consulship of ancient Rome). We establish that if such a *consular election rule* cannot be expressed as the union of two disjoint social choice functions, then strategy-proofness implies the existence of a dictator. Although we suspect that a similar result holds for *k*-winner rules with k > 2, there appear to be many obstacles to proving it, which we discuss in detail.

The land area covered by freely available very high-resolution (VHR) imagery has grown dramatically over recent years, which has considerable relevance for forest observation and monitoring. For example, a number of features related to forest type, forest management, degradation and disturbance can be recognized and extracted using VHR imagery. Moreover, time series of medium-to-high-resolution imagery such as MODIS, Landsat or Sentinel have allowed for monitoring of parameters related to forest cover change. Although automatic classification is used regularly to monitor forests with medium-resolution imagery, VHR imagery and changes in web-based technology have opened up new possibilities for the role of visual interpretation in forest observation. Visual interpretation of VHR imagery is typically employed to provide training and/or validation data for other remote sensing-based techniques, or to derive statistics directly on forest cover and forest cover change over large regions. Hence, this paper reviews the state of the art in tools designed for visual interpretation of VHR imagery, including Geo-Wiki, LACO-Wiki and Collect Earth, as well as issues related to the interpretation of VHR imagery and approaches to quality assurance. We also list a number of success stories where visual interpretation plays a crucial role, including a global forest mask harmonized with FAO FRA country statistics; estimation of dryland forest area; quantification of deforestation; national reporting to the UNFCCC; and drivers of forest change.

Global gridded crop models (GGCMs) are essential tools for estimating agricultural crop yields and externalities at large scales, typically at coarse spatial resolutions. Higher-resolution estimates are required for robust agricultural assessments at regional and local scales, where the applicability of GGCMs is often limited by low data availability and high computational demand. One approach to bridging this gap is to apply meta-models trained on GGCM output data to covariates of high spatial resolution. In this study, we explore two machine learning approaches – extreme gradient boosting and random forests – to develop meta-models for the prediction of crop model outputs at fine spatial resolutions. The machine learning algorithms are trained on global-scale maize simulations of a GGCM and applied, as an example, to the extent of Mexico at a finer spatial resolution. Results show very high accuracy, with R² > 0.96, for predictions of maize yields as well as of the hydrologic externalities evapotranspiration and crop available water, with low mean bias in all cases. While limited sets of covariates, such as annual climate data alone, already provide satisfactory results, a comprehensive set of predictors covering annual, growing-season, and monthly climate data is required to reproduce climate-driven inter-annual crop yield variability with high accuracy. The findings presented here provide a first proof of concept that machine learning methods are highly suitable for building crop meta-models for spatio-temporal downscaling, and indicate potential for further development towards scalable crop model emulators.
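The meta-modelling workflow described above — train on coarse process-model output, predict on finer covariates — can be sketched with scikit-learn's random forest. The data below are synthetic stand-ins invented for illustration; the study's actual covariates are GGCM climate inputs:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Synthetic stand-in for coarse-resolution GGCM training data:
# columns play the role of annual climate covariates.
X_coarse = rng.normal(size=(500, 4))
# Simulated maize yield from a hypothetical process model,
# depending on the first two covariates plus noise.
y_coarse = 3.0 + 1.5 * X_coarse[:, 0] - 0.8 * X_coarse[:, 1] \
    + rng.normal(scale=0.1, size=500)

# Train the meta-model on the coarse GGCM output.
meta_model = RandomForestRegressor(n_estimators=200, random_state=0)
meta_model.fit(X_coarse, y_coarse)

# Apply it to (here: synthetic) fine-resolution covariates of a target region.
X_fine = rng.normal(size=(50, 4))
y_fine_pred = meta_model.predict(X_fine)

train_r2 = r2_score(y_coarse, meta_model.predict(X_coarse))
```

The same pattern applies with extreme gradient boosting in place of the random forest; only the regressor class changes.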

A tournament can be represented as a set of candidates together with the results of pairwise comparisons between them. In our setting, candidates may form coalitions, and the candidates can fix who wins the pairwise comparisons within their coalition. A coalition is winning if it can guarantee that a candidate from this coalition will win each of his pairwise comparisons. This partitions all coalitions into winning and losing ones and hence defines a simple game. We show that each minimal winning coalition consists of a certain uncovered candidate and its dominators. We then apply solution concepts developed for simple games, considering the desirability relation and the power indices which preserve it. The tournament solution defined as the set of maximal elements of the desirability relation is a good way to select the strongest candidates. The Shapley–Shubik index, the Penrose–Banzhaf index, and the nucleolus are used to measure the power of the candidates. We also extend this approach to the case of weak tournaments.
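The structure of minimal winning coalitions described above is easy to verify on a small example. The four-candidate tournament below is our own illustration, not from the paper:

```python
from itertools import combinations

# beats[x] = the set of candidates that x defeats in the pairwise comparisons
beats = {
    "a": {"b", "c"},
    "b": {"c", "d"},
    "c": {"d"},
    "d": {"a"},
}
candidates = set(beats)

def is_winning(coalition):
    """A coalition wins iff some member beats every outside candidate:
    comparisons inside the coalition can then be fixed in that member's favor."""
    outside = candidates - coalition
    return any(outside <= beats[x] for x in coalition)

def dominators(x):
    """The candidates that beat x."""
    return {y for y in candidates if x in beats[y]}

# Enumerate minimal winning coalitions by brute force.
minimal_winning = [
    set(coal)
    for r in range(1, len(candidates) + 1)
    for coal in combinations(sorted(candidates), r)
    if is_winning(set(coal))
    and not any(is_winning(set(coal) - {x}) for x in coal)
]
```

In this tournament each minimal winning coalition indeed has the form {x} ∪ dominators(x) for an uncovered candidate x, matching the paper's characterization.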

We discuss generalizations of Rubio de Francia’s inequality to Triebel–Lizorkin and Besov spaces, continuing the research of Osipov (Sb Math 205(7):1004–1023, 2014) and answering a question Havin posed to one of the authors. Two versions of Rubio de Francia’s operator are discussed: we show that exponential factors are needed for the boundedness of the operator in some smooth spaces, while they are not essential in others. We study the operators on some “end” spaces of the Triebel–Lizorkin scale and then use standard interpolation methods.

We consider matrix games with incomplete information on both sides and a public signal on the state of the game, represented by a random binary code of fixed length. Players are computationally bounded and can only play strategies implementable by finite automata of different sizes: m for Player 1 and n for Player 2, where m ≫ n. We obtain a lower bound on m and an upper bound on n which may turn the original game with incomplete information for both players into a game with incomplete information for Player 2 only.

This research is motivated by the sustainability problems of oil palm expansion. Fast-growing industrial Oil Palm Plantations (OPPs) in the tropical belt of Africa, Southeast Asia and parts of Brazil lead to a significant loss of rainforest and contribute to global warming through the corresponding decrease in carbon dioxide absorption. We propose a novel approach to monitoring the expansion of OPPs, based on applying state-of-the-art Fully Convolutional Neural Networks (FCNs) to solve the semantic segmentation problem for Landsat imagery. The proposed approach significantly outperforms per-pixel classification methods based on Random Forests using texture features, NDVI, and all Landsat bands. Moreover, the trained FCN is robust to spatial and temporal shifts of the input data. The paper provides a proof of concept that FCNs, as semi-automated methods, enable OPP mapping of entire countries and may serve for yearly detection of oil palm expansion.
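Of the per-pixel baseline features mentioned above, NDVI is the simplest to reproduce. A sketch with synthetic reflectance arrays (for Landsat 8, red and near-infrared are, to our understanding, bands 4 and 5):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    eps guards against division by zero on dark pixels."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# toy 2x2 reflectance patches (values are invented)
nir = np.array([[0.5, 0.4], [0.6, 0.3]])
red = np.array([[0.1, 0.2], [0.1, 0.3]])
out = ndvi(nir, red)
```

Dense vegetation pushes NDVI towards 1, bare soil and water towards 0 or below; the paper's point is that such hand-crafted per-pixel features are outperformed by the learned FCN features.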

This paper analyzes bankruptcy games with nontransferable utility as a generalization of bankruptcy games with monetary payoffs. Following the game theoretic approach to NTU-bankruptcy problems, we study some appropriate properties and the core of NTU-bankruptcy games. Generalizing the core cover and the reasonable set to the class of NTU-games, we show that NTU-bankruptcy games are compromise stable and reasonable stable. Moreover, we derive a necessary and sufficient condition for an NTU-bankruptcy rule to be game theoretic.

This paper takes an axiomatic bargaining approach to bankruptcy problems with nontransferable utility by characterizing bankruptcy rules in terms of properties from bargaining theory. In particular, we derive new axiomatic characterizations of the proportional rule, the truncated proportional rule, and the constrained relative equal awards rule using properties which concern changes in the estate or the claims.

This work studies cooperative solutions in games with the Looking Forward Approach (LFA). The LFA is used to construct game-theoretic models and to define solutions for conflict-controlled processes in which information about the process updates dynamically. We suppose that during the game players lack certain information about the motion equations and payoff functions: at each instant, players possess only truncated information about the game structure. At given instants the information about the game updates; players receive the new information and adapt. The described model cannot be formalized using classical differential-game techniques. The new resulting cooperative solution for LFA models is presented and studied.

We consider random public signals on the state of a two-person zero-sum game with incomplete information on both sides (neither player knows the state of the game). To learn the state, each player chooses a finite automaton which receives the public signal; the player only sees the output of the chosen automaton. Supposing that the size of the automata available to Player 1 is substantially larger than that available to Player 2, we give an example of a public signal with random-length output strings for which the posterior belief of Player 1 pinpoints the state while the posterior belief of Player 2 stays close to his prior belief. Thus, we demonstrate that asymmetric information about the state of a game may arise not only from a private signal but also from a public signal combined with asymmetric computational resources of the players. Besides, for a class of random signals with fixed-length output strings, we estimate the fraction of signals for which some automaton of a given size may help Player 2 to significantly re-estimate the prior probability of the state. We show that this fraction is negligible if the size of the automata of Player 2 is sufficiently smaller than the length of the output strings.
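The mechanism behind this asymmetry can be illustrated with a toy Bayes update. The two-state setup and the signal probabilities below are invented for illustration: an automaton that reads the whole two-bit public code separates the states, while a cruder automaton that tracks only the first bit learns nothing:

```python
# Two equally likely states; the public signal is a two-bit code.
prior = {"s1": 0.5, "s2": 0.5}
# P(code | state): the second bit fully identifies the state,
# the first bit is uninformative.
signal = {
    "s1": {"00": 0.5, "01": 0.0, "10": 0.5, "11": 0.0},
    "s2": {"00": 0.0, "01": 0.5, "10": 0.0, "11": 0.5},
}

def posterior(prior, likelihood):
    """Bayes update given P(observation | state) for each state."""
    joint = {s: prior[s] * likelihood[s] for s in prior}
    z = sum(joint.values())
    return {s: p / z for s, p in joint.items()}

# A large automaton distinguishes all codes: observing "00" reveals state s1.
fine = posterior(prior, {s: signal[s]["00"] for s in prior})

# A small automaton only tracks the first bit:
# P(first bit = 0 | state) = P("00" | state) + P("01" | state).
coarse = posterior(prior, {s: signal[s]["00"] + signal[s]["01"] for s in prior})
```

Here `fine` puts probability one on the true state, while `coarse` coincides with the prior: the same public signal, filtered through automata of different sizes, yields asymmetric posterior beliefs.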