This exploratory study investigated how the combination of top-down and bottom-up processing influences decision-making for high and low trusters playing the ultimatum game against a computer agent. We designed an experiment in which (1) participants expected their partners to be either humans or agents (top-down processes) and (2) agents used one of four types of algorithmic behavior (bottom-up processes) to propose and respond. We found that, in the proposal phase, high trusters made fairer decisions in the human condition than in the agent condition when opponents’ behaviors were intentional rather than ambiguous. In the response phase, the higher a participant’s level of trust, the more likely they were to avoid unfair responses toward an opponent whose proposed distribution they had approved. These results suggest that, in interpersonal communication, high trusters flexibly use both types of cognitive processing to process information economically when developing representations of others and deciding how to respond.