Monte Carlo Tree Search (MCTS) is a popular method for building strong computer game-playing agents in Artificial Intelligence without any prior domain knowledge. A literature review identifies and presents the strongest and most popular algorithms for tackling the so-called exploration vs. exploitation dilemma in Multi-armed Bandit (MAB) problems. Empirical studies comparing the performance of Thompson sampling (TS) with the state-of-the-art Upper Confidence Bound (UCB) approach on the classical MAB problem support the modified tree policy for MCTS proposed here. The domain of application is the board game The Settlers of Catan (SoC), which is implemented as a multi-agent environment in the programming language C, along with an MCTS-UCT agent, an MCTS-TS agent, and two strategy-playing agents, namely the ore-grain and the wood-clay agent. Performance measurements of the aforementioned agents, presented and discussed in this work, show that the agent with the modified tree policy outperforms the state-of-the-art approach (UCT).