I wrote a method that performs alpha-beta pruning on a game tree. However, there is a 20-second time limit on a single move, and I have to play against another player's opponent method. I tried searching 6 levels deep, but I would lose; at 7 levels deep I was taking too long.

Is there a way to make my code run faster? If possible, can you also check my logic for the alpha-beta pruning part and see if I missed anything that could cause it to run longer than it should? I have included the iterative deepening method that calls the alpha-beta pruning method.

I want it to run faster so I can search deeper in the tree, gain an advantage, and win the game. The game is won when either of the opponent's queens (each player has 2 queens) has no place left to move on a 7x7 board.
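To make the win condition concrete, here is a rough standalone sketch of the terminal test. The grid representation and the one-step queen movement below are simplifications I made for illustration; they are not my actual `Board` API:

```python
# Sketch of the win condition: a side loses when either of its two
# queens has no legal move. The board is a 7x7 grid of 0 (empty) / 1
# (blocked); queens step one square in any of the 8 directions here
# for simplicity (the real movement rules may differ).

def has_move(board, queen):
    r, c = queen
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            nr, nc = r + dr, c + dc
            if 0 <= nr < 7 and 0 <= nc < 7 and board[nr][nc] == 0:
                return True
    return False

def opponent_has_lost(board, opponent_queens):
    # The game is won as soon as ANY opponent queen is stuck.
    return any(not has_move(board, q) for q in opponent_queens)
```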

iterative deepening:

```python
def iterative_deepening_alpha_beta(self, game, time_left, depth,
                                   alpha=float("-inf"), beta=float("inf"),
                                   maximizing_player=True):
    current_depth = 0
    best_move_q_1 = (-1, -1)
    best_move_q_2 = (-2, -3)
    best_score = float("-inf")
    timeout = False
    while not timeout:
        try:
            # self.currentDepth = current_depth
            m_q_1, m_q_2, score = self.alphabeta(game, time_left, current_depth)
            best_score = score
            best_move_q_1 = m_q_1
            best_move_q_2 = m_q_2
            current_depth += 1
        except Timeout:
            timeout = True
            self.moveCache = {}
            self.sorted_cache = {}
            self.sorted_list_m1 = []
            self.sorted_list_m2 = []
            # self.currentDepth = 1
            print('Iterative deepening ... Timeout approaching', current_depth)
            break
    print(best_move_q_1, best_move_q_2, best_score, ' <<<CS>>>')
    return best_move_q_1, best_move_q_2, best_score
```
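The pattern I am trying to follow is roughly this (a generic sketch with a stand-in `search` callable rather than my actual `alphabeta` signature): deepen one ply at a time, and only overwrite the best result once a depth has fully completed, so a mid-search timeout never clobbers a good answer.

```python
import time

class Timeout(Exception):
    pass

def iterative_deepening(search, time_limit=20.0, safety_margin=0.5):
    """Deepen one ply at a time, keeping the last *completed* result.

    `search(depth, time_left)` is a stand-in for a full depth-limited
    search; it is expected to raise Timeout if it runs out of time.
    """
    start = time.time()
    time_left = lambda: time_limit - (time.time() - start)
    best = None
    depth = 1
    while True:
        try:
            if time_left() < safety_margin:
                raise Timeout
            # Only overwrite once the whole depth finished successfully.
            best = search(depth, time_left)
            depth += 1
        except Timeout:
            break
    return best, depth - 1  # result and deepest completed ply
```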

alpha-beta pruning:

```python
def alphabeta(self, game, time_left, depth,
              alpha=float("-inf"), beta=float("inf"), maximizing_player=True):
    """Implementation of the alphabeta algorithm

    Args:
        game (Board): A board and game state.
        time_left (function): Used to determine time left before timeout
        depth: Used to track how deep you are in the search tree
        alpha (float): Alpha value for pruning
        beta (float): Beta value for pruning
        maximizing_player (bool): True if maximizing player is active.

    Returns:
        (tuple, tuple, int): best_move_queen1, best_move_queen2, val
    """
    if time_left() < 24:
        raise Timeout

    moves_1 = game.get_legal_moves_of_queen1()
    moves_2 = game.get_legal_moves_of_queen2()

    if depth == 0 or self.is_terminal_state(game):
        utility_val = CustomEvalFn().score(game, maximizing_player)
        return None, None, utility_val

    overall_best_move_q_1 = (-1, -1)
    overall_best_move_q_2 = (-2, -3)
    highest_move_diff = float("-inf") if maximizing_player else float("inf")

    from_cache = False
    if depth == 1 and depth in self.moveCache:
        moves_1 = []
        moves_2 = []
        if depth in self.sorted_cache:
            self.moveCache[depth] = self.sorted_cache[depth]
        else:
            self.moveCache[depth] = sorted(
                self.moveCache[depth], key=lambda k: k['score'], reverse=True)
            self.sorted_cache[depth] = self.moveCache[depth]
        if len(self.sorted_list_m1) != 0 and len(self.sorted_list_m2) != 0:
            moves_1 = self.sorted_list_m1
            moves_2 = self.sorted_list_m2
            from_cache = True
        else:
            for m in self.moveCache[depth]:
                moves_1.append(m["m1"])
                moves_2.append(m["m2"])
            self.sorted_list_m1 = moves_1
            self.sorted_list_m2 = moves_2

    if not from_cache:
        random.shuffle(moves_1)
        random.shuffle(moves_2)

    for move_q_1 in moves_1:
        for move_q_2 in moves_2:
            if move_q_1 != move_q_2:
                possible_game_state = game.forecast_move(move_q_1, move_q_2)
                m1, m2, score = self.alphabeta(
                    possible_game_state, time_left, depth - 1,
                    alpha, beta, not maximizing_player)
                if depth == 1:
                    if depth in self.moveCache:
                        self.moveCache[depth].append(
                            {'score': score, "m1": move_q_1, "m2": move_q_2})
                    else:
                        self.moveCache[depth] = [
                            {'score': score, "m1": move_q_1, "m2": move_q_2}]
                if maximizing_player:
                    if score > highest_move_diff:
                        highest_move_diff = score
                        overall_best_move_q_1 = move_q_1
                        overall_best_move_q_2 = move_q_2
                    if highest_move_diff >= beta:
                        return move_q_1, move_q_2, score
                    alpha = max(alpha, score)
                else:
                    if score < highest_move_diff:
                        highest_move_diff = score
                        overall_best_move_q_1 = move_q_1
                        overall_best_move_q_2 = move_q_2
                    if highest_move_diff <= alpha:
                        return move_q_1, move_q_2, score
                    beta = min(beta, score)
    return overall_best_move_q_1, overall_best_move_q_2, highest_move_diff
```
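For comparison, this is the minimal textbook alpha-beta I tried to base my logic on. The `Node` class and its `value`/`children` attributes are made up for the sketch; my real code searches pairs of queen moves instead of a single move per node:

```python
class Node:
    """Toy game-tree node: a static value and a list of child nodes."""
    def __init__(self, value=0, children=None):
        self.value = value
        self.children = children or []

def alphabeta_ref(node, depth, alpha=float("-inf"), beta=float("inf"),
                  maximizing=True):
    # Depth limit or leaf: return the static evaluation.
    if depth == 0 or not node.children:
        return node.value
    if maximizing:
        best = float("-inf")
        for child in node.children:
            best = max(best, alphabeta_ref(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:  # beta cutoff: opponent will avoid this branch
                break
        return best
    else:
        best = float("inf")
        for child in node.children:
            best = min(best, alphabeta_ref(child, depth - 1, alpha, beta, True))
            beta = min(beta, best)
            if alpha >= beta:  # alpha cutoff
                break
        return best
```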