# Solving LinkedIn Queens

If you don't know, LinkedIn has its own puzzle games, similar to the New York Times Wordle/Crossword. One of these is a variation of N-Queens, with 2 main differences:

1. The queens don't invalidate diagonals when placed, but instead their immediate surroundings (like a king in chess)
2. The board is split into "sections", where each section can have only one queen.

An example of a solved board is below:

![[Pasted image 20251227171305.png]]

And an unsolved board:

![[Pasted image 20251227171443.png]]

I decided to create a solver for this. It breaks down into 3 main problems:

1. "Scraping" the board state from linkedin.com/games/queens
2. Solving the board once we have it in a state we can actually use (e.g. an array)
3. Writing the solution back to the website

#### The Solver

As this is very similar to the N-Queens LeetCode problem, my solution for the solving portion was just to use a naive backtracking algorithm. We have to modify the algorithm to account for the different "sections" of the board, a.k.a. the colors. To keep track of this, we serialize the board in this structure:

```
self.board = [[None for _ in range(self.COLS)] for _ in range(self.ROWS)]
```

We later fill each cell with a Tile class, seen here:

```
@dataclass
class Tile:
    section: "Section"
    has_queen: bool = False

    def __repr__(self):
        return f"{self.has_queen}"
```

The sections also each have their own class, seen here:

```
class Section:
    def __init__(self, color):
        self.color = color
        self.tiles = set()
        self.contains_queen = False
```

As we read the board state, we create Tile objects and link them with sections. It's also important that we read the given queens on the board, as the game will typically give us 1-2 "correct" queens when we start.

After we fetch the board state, we can run the solving algorithm. The algorithm is pretty much identical to the N-Queens editorial on LeetCode, with a different base case. This is the success case.
We guarantee that, if we place a queen, it is valid. So, if every section has a queen placed, then we have found our solution.

```
if all([section.contains_queen for section in self.sections]):
    print("found solution")
    return True
```

As we are trying to find a queen to place, we add these checks:

```
or self.has_adjacent_queen(row, col)
or self.board[row][col].section.contains_queen
```

to satisfy the variations in LinkedIn N-Queens.

I have run this with a larger board (10x10 is the largest I've seen for the game), and it still seems to run in ~1s, which surprised me initially. The algorithm is O(n!), with n being the row count. There are some other operations we do (iterating, hashmap reads/writes, etc.), but the main time sink will come from the backtracking itself.

#### Optimization

Currently, I have not optimized the code at all. I was satisfied with achieving a 1s solve time on the actual game, and moved on to parts 1 and 3 of the solver (the engine was built first). However, I would still really like to optimize it, even if there will be no difference in the time shown on LinkedIn (I don't think I can get it under 1s due to the overhead of steps 1/3, but I haven't looked into optimizing that yet).

##### **Static Analysis**

Backtracking is an expensive algorithm. If you've played LinkedIn games before, you know there are some rules you can use to eliminate squares where we know we _can't_ place a queen. Some examples are [here](https://www.reddit.com/r/LinkedInTips/comments/1fwr251/linkedin_queens_a_list_of_strategies/). For example:

![[Pasted image 20251227174608.png]]

Look at the blue section. Whether we place the queen in the 2nd or 4th column, that entire row of squares will be eliminated from contention; we won't be able to place a queen there. Ideally, we want to embed these ideas into the algorithm, so we spend less time backtracking (expensive) and more time finding freebies.
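To make the modified search concrete, here is a minimal, self-contained sketch of how the backtracking might fit together. It uses simplified `Tile`/`Section` stand-ins (the real engine keeps this state on a class and tracks a section's tiles), and `has_adjacent_queen` is my guess at an implementation; the standard per-column check from the LeetCode editorial is kept, while the diagonal checks are dropped per the LinkedIn rules:

```python
from dataclasses import dataclass

@dataclass
class Section:
    color: str
    contains_queen: bool = False

@dataclass
class Tile:
    section: Section
    has_queen: bool = False

def has_adjacent_queen(board, row, col):
    # A queen only invalidates its immediate surroundings (king moves),
    # so it suffices to check the 8 neighboring cells.
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            r, c = row + dr, col + dc
            if (dr or dc) and 0 <= r < len(board) and 0 <= c < len(board[r]):
                if board[r][c].has_queen:
                    return True
    return False

def solve(board, sections, row=0, used_cols=None):
    if used_cols is None:
        used_cols = set()
    # Success base case: every section has a queen.
    if all(section.contains_queen for section in sections):
        return True
    if row >= len(board):
        return False
    if any(tile.has_queen for tile in board[row]):
        # A pre-placed ("given") queen already covers this row.
        # (A fuller version would also seed used_cols from the givens.)
        return solve(board, sections, row + 1, used_cols)
    for col in range(len(board[row])):
        tile = board[row][col]
        # Column check from classic N-Queens, plus the two
        # LinkedIn-specific checks; no diagonal checks.
        if (col in used_cols
                or has_adjacent_queen(board, row, col)
                or tile.section.contains_queen):
            continue
        tile.has_queen = True
        tile.section.contains_queen = True
        used_cols.add(col)
        if solve(board, sections, row + 1, used_cols):
            return True
        tile.has_queen = False  # undo and keep backtracking
        tile.section.contains_queen = False
        used_cols.remove(col)
    return False
```

This places one queen per row, exactly like the editorial version, and the section/adjacency checks simply prune more branches.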
I would like to implement these rules into our algorithm: every time we backtrack, we run this "static analysis" to see what queens we can place automatically, without backtracking through other positions.

##### **Is the game already solved?**

I'm pretty sure - if we cover enough of these "rules" in the static analysis - that we shouldn't need backtracking at all. We can just loop: run rules -> place queens based off eliminated positions -> run rules, until we have placed every queen. Just running the rules *should* give us enough information to see where each queen should be placed.

##### **Rewrite in Rust**

The game engine is currently in Python, which is a slow language. It would be best to rewrite it in Rust, which on its own shouldn't be too difficult, but it will change the interop with steps 1/3 (described later). At a high level, the scraper frontend is in Python, and it constructs a Python array for the board state. We can't pass a Python array directly to Rust. There are 2 ways to fix this:

1. Find some crate for Rust/Python FFI. We need to call the "solve game" Rust function from Python, and Rust will also need to send the solved board back to Python when it is complete.
2. Have the Rust and Python programs running at the same time. When the Python side fetches the board state, it writes it to a file and sends a notification to the Rust program to read the file and solve the board (or the Rust program can just poll the file, but then we need to know when the Python program is actually done writing). Rust then solves the board, and does the same in reverse for Python (writes the result to a file, notifies Python it's complete).

Either way, we need to create some binary serialization for the board state, as again, we can't just directly pass a Python array to our Rust program, or vice versa. Alternatively, I could just use a Rust browser interaction framework, but I'm not as familiar with those as I am with Playwright/Selenium.
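Whichever interop route I take, the serialization itself can stay simple. Here is a hypothetical byte layout (not something the engine implements today), sketched on the Python side with the standard `struct` module: a two-byte header for the dimensions, then one byte per tile, with the high bit flagging a queen and the low bits carrying the section index. A Rust reader would just mirror the same layout:

```python
import struct

def serialize_board(board):
    # board: list of rows; each cell is (has_queen, section_index).
    # Layout: rows and cols as one unsigned byte each, then one byte
    # per tile: bit 7 = has_queen, bits 0-6 = section index.
    rows, cols = len(board), len(board[0])
    out = bytearray(struct.pack("BB", rows, cols))
    for row in board:
        for has_queen, section in row:
            out.append((0x80 if has_queen else 0) | (section & 0x7F))
    return bytes(out)

def deserialize_board(data):
    rows, cols = struct.unpack_from("BB", data)
    tiles = data[2:]
    return [
        [(bool(tiles[r * cols + c] & 0x80), tiles[r * cols + c] & 0x7F)
         for c in range(cols)]
        for r in range(rows)
    ]
```

Single bytes are enough here since the game boards top out around 10x10, and the format is trivially fixed-size, which also sidesteps the "is the other side done writing?" problem for the file-based option (read exactly `2 + rows * cols` bytes).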
#### Fetching and Writing Board State

Playwright is used to actually interact with the browser, including getting the board and writing the result. We simply navigate to linkedin.com/games/queens, read the HTML, parse out the board size and the state of each tile (so we know its color and whether it has a queen), and read it into a Python array. This looks something like this:

```
def get_tiles(self):
    frame = self.page.frame_locator("iframe[title='games']")
    queens_grid = frame.locator("#queens-grid")
    children = queens_grid.locator(":scope > div")
    board, tile_pointers, curr_row = [[]], [[]], 1
    for child in children.all():
        label = child.get_attribute("aria-label")
        if not label:
            continue
        label = label.split(" ")
        has_queen = label[0] == "Queen"
        color = self.extract_color(label)
        label_row = int(label[label.index("row") + 1].rstrip(","))
        if label_row == curr_row + 1:
            board.append([])
            tile_pointers.append([])
            curr_row += 1
        board[-1].append((has_queen, color))
        tile_pointers[-1].append(child)
    return board, tile_pointers
```

We then pass this to our Queens solver engine, which returns the completed board. We save the tile pointers and iterate over the solved board. If solved_board at i, j has a queen, then we call dblclick() on tile_pointers at i, j.

```
def write_solution(self, board, tile_pointers, starting_queens):
    for i in range(len(board)):
        for j in range(len(board[i])):
            if board[i][j].has_queen and (i, j) not in starting_queens:
                tile_pointers[i][j].dblclick()
```

##### **Really Annoying Problems**

I used 3 accounts to test this logic:

1. Guest mode, just opening the link in an incognito tab
2. A throwaway account
3. My main LinkedIn account

All 3 of these would render the game board HTML differently. In particular:

1. The guest account board was located inside an iframe
2. The throwaway account board had no iframes; it was just in the HTML directly
3. My main LinkedIn account was in a very weird state.
When I downloaded the HTML, I had no idea where the game board was located. It was just a ~50 line JavaScript file, with one VERY long line (a couple megabytes) that seemed to contain a lot of HTML. After randomly searching through this line, I found what looked to be the game board HTML, and was able to parse it out.

The third scenario took me a while to debug - and I did eventually fix it, finally getting me a ~1s Queens time, putting me at the top of my LinkedIn leaderboard - just to get put on a different A/B test the very next day. This time, I genuinely could not get the board out of the HTML; it was hidden in a very weird way. When I continue work on the solver, I will focus on the engine itself, as fixing the web scraping logic is quite a headache.

#### Future Work

I want to highly optimize the actual algorithm. I will need to catalog the rules that I need to embed into the solver. Once I get that done, I can work on optimizations not specific to the algorithm, such as SIMD, data structure optimizations, etc.
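As a starting point for that catalog, the row-elimination idea from the blue-section example could be sketched as a standalone rule. This is a generalized, hypothetical version (names and representation are mine, not the engine's): if every remaining candidate cell of a section sits in a single row (or column), no other section can place a queen in that line.

```python
def eliminate_by_confined_section(colors, eliminated):
    """If all remaining candidate cells of a section lie in one row
    (or one column), eliminate that line's cells for other sections.

    colors: 2D grid of section labels.
    eliminated: set of (row, col) cells already ruled out.
    Returns the set of newly eliminated cells."""
    rows, cols = len(colors), len(colors[0])
    candidates = {}
    for r in range(rows):
        for c in range(cols):
            if (r, c) not in eliminated:
                candidates.setdefault(colors[r][c], []).append((r, c))
    newly = set()
    for color, cells in candidates.items():
        cell_rows = {r for r, _ in cells}
        cell_cols = {c for _, c in cells}
        if len(cell_rows) == 1:  # section confined to one row
            r = next(iter(cell_rows))
            newly |= {(r, c) for c in range(cols)
                      if colors[r][c] != color and (r, c) not in eliminated}
        if len(cell_cols) == 1:  # section confined to one column
            c = next(iter(cell_cols))
            newly |= {(r, c) for r in range(rows)
                      if colors[r][c] != color and (r, c) not in eliminated}
    return newly
```

Rules in this shape compose naturally: each one maps (board, eliminated cells) to more eliminated cells, so the "run rules -> place forced queens -> run rules" loop from earlier is just iterating a list of such functions until nothing new is eliminated.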