How I solved Mega Cube — Part 2 (Optimize Time Complexity)

Mohsen Saghafi
2 min read · Apr 26, 2021


<- First Part

When I finally found the peace of mind to continue working on this puzzle, I started by profiling how much time the different parts of the A* code consumed. The first thing that popped up in the results was the check for explored states.

Even an experienced software engineer can make this mistake. The Python operator in looks the same everywhere, but its time complexity depends on the data structure it is applied to. On a list it runs in O(n), which was the case here. On a set or dictionary it is O(1) on average. A huge improvement from a very simple change.
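
Roughly what the change looks like (a minimal sketch with made-up state values, not the actual puzzle code; the only assumption is that a state can be represented as something hashable, such as a tuple):

```python
# Before: membership check on a list scans every element -> O(n) per lookup.
explored = []
explored.append(("cube", "state", 1))
print(("cube", "state", 1) in explored)   # True, but found by a linear scan

# After: membership check on a set uses hashing -> O(1) on average per lookup.
explored = set()
explored.add(("cube", "state", 1))
print(("cube", "state", 1) in explored)   # True, constant-time on average
```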

After the first fix, I ran the code again. Within 5 minutes the second bottleneck showed up. To pick the best candidate, A* needs to sort the frontier list on every iteration of the loop, and sorting is expensive once the list gets huge. In our case, the frontier can easily reach billions of items. The better solution was Python's heapq module: it inserts and removes items in O(log n) time, and because the heap always keeps the lowest-cost entry at the front, the explicit sorting step is no longer needed.
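
A sketch of what the frontier handling can look like with heapq (hypothetical names and dummy states, not the original code):

```python
import heapq
import itertools

# The frontier holds (priority, tie_breaker, state) tuples. The counter breaks
# ties so that two entries with equal priority never compare the states themselves.
counter = itertools.count()
frontier = []

def push(state, priority):
    # O(log n): sift the new entry into place in the heap.
    heapq.heappush(frontier, (priority, next(counter), state))

def pop_best():
    # O(log n): remove and return the entry with the lowest priority
    # (lowest f = g + h in A*), without ever sorting the whole list.
    priority, _, state = heapq.heappop(frontier)
    return state, priority

# Example usage with dummy states:
push("state_a", 12)
push("state_b", 7)
print(pop_best())  # ('state_b', 7)
```

heapq only guarantees that the smallest entry sits at index 0, which is exactly what A* needs when choosing the next state to expand.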

Implementing these two small tweaks improved the performance dramatically. Within 10 to 15 minutes of running time, the new code had processed as many possible states (items in the frontier) as the old code had in the previous couple of weeks.

This improvement gave me hope that the final solution would be reached soon. But after 35 to 40 minutes, my laptop ran out of memory and the process was killed by the OS. I soon realized that the problem was far from solved.

Guessing a solution

My laptop might have a limited amount of RAM, but I was sure the VMs in the cloud do not. The first thought that came to my mind was to rent a Compute Engine instance from GCP. I loaded my code onto the machine, installed the necessary libraries to run it, and BAM: the same result, this time after 30 minutes. I had at least six times more RAM now, but that didn't help either.

This failure revealed that I had some memory optimization work ahead of me…

Next Part ->
