Performance idea: Partition before executing current uniform indentation search #118
Partitioning is very difficult to do. In fact, that was originally my idea for the search algorithm, but I never got it to work. So here's my new buzzword: pool testing. In the same way that a classroom might test 30 children with 1 covid test kit by testing all of them in the same pool, then going back to re-test classes that come up positive, we can check for syntax errors. Roughly, we can do this with the syntax search by:

- Cheaply but inaccurately building all nodes in the syntax tree using our existing logic (no partitioning needed, battle-tested logic design).
- Instead of testing a small chunk of code to see if it's invalid, testing many at the same time in one pass.
- Iteratively repeating the process until we narrow down to a small subset of code.

Depending on how well the process works, it may replace the existing search. If it's not as accurate, we can still use this process to narrow down our search to a smaller area and then pass it into the original algorithm, which is slower but has better results.

## Pool testing: Block generation

One example could be building a tree of possible blocks, then testing all blocks with the same indentation level at the same time rather than individually. The hard part is that we need to minimize bookkeeping, or building the tree for large files is too slow (even without any calls to Ripper). The existing frontier/search algorithm keeps track of lines that are captured and those that are not. It also checks for when one block consumes/captures another block. These are both expensive checks.

## Pyramids of power

One way to look at code when you squint is to see sideways pyramids:
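For instance, a snippet like this (an illustrative sketch, not taken from the project's test suite) reads as a sideways pyramid if you look only at its indentation, rising to a peak and falling back to zero:

```ruby
class Zoo
  def fetch(name)
    if name
      animals.detect do |animal|  # the "peak" of this pyramid
        animal.name == name
      end
    end
  end
end
```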
Inside of that "pyramid" there are many peaks. If we pick one peak, turn it into a block, and iteratively expand it, we will capture everything down to indentation 0. We can then break up the source code and repeat the process above and below the "pyramid". From there we can search by asking "which pyramid is invalid?". Once we find the invalid pyramids, we can partition by indentation level by asking "at what indentation level does this begin to be invalid?". Once we know the indentation level where it starts, we can segment and look at individual blocks.

## Segment/split existing blocks

A challenge to this approach is that blocks become "expanded" and will capture the same content multiple times. For example:
expands to:
expands to:
Expands to:
So essentially EVERY block that is expanded in this pyramid will contain the initial line:
That same problem is repeated at larger increments/chunks. At every step of the expansion process, we may capture an entire method up and down. For example:
Expands to:
My theory is that I can use this structure to my advantage. I know that, for a given indentation level, the block with the most lines contains ALL other code of that line number or greater (within a given pyramid). By sorting them I can apply a `bsearch`. However, I can repeat the process and see which sub-blocks within that block are valid. Going from the above example, if I detect the problem is here:
Then working backward I would be able to see that the internal block from the prior expansion is valid:
Using this info I could "split" the known large bad block by removing the known good internal small block.
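A rough sketch of that "split" operation, treating blocks as plain arrays of line indexes (names here are illustrative, not the gem's API):

```ruby
# Remove a known-good inner block from a known-bad outer block, leaving
# two smaller suspect regions (the lines above and below the good block).
def split_around(bad_lines, good_lines)
  above = bad_lines.select { |line| line < good_lines.min }
  below = bad_lines.select { |line| line > good_lines.max }
  [above, below]
end

bad  = (10..30).to_a # block known to contain the syntax error
good = (15..20).to_a # inner block already verified as valid
split_around(bad, good) # => lines 10-14 and lines 21-30 remain suspect
```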
## Summary

Once we've narrowed the problem down to N small chunks, we can call the very expensive document syntax check N times. If we're lucky, this process will give us roughly the same results as the existing algorithm and we can be done. If we're unlucky, we can feed these smaller sections into the existing code search to give us decent results.

Recap the recap:

- Optimize an algorithm around building a tree structure as quickly as possible without calling Ripper.
- Identify ways we can pool together code in our tree to test large chunks at a time, to minimize Ripper calls (sketched below).
- Use this info to either narrow down our search space or complete the entire search.
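As a rough illustration of the pooling idea (a minimal sketch; `Ripper` is the real stdlib parser, but the helper names and block representation are assumptions):

```ruby
require "ripper"

# Sketch only: each "block" here is just a string of Ruby source.
def broken?(source)
  Ripper.new(source).tap(&:parse).error?
end

# Test a whole pool of blocks with one parse call; only split and recurse
# into a pool that fails, like re-testing a classroom that pooled positive.
def find_suspect_blocks(blocks)
  return [] if blocks.empty?
  return [] unless broken?(blocks.join("\n")) # pool parses cleanly: all clear
  return blocks if blocks.length == 1         # narrowed down to one block

  mid = blocks.length / 2
  find_suspect_blocks(blocks[0...mid]) + find_suspect_blocks(blocks[mid..])
end
```

Like the classroom analogy, this is cheap but imperfect: concatenating blocks can mask or introduce errors, which is why the narrowed result may still need the slower, more accurate search.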
Experimental work for a "tree" search described in #118 (comment) (I'm using "tree" to describe the "pyramid" concept in that post). The short version of the concept is this: recursively build a tree structure that is sorted in such a way that I can call `bsearch` to reduce/eliminate calls to Ripper. To do this I recursively build an individual "tree" which **should** break a document up into a large logical code block. Then I repeat the process for the code above and below that tree. Once done I can iteratively determine which trees hold syntax errors to focus my search. Then I can use the tree structure to isolate the indentation level (can use bsearch and Ripper) as well as find and split an invalid code block from that indentation level (also using bsearch and Ripper). In all it reduces the time to find syntax errors on the 9k-line file from 1.6s to 0.61s (roughly a third). The biggest challenge is that the old algorithm essentially had some redundancy if the blocks were not created correctly. This algorithm is much more sensitive and I've not found a way to back out of a bad/incorrect search plan yet, though that may come with time. This commit breaks a lot of existing tests; it's mostly meant as a trial WIP to validate different concepts.
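A hedged sketch of how the sorted structure enables `bsearch` (the layout of `expansions` and the monotonicity of "broken" over the nesting are assumptions of the approach, not guarantees):

```ruby
require "ripper"

def broken?(source)
  Ripper.new(source).tap(&:parse).error?
end

# Assume `expansions` are the nested expansions within one "tree"/"pyramid",
# sorted smallest to largest so each one contains all of the ones before it.
# If brokenness grows monotonically with that nesting, bsearch finds the
# smallest broken expansion in O(log n) parse calls instead of O(n).
def smallest_broken(expansions)
  expansions.bsearch { |source| broken?(source) }
end
```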
Here's a spike of that idea (on the branch schneems/partition-inside-out): e3c6c9b. It works for the given case but fails on others.
That didn't work. https://github.com/zombocom/dead_end/tree/schneems/block-node-attempt-2 does seem to hold, though. It doesn't use any of the weird-ish new ideas here. Instead, it doubles down on existing basics by re-writing the whole algorithm from the ground up.

## Improve the quality of block construction

The more accurately we can build code blocks, the less we have to lean on Ripper to confirm/evaluate them. This helps because parsing/validating code is expensive.
## Block creation/evaluation decoupling

As we evaluate blocks to see if they hold a syntax error, and then see if they hold THE source of all syntax errors, we're having to call the parser a lot.
With the old method, a tree was never actually built. Instead, groups of lines would be identified as a "block" and then evaluated. If it was valid source code it would be "hidden" and the process would recursively repeat. This worked (quite well); however, the rules for this evaluation and expansion were difficult to understand and modify. Even though I wrote them, it's hard to understand the second- and third-order effects of changing the old algorithm. Beyond usability, this outside-in parsing means we somewhat assume all code has an equal chance of containing the syntax error, and we must validate that high-indentation code is valid before we can move in to look at low-indentation code.

Due to improvements in block construction, we now have stronger guarantees while evaluating blocks. Instead of starting at the outer indentation and working in, we can go the other direction. We start with the root, which contains the entire document. It is split into several "blocks". We can then apply inspection to follow bad/invalid source code to its ultimate cause instead of having to recursively evaluate ALL the valid source code as well. One way to think of it is: before, we were trying to eliminate all the valid code to find the invalid code. Now we can instead "follow" the invalid code to find its ultimate source.

## Normalize states in building and evaluation
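A rough sketch of that root-down "follow the failure" direction (the node shape and helper names are assumptions for illustration; only `Ripper` is real):

```ruby
require "ripper"

# Illustrative shape: a node holds its source plus child blocks one level in.
BlockSketch = Struct.new(:source, :children)

def broken?(source)
  Ripper.new(source).tap(&:parse).error?
end

# Start from the root (the whole document) and descend only into children
# that fail to parse, instead of proving every valid block valid first.
def follow_failure(node)
  bad_children = node.children.select { |child| broken?(child.source) }
  return [node] if bad_children.empty? # no smaller culprit: blame this node

  bad_children.flat_map { |child| follow_failure(child) }
end
```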
This is somewhat of a quality-of-life improvement from a development perspective. Now it's clearer how the various expansion rules interact, as there is only one set of rules. Because all containers are normalized, the container can have specialized information about itself (specifically, how it will expand on its next step).

## Normalize intersection operations
## Better prioritization

In other experiments, I noticed that the existing "frontier" priority queue logic didn't behave as expected. Specifically, this code wouldn't be evaluated in an optimal order:
The reason the current algorithm doesn't do the right thing is that it only looks at the current indentation of a code block. I found that if I fed better information to my algorithm, I could reliably predict what the "next indentation" would be after an expansion. I then baked that information into the expansion process by first sorting by "next indent", then "current indent", then falling back to line number. The old algorithm really resisted adding this information in, as I had made other tweaks to different parts of the algorithm to somewhat course-correct for a bad expansion, and those tweaks no longer held true when I fixed how nodes expanded. Also, because the old process searches and evaluates at the same time, it's impossible to tell what the next indentation will be before the prior blocks have been executed. Based on the above optimizations, I was able to have strong confidence in my ability to predict the next indentation level in the new algorithm.
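A minimal sketch of that ordering (the attribute names, and sorting deepest indentation first, are my assumptions, not the gem's actual frontier code):

```ruby
# Order frontier entries by the indentation a block will have *after* its
# next expansion, then by current indentation, then fall back to line number.
FrontierEntry = Struct.new(:next_indent, :current_indent, :starts_at_line)

def frontier_order(entries)
  entries.sort_by do |entry|
    [-entry.next_indent, -entry.current_indent, entry.starts_at_line]
  end
end
```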
## Better code "scanning" via storing block nodes as a doubly linked list

The combination of all these properties on BlockNode turns it from a tree into a graph data structure. Once built, the structure provides a high-level representation of the domain that can be manipulated with comparative ease. If you consider that valid syntax can be represented as a tree (AST), and that a syntax error is caused by ambiguity, it seems a logical progression that a data structure holding the invalid syntax would need to carry more data than a valid tree. So it somewhat makes sense that a graph or graph-like data structure would be a good choice for representing invalid source code.

## End results

I'm still working on it, but the improvements needed to all come together since they all impact one another. I've experimented with each of these in isolation, but since the old algorithm was more "grown" than written with intention, it has a lot of coupled logic. With the new insights, I started from scratch with some test cases I knew were difficult and somewhat contradictory. I slowly built up data structures that had properties I wanted and then eventually fed them into my test cases. From there I tweaked the new algorithm in much the same way I developed the old algorithm, however with more tools, understanding, and experience.
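For concreteness, here is a rough sketch of what such a doubly linked block node could look like (attribute names like `above` and `below` are assumptions for illustration, not necessarily the branch's actual fields):

```ruby
# Sibling links (above/below) plus parent/children are what turn the tree
# into a graph that can be scanned cheaply in either direction.
class BlockNodeSketch
  attr_accessor :lines, :indent, :above, :below, :parent, :children

  def initialize(lines:, indent:)
    @lines = lines
    @indent = indent
    @children = []
  end

  # Walk downward through same-level siblings without re-scanning the document.
  def each_below
    node = below
    while node
      yield node
      node = node.below
    end
  end
end
```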
The "left/right" balanced heuristic proved to be to unstable to be used as a block expansion by itself. I found that while I could produce better blocks (valid source code) the old algorithm was optimized to work with itself and not with a better input. Instead of experimenting more with a different expansion algorithm I want to take a new direction and re-write search from the ground up. The idea is this: We need a tree we can search. The more time spent building a higher quality tree the better and more accurate the actual exploration of the tree will be. I've experimented with tree construction before. I know that I want nodes to be able to handle intersection/capture with many other nodes. I know that I want all valid source code to be built into logically separate "chunks" previously I've described these as "pyramids" as they stick out when viewed from the side #118 (comment). This work was done in an exploratory fashion so I'm going back and annotating some of my prior commits to flesh out the notes. Most of these were intended to be refactored out later and are perhaps less "professional" than they would be otherwise. I've decided to leave them in here where their inclusion helps to highlight what guided my decision making process as I was writing the code. I believe that will be more valuable in the long run than deleting all the notes and refactoring into one single large commit (which I usually prefer for open source feature work). All that's to say: I know some of these commits and their messages aren't great, just roll with it, the end result is worth it. ## Goal: Indentation focused blocks - A block's inner are comprised of sub-blocks Ideally I want these to be two separate blocks (certainly not 5 separate lines, that's useless) ``` class Zoo def foo print def bar print end end ``` If we can't do that, then it may be okay to build the (incorrect) match as long as we can determine at what indent level the problem was created, then we can work backwards if "class zoo" is implicated, we know it's missing an end and reverse scan up ## Thinking here Desired properties: - I think we can somehow leverage data about indentation to inform the blocks, We've got two cases we need to handle at the same time: ``` print "one" print "one" print "one" ``` And ``` def one print "one" end def two print "one" end def three Print "three end ``` Both should produce 3 inner "blocks". Since we're operating only at the level of one block at a time, how do we figure out how to generate the "inner". - One option: If we scan ahead to find all blocks we want to join together then we can make one block with ALL of those other sub blocks. - I like that - I want large blocks that can be "zoomed" into to expose the "inner" code. Next task is to look through all existing examples and build demo blocks, develop a feel for what's useful and what isn't then figure out if we can find patterns
The "left/right" balanced heuristic proved to be to unstable to be used as a block expansion by itself. I found that while I could produce better blocks (valid source code) the old algorithm was optimized to work with itself and not with a better input. Instead of experimenting more with a different expansion algorithm I want to take a new direction and re-write search from the ground up. The idea is this: We need a tree we can search. The more time spent building a higher quality tree the better and more accurate the actual exploration of the tree will be. I've experimented with tree construction before. I know that I want nodes to be able to handle intersection/capture with many other nodes. I know that I want all valid source code to be built into logically separate "chunks" previously I've described these as "pyramids" as they stick out when viewed from the side #118 (comment). This work was done in an exploratory fashion so I'm going back and annotating some of my prior commits to flesh out the notes. Most of these were intended to be refactored out later and are perhaps less "professional" than they would be otherwise. I've decided to leave them in here where their inclusion helps to highlight what guided my decision making process as I was writing the code. I believe that will be more valuable in the long run than deleting all the notes and refactoring into one single large commit (which I usually prefer for open source feature work). All that's to say: I know some of these commits and their messages aren't great, just roll with it, the end result is worth it. ## Goal: Indentation focused blocks - A block's inner are comprised of sub-blocks Ideally I want these to be two separate blocks (certainly not 5 separate lines, that's useless) ``` class Zoo def foo print def bar print end end ``` If we can't do that, then it may be okay to build the (incorrect) match as long as we can determine at what indent level the problem was created, then we can work backwards if "class zoo" is implicated, we know it's missing an end and reverse scan up ## Thinking here Desired properties: - I think we can somehow leverage data about indentation to inform the blocks, We've got two cases we need to handle at the same time: ``` print "one" print "one" print "one" ``` And ``` def one print "one" end def two print "one" end def three Print "three end ``` Both should produce 3 inner "blocks". Since we're operating only at the level of one block at a time, how do we figure out how to generate the "inner". - One option: If we scan ahead to find all blocks we want to join together then we can make one block with ALL of those other sub blocks. - I like that - I want large blocks that can be "zoomed" into to expose the "inner" code. Next task is to look through all existing examples and build demo blocks, develop a feel for what's useful and what isn't then figure out if we can find patterns
This is a failing test: https://github.com/zombocom/dead_end/tree/schneems/partition. The file is nine thousand lines and takes a tad over 1 second to parse, which means it hits the timeout.
We are already fairly well optimized for the current algorithm, so to handle arbitrarily large files we will need a different strategy.
The current algorithm takes relatively small steps in the interest of producing a good end result. That takes a long time.
Here's my general idea: we can split up the file into multiple large chunks before running the current fine-grained algorithm. At a high level: split the file into 2 parts and see which holds the syntax error. If we can isolate the problem to only half the file, then we've roughly cut processing time in half. We can run this partition step a few times.
The catch is that some files (such as the one in the failing test) cannot be split without introducing a syntax error (since it starts with a class declaration and ends with an `end`). To account for this we will need to split in a way that's lexically aware. For example, on that file I think the algorithm would determine that it can't do much with indentation 0, so it would have to go to the next indentation. There it could see there are N chunks of kw/end pairs; it could divide into N/2 and see if one of those sections holds all of the syntax errors. We could perform this division several times to arrive at a subset of the larger problem, then run the original search algorithm on it.
The challenge is that we will essentially need to build an inverse of the existing algorithm. Instead of starting with a single line and expanding towards indentation zero, we'll start with all the lines and reduce towards indentation max.
The expensive part is checking that code is valid via `Ripper.parse`. Subdividing large files into smaller files can help us isolate problem sections with fewer parse calls, but we've got to make sure the results are as good.

An alternative idea would be to use the existing search/expansion logic to perform more expansions until a set of N blocks is generated, then check all of them at once. Then, once the document problem is isolated, go back and re-parse only the N blocks with the existing algorithm. (Basically the same idea as partitioning, but we're working from the same direction as the current algorithm, just taking larger steps, which means fewer `Ripper.parse` calls.) However, we would still need a way to sub-divide the blocks with this process in the terminal case that the syntax error is on indentation zero and the document is massive and all within one kw/end pair.
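A rough sketch of the lexically aware bisection idea (illustrative only; it assumes the outer wrapper lines are known, that the error sits entirely inside one half, and that the whole document is already known to be broken):

```ruby
require "ripper"

def broken?(source)
  Ripper.new(source).tap(&:parse).error?
end

# `chunks` are same-indentation kw/end sections inside an outer wrapper
# (e.g. a class body). Re-wrapping each half keeps it lexically valid on
# its own, so we can bisect with few parse calls before handing the
# surviving section to the slower fine-grained search.
def bisect_chunks(chunks, prefix:, suffix:)
  while chunks.length > 1
    mid = chunks.length / 2
    left, right = chunks[0...mid], chunks[mid..]

    left_broken = broken?([prefix, *left, suffix].join("\n"))
    chunks = left_broken ? left : right
  end
  chunks.first
end
```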