Question 1: During which phase of the software testing process would points-to analysis be useful, and how would it be used?

Answer: - Parameterized object-sensitive points-to analysis for Java is a static analysis, i.e., it examines the program without executing it. It is typically applied while the code for a project is being developed, to find bugs such as memory leaks early, and it can also be used to check the project as a whole. As the code base grows, the analysis must be parameterized according to various factors based on the needs of the user to remain practical. Also, as the paper mentions, this analysis can serve as the foundation for def-use analysis and side-effect analysis, which are common forms of static analysis in software testing.

Question 2: What are the issues with using points-to analysis with Object Oriented programs, and how does Object Sensitivity resolve these?

Answer: - Object-sensitive points-to analysis becomes relevant with the introduction of classes and objects in OOP. The paper shows various instances where OOP features such as inheritance, containment, and encapsulation are not captured precisely by the object-insensitive analyses originally designed for programs written in C: an insensitive analysis merges all invocations of a method (in particular, a constructor) regardless of the receiver object, so distinct objects are conflated. Object sensitivity resolves this by analyzing a method separately for each receiver object on which it may be invoked, and the slides show specific examples of how this new method resolves the imprecision.

Question 3: How can one improve the scalability of the analysis presented in the paper? Is it possible to approximate the flow from the standard libraries instead of processing the library code?

Answer: - Processing the library code is unavoidable, since library code can affect objects and fields in the actual program, so one has to analyze it for the analysis to have any significant impact. Having said this, scalability is not an issue according to the authors.
The best-known context-insensitive analysis, Andersen's technique, is about as fast as the context-sensitive analysis presented in the paper, and memory usage is similar. The reason the two behave similarly is that whereas Andersen's analysis processes more receivers per object (since it lumps all distinct objects reaching the same reference variable together), the object-sensitive analysis processes more contextual versions of the same object, but each version has roughly proportionally fewer receivers. If scalability did become a problem for very large programs, the technique allows parameterization with respect to both the naming depth (i.e., how many levels of receiver objects the analysis distinguishes when new objects are created inside constructors) and the set of variables for which the analysis keeps track of different contexts, whereas such parameterization is impossible by design in Andersen's technique.
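The conflation problem discussed in the answer to Question 2 can be sketched with a small example. This is an illustrative program (the class names `Box`, `Conflation` are hypothetical, not from the paper): a context-insensitive analysis merges both calls to the `Box` constructor, so field `f` appears to point to both allocated objects, while an object-sensitive analysis keeps the two receiver objects apart.

```java
// Hypothetical container class; names are illustrative, not from the paper.
class Box {
    Object f;
    Box(Object v) { this.f = v; }   // flow into f depends on the receiver
    Object get() { return f; }
}

public class Conflation {
    public static void main(String[] args) {
        Box b1 = new Box("a string");   // allocation site o1, holds a String
        Box b2 = new Box(42);           // allocation site o2, holds an Integer

        // A context-insensitive analysis merges both constructor calls:
        // it concludes f may point to either object, so both gets below
        // appear to return {String, Integer}, and both casts look unsafe.
        // An object-sensitive analysis analyzes the constructor once per
        // receiver (o1 vs o2) and computes b1.get() -> String only.
        String s = (String) b1.get();
        Integer i = (Integer) b2.get();
        System.out.println(s + " / " + i);
    }
}
```

At run time the casts are obviously safe; the imprecision exists only in the object-insensitive abstraction, which is exactly what downstream clients such as cast-safety checking or side-effect analysis suffer from.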
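The naming-depth parameter mentioned above can be illustrated with objects allocated inside a constructor. This is a sketch under assumed names (`Item`, `Holder`, `NamingDepth` are hypothetical): with naming depth 1 the single allocation site inside `Holder`'s constructor yields one abstract object shared by all holders, whereas a deeper naming also distinguishes the allocation by the `Holder` receiver that performed it.

```java
// Illustrative classes; names are hypothetical, not from the paper.
class Item {
    int id;
    Item(int id) { this.id = id; }
}

class Holder {
    Item item;
    // The Item is allocated *inside* the constructor. With naming depth 1,
    // this one allocation site produces a single abstract Item, so the
    // items of all Holders are merged by the analysis. With a deeper
    // naming, each abstract Item is additionally named by the Holder
    // (receiver) that created it, keeping h1.item and h2.item separate.
    Holder(int id) { this.item = new Item(id); }
}

public class NamingDepth {
    public static void main(String[] args) {
        Holder h1 = new Holder(1);
        Holder h2 = new Holder(2);
        // Concretely the two Items are distinct objects; only a shallow
        // abstraction merges them.
        System.out.println(h1.item.id + " " + h2.item.id);
    }
}
```

Choosing the depth (and the set of variables tracked per context) is precisely the cost/precision knob that Andersen's analysis does not offer.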