Saturday, February 21, 2015

Today we continue our discussion on the WRL research report titled Swift Java compiler. To handle stores correctly, there are a number of other modifications to make. The return operation at the end of the method takes two inputs: the actual return value and the latest global store. This ensures that all stores are correctly kept alive, especially during dead-code elimination passes. Similarly, a throw operation takes a value to throw and the global store as inputs. A method call takes the global store as input and produces a new store as output.
Synchronization on an object is represented with monitor_enter and monitor_exit operations. Similarly, reads of volatile fields can be constrained by requiring that they also produce a new store as output. In fact, the only operations that reuse the original store without producing a new one are those that store into an object or an array, for example put_field, put_static and arr_store. This was done mainly because these operations have a control dependence on the immediately preceding control operation that determines whether they execute, such as an 'if' operation. Instead of representing these control dependences explicitly, a choice was made to pin these operations to their original block.
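To make the bookkeeping in the two paragraphs above concrete, here is a minimal sketch, assuming invented names (the enum, flags and opcode list are mine, not Swift's actual tables), of how an IR might record which operations take the global store, which produce a new one, and which are pinned to their block.

// Sketch only: illustrates the store-threading rules discussed above.
// The opcode list and flags are illustrative and not taken from the report.
public class StoreRules {
    enum Op {
        RETURN(true, false, false),        // takes the return value and the latest global store
        THROW(true, false, false),         // takes the thrown value and the global store
        CALL(true, true, false),           // consumes a store and produces a new one
        MONITOR_ENTER(true, true, false),
        MONITOR_EXIT(true, true, false),
        VOLATILE_GET(true, true, false),   // volatile reads also produce a new store
        GET_FIELD(true, false, false),     // ordinary loads only read the store
        PUT_FIELD(true, false, true),      // reuses the store and is pinned to its block
        PUT_STATIC(true, false, true),
        ARR_STORE(true, false, true);

        final boolean takesStore;
        final boolean producesNewStore;
        final boolean pinnedToBlock;

        Op(boolean takesStore, boolean producesNewStore, boolean pinnedToBlock) {
            this.takesStore = takesStore;
            this.producesNewStore = producesNewStore;
            this.pinnedToBlock = pinnedToBlock;
        }
    }

    public static void main(String[] args) {
        for (Op op : Op.values()) {
            System.out.printf("%-13s takesStore=%b producesNewStore=%b pinned=%b%n",
                    op, op.takesStore, op.producesNewStore, op.pinnedToBlock);
        }
    }
}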
When a full method is described, there may be many nodes and edges to show. The examples therefore use a simplified notation in which every value is labeled and the inputs to each value are listed as named arguments. Exception paths are not shown; instead, a block is shaded if it has an exception path. This is purely a convenience for readability.

Thursday, February 19, 2015

Today we continue our discussion on the WRL research report titled Swift Java compiler. We saw the use of blocks with SSA graphs. We now look at the Swift type system. The type of each value in a graph is computed as the SSA graph is built from the bytecode. However, it is not always possible to recover the exact types of a Java program. Instead, it is always possible to assign a consistent set of types to the values such that the effect of the method represented by the SSA graph is the same as that of the original method. For some operations, the actual value types further specify the operation. For each type T, Swift can also keep track of these additional properties:
1) the value is known to be an object of exactly class T
2) the value is an array with a particular constant size
3) the value is known to be non-null
By incorporating these properties into the type system, the properties of any value in the SSA graph can be described by its type. Moreover, there can now be properties at different levels of recursive types, such as arrays, as the sketch below illustrates.
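A minimal sketch of how such properties could be attached to a type, assuming invented names (AnnotatedType and its fields are hypothetical, not Swift's implementation):

// Sketch: a type annotated with the extra properties listed above.
// Names (AnnotatedType, exactClass, knownArrayLength, nonNull) are invented.
public class AnnotatedType {
    final Class<?> baseType;     // the declared or recovered type T
    final boolean exactClass;    // value is known to be exactly of class T
    final int knownArrayLength;  // constant array size, or -1 if unknown
    final boolean nonNull;       // value is known to be non-null
    final AnnotatedType element; // properties of the element type, for arrays

    AnnotatedType(Class<?> baseType, boolean exactClass,
                  int knownArrayLength, boolean nonNull, AnnotatedType element) {
        this.baseType = baseType;
        this.exactClass = exactClass;
        this.knownArrayLength = knownArrayLength;
        this.nonNull = nonNull;
        this.element = element;
    }

    @Override
    public String toString() {
        return baseType.getName()
                + (exactClass ? " (exact)" : "")
                + (nonNull ? " (non-null)" : "")
                + (knownArrayLength >= 0 ? " [len=" + knownArrayLength + "]" : "");
    }

    public static void main(String[] args) {
        // e.g. "new String[10]" yields a non-null array of exactly String[] with length 10,
        // whose elements are (not necessarily exact, possibly null) Strings
        AnnotatedType elem = new AnnotatedType(String.class, false, -1, false, null);
        AnnotatedType arr = new AnnotatedType(String[].class, true, 10, true, elem);
        System.out.println(arr);
    }
}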
We now review the representation and ordering of memory operations. Local variables are strongly scoped within the current method. Java has strong typing and no pointers, so these variables cannot be altered by another method in the same thread or by another thread. The SSA form is well suited to representing them. Values in the SSA graph can be considered temporary variables stored in a large set of virtual registers. It is only near the end of the compilation process that the register allocator chooses which values are stored in actual registers and which are spilled to the stack.
On the other hand, reads or writes of global variables or locations allocated from the heap must be represented explicitly as memory operations, since there may be many hidden dependencies among these operations.
Regardless of whether data is local or global, the compiler may have to ensure that the original ordering among memory operations is maintained even though they may not be connected in the SSA graph. For example, a store followed by a load may have no scheduling dependence if the store does not produce a result that the load reads, yet their ordering may still need to be preserved. The Swift IR enforces such ordering by having the store operation actually produce a result, called the global store, which represents the current contents of memory. Since store operations modify the contents of memory, each conceptually produces a new copy of the global store, in copy-on-write fashion. Additionally, each load operation takes the global store as an extra input. These data dependences between memory operations allow many optimizations to generalize immediately to memory operations. The exception is that this representation does not include the required anti-dependences between load and store operations; the Swift compiler enforces those dependences via explicit checks during scheduling.
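A small source-level example of the situation described above, where a store and a following load have no value dependence yet must keep their order; the Swift IR makes this ordering explicit through the global store. The class and method names here are only for illustration.

// Sketch: a store followed by a load with no direct value dependence.
// If 'a' and 'b' may alias, moving the load above the store would be wrong,
// so the order must be preserved even though no value flows between them.
public class OrderingExample {
    static class Cell { int f; }

    static int storeThenLoad(Cell a, Cell b) {
        a.f = 42;       // store: produces no result that the next line consumes
        return b.f;     // load: if a == b, this must observe 42
    }

    public static void main(String[] args) {
        Cell c = new Cell();
        System.out.println(storeThenLoad(c, c)); // prints 42 only if the order is kept
    }
}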

#codingexercise
Double GetAlternateEvenNumberRangeSumSquareRootSquare (Double [] A)
{
if (A == null) return 0;
return A.AlternateEvenNumberRangeSumSquareRootSquare();
}

We continue to look at the improvements introduced by the Swift compiler. The explicit checks that the Swift compiler adds ensure, for example, that a load operation which takes a global store S as input is not moved down past a store operation that modifies that store. These constraints are applied during scheduling in order to avoid representing anti-dependences directly: anti-dependences would require a new kind of edge in the SSA graph, and avoiding them reduces the number of edges in the graph.
If the current global store is different on two incoming edges of a block, a phi node is inserted in that block to merge the two global stores into a new current store. Similarly, a phi node of the global store inserted at the top of a loop ensures that memory operations do not incorrectly move out of the loop. The addition of phi nodes increases the number of nodes in the SSA graph, but the increase is not as drastic as the increase in the number of edges discussed earlier. Moreover, the benefit of adding phi nodes is that memory operations are treated like all other values in the SSA graph.
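A rough sketch of the store-merging idea, assuming invented Value and phi representations (these are not Swift's classes): if the two predecessors reach a merge with different current stores, a phi value becomes the merge block's current store.

// Sketch: inserting a phi for the global store at a control-flow merge.
// The Value class and the mergeStores helper are invented; they only mirror
// the idea described above.
import java.util.Arrays;
import java.util.List;

public class StorePhiSketch {
    static class Value {
        final String op;
        final List<Value> inputs;
        Value(String op, Value... inputs) { this.op = op; this.inputs = Arrays.asList(inputs); }
        @Override public String toString() { return op; }
    }

    // If the two predecessors reach the merge with different current stores,
    // create a phi that becomes the merge block's current store.
    static Value mergeStores(Value storeFromThen, Value storeFromElse) {
        if (storeFromThen == storeFromElse) {
            return storeFromThen;            // nothing to merge
        }
        return new Value("phi(store)", storeFromThen, storeFromElse);
    }

    public static void main(String[] args) {
        Value entryStore = new Value("store0");
        Value thenStore  = new Value("store1 (after a call in the then-branch)", entryStore);
        Value elseStore  = entryStore;        // else branch did not change memory
        System.out.println(mergeStores(thenStore, elseStore));   // phi(store)
        System.out.println(mergeStores(entryStore, entryStore)); // store0
    }
}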

Sunday, February 15, 2015

Today we read from the WRL research report on the Swift Java compiler.
Swift is an optimizing Java compiler for the Alpha architecture. Swift translates Java bytecodes to optimized Alpha code and uses static single assignment (SSA) form for its intermediate representation. The IR allows for all standard scalar optimizations. The Swift compiler also implements method resolution and inlining, interprocedural alias analysis, elimination of Java runtime checks, object inlining, stack allocation of objects and synchronization removal. Swift is written completely in Java and installs its generated code into a high-performance JVM that provides all of the necessary run-time facilities.
The paper reports that the use of SSA form, which is central to the Swift design, was beneficial to many optimizations, and there were hardly any optimizations that were difficult to implement in SSA. The paper also discusses properties of the intermediate representation, some optimization passes and several overall performance results. The Swift design proved quite effective in allowing efficient and general implementations of numerous interesting optimizations.
Java is known for its object-oriented, type-safe design and its automatic memory management. Commercial Java systems do just-in-time compilation, which can only perform limited optimizations, but there are more expensive optimizations that can improve the relative efficiency of code generated from Java. Java presents many challenges for a compiler attempting to generate efficient code, because runtime checks and heap allocations introduce considerable overhead. Many routines in the standard Java library include synchronization operations, but these operations are unnecessary if the associated object is only accessed by a single thread. Virtual method calls are quite expensive if they cannot be translated to direct procedure calls, and they need to be resolved; moreover, many operations can throw exceptions at runtime, and the possibility of these exceptions further constrains many optimizations.
#codingexercise
Double GetAlternateEvenNumberRangeSumCubeRootSquare (Double [] A)
{
if (A == null) return 0;
return A.AlternateEvenNumberRangeSumCubeRootSquare();
}

We continue our discussion on the Swift Java compiler. We were discussing the challenges for a Java compiler, but Java also provides type safety. This simplifies much of the analysis: for example, local variables can only be changed by explicit assignment, a field of an object cannot be modified by an assignment to a different field, and so on. The static single assignment graph also makes most or all of the dependences in a method explicit. Swift makes use of SSA and is written in Java. It installs into a high-performance JVM that provides most of the necessary run-time facilities.

First we look at the intermediate representation. A method is represented by a static single assignment graph embedded in a control flow graph. The SSA graph consists of nodes, called values, that represent individual operations. Each value has several inputs from previous operations and a single output, so the node can be identified with that output. In addition, each node indicates the kind of operation that the value represents, as well as any extra information required for the result. There are about 55 basic operations, which include the usual arithmetic, bitwise logical and comparison operators; operations that manipulate program flow and merge values; operations for accessing the contents of arrays and objects and for allocating new objects and arrays from the heap; object-oriented operations such as instanceof computations, virtual method calls and interface calls; and Java-specific operations corresponding to bytecodes such as lcmp and fcmpl.
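A minimal sketch of what such a value node might look like, assuming invented names (Opcode, Value, auxInfo); the real Swift classes are certainly richer.

// Sketch: an SSA value node with an operation kind, extra information, and
// inputs from previous operations; the node itself stands for its single output.
// All names here are invented for illustration.
import java.util.Arrays;
import java.util.List;

public class SsaValueSketch {
    enum Opcode { CONSTANT, ADD, IF, PHI, GET_FIELD, PUT_FIELD, NEW, CALL_VIRTUAL, INSTANCE_OF, LCMP }

    static class Value {
        final Opcode op;               // the kind of operation this value represents
        final Object auxInfo;          // extra information, e.g. a constant or a field
        final List<Value> inputs;      // results of previous operations

        Value(Opcode op, Object auxInfo, Value... inputs) {
            this.op = op;
            this.auxInfo = auxInfo;
            this.inputs = Arrays.asList(inputs);
        }
    }

    public static void main(String[] args) {
        Value one = new Value(Opcode.CONSTANT, 1);
        Value two = new Value(Opcode.CONSTANT, 2);
        Value sum = new Value(Opcode.ADD, null, one, two); // identified by its single result
        System.out.println(sum.op + " with " + sum.inputs.size() + " inputs");
    }
}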

The Swift IR breaks out the various runtime checks, such as null checks, bounds checks and cast checks, as separate operations. These operations cause a runtime exception if the check fails. Because the checks are distinct operations, the Swift compiler can apply optimizations to them, such as using common sub-expression elimination on two null checks of the same array. Swift also has a value named init_ck that is an explicit representation of the class-initialization check that must precede some operations.
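For example, once the null check is its own operation, two accesses to the same array need only one check; a minimal illustration follows, with invented IR names in the comments.

// Sketch: two accesses to the same array need only one null check once the
// check is an explicit, separate operation that CSE can deduplicate.
public class NullCheckCse {
    static int sumFirstTwo(int[] a) {
        int x = a[0];   // null_ck(a), bounds_ck(a, 0), arr_load(a, 0)
        int y = a[1];   // the second null_ck(a) is identical and can be eliminated
        return x + y;
    }

    public static void main(String[] args) {
        System.out.println(sumFirstTwo(new int[] {3, 4})); // 7
    }
}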

We will continue our discussion on this WRL research report. Swift eliminates redundant initialization checks during optimization. In addition, Swift's underlying JVM replaces an initialization check by a NOP after the first time it is encountered. Representing these checks explicitly in the SSA graph enables more of them to be optimized away than would previously have been possible.

#codingexercise
Double GetAlternateEvenNumberRangeCubeRootSumCube (Double [] A)
{
if (A == null) return 0;
return A.AlternateEvenNumberRangeCubeRootSumCube();
}

Today we continue our discussion on this WRL research report. The Swift IR also has about 100 machine-dependent operations, which map to the Alpha instruction set. Swift makes two machine-dependent passes: the first converts the machine-independent operations into machine-dependent ones, and the second performs instruction scheduling, register allocation and code generation, operating directly on the SSA graph.
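A minimal sketch of such a two-pass machine-dependent stage, assuming an invented Pass interface and pass names; it only mirrors the ordering described above.

// Sketch of the two machine-dependent passes described above.
// The Pass interface and the pass descriptions are invented for illustration.
import java.util.List;

public class BackendPipeline {
    interface Pass { void run(Object ssaGraph); }

    public static void main(String[] args) {
        Pass lowerToAlpha = g ->
                System.out.println("pass 1: convert machine-independent values to Alpha-specific operations");
        Pass scheduleAndEmit = g ->
                System.out.println("pass 2: instruction scheduling, register allocation, code generation");

        Object ssaGraph = new Object(); // stand-in for a method's SSA graph
        for (Pass p : List.of(lowerToAlpha, scheduleAndEmit)) {
            p.run(ssaGraph);
        }
    }
}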

The SSA graph represents the use-def chains for all variables in a method, since each value explicitly specifies the values used in computing its result. When building the SSA graph, the def-use chains are formed as well and kept up to date. Thus an optimization can, at any stage, directly access the users of a particular value.
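A minimal sketch of that def-use bookkeeping, with invented names: whenever a use-def edge (an input) is added, the reverse def-use edge (a user) is recorded at the same time.

// Sketch: keeping def-use chains ("users") up to date as the use-def edges
// (inputs) are built, so optimizations can walk from a value to its users.
// Names are invented for illustration.
import java.util.ArrayList;
import java.util.List;

public class DefUseSketch {
    static class Value {
        final String name;
        final List<Value> inputs = new ArrayList<>(); // use-def: values this one reads
        final List<Value> users  = new ArrayList<>(); // def-use: values that read this one

        Value(String name) { this.name = name; }

        void addInput(Value v) {
            inputs.add(v);
            v.users.add(this);   // maintain the reverse edge at the same time
        }
    }

    public static void main(String[] args) {
        Value a = new Value("a");
        Value b = new Value("b");
        Value add = new Value("add");
        add.addInput(a);
        add.addInput(b);
        // An optimization can now go straight from 'a' to everything that uses it.
        System.out.println("users of a: " + a.users.size()); // 1
    }
}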

Each method is an SSA graph embedded within a control flow graph. Each value is located in a specific basic block of the CFG, although various optimizations may move values among blocks or even change the CFG. Each method's CFG has a single entry block, a single normal exit block and a single exceptional exit. Parameter information is in the entry block and the return value is in the exit block. Blocks end whenever an operation is reached that has two or more control exits. Extended basic blocks, although common in some Java compilers, are not used, because they would require special processing in many passes and can become quite complex.
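A minimal sketch of the CFG skeleton just described, with invented class names; it only shows the single entry, single normal exit and single exceptional exit, and a block that ends because it has two control exits.

// Sketch: a CFG skeleton with a single entry, a single normal exit, and a
// single exceptional exit, as described above. Names are invented.
import java.util.ArrayList;
import java.util.List;

public class CfgSketch {
    static class Block {
        final String label;
        final List<Block> successors = new ArrayList<>();
        Block(String label) { this.label = label; }
    }

    static class MethodCfg {
        final Block entry = new Block("entry");           // holds parameter values
        final Block exit = new Block("normal-exit");      // holds the return value
        final Block exceptionExit = new Block("exception-exit");
    }

    public static void main(String[] args) {
        MethodCfg cfg = new MethodCfg();
        Block body = new Block("body");
        cfg.entry.successors.add(body);
        body.successors.add(cfg.exit);           // normal return
        body.successors.add(cfg.exceptionExit);  // an operation that may throw
        System.out.println("body has " + body.successors.size() + " control exits");
    }
}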

#codingexercise
Double GetAlternateOddNumberRangeCubeRootSumCube (Double [] A)
{
if (A == null) return 0;
return A.AlternateOddNumberRangeCubeRootSumCube();
}

Saturday, February 14, 2015

Today we continue reading from the WRL research report. This time we review the benchmark for opening and closing a particular file. The tests were performed with both a flat file name and a nested path name. The UNIX derivatives were consistently faster than Sprite. The results also showed that in the remote case Sprite is faster than SunOS but slower than Ultrix. In any case, this benchmark did not explain the previously discussed anomaly. Another benchmark that was studied involved the create-delete of a temporary file. Data is written to the file on create and read from the file prior to delete. Three different sizes of data were transferred: 0, 10 Kbytes and 100 Kbytes. This benchmark highlighted a basic difference between Sprite and the UNIX derivatives. In Sprite, short-lived files can be created, used and deleted without any data ever being written to disk; information only goes to the disk after it has lived at least 30 seconds. In UNIX and its derivatives, the file system appears to be much more closely tied to the disk, and even a file with no data is costly to create and delete. This benchmark also explains the anomaly involving the poor performance of DS3100 Ultrix on the Andrew benchmark. The time for creating an empty file is 60% greater on DS3100-Ultrix-local than on 8800-Ultrix-local, and the time for a 100 Kbyte file on DS3100-Ultrix-Remote is 45 times as long as on DS3100-Sprite-Remote. This relatively poor performance may be attributed to slower disks and possibly to the NFS write policy, which requires new data to be written through to the disk when the file is closed. Note that DS3100-Ultrix-Remote achieves a write bandwidth of only about 30 Kbytes/sec.
Lastly, SunOS 4.0 showed a surprising behavior: the time for a file with no data is 66 ms, but the time for a 10 Kbyte file is 830 ms. It appears that when the file size grows from 8 Kbytes to 9 Kbytes, the elapsed time jumps into the 800 ms range.
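A rough reconstruction of the create-delete benchmark from the description above, written in Java rather than the C of the original study; the sizes, file names and timing approach are assumptions.

// Sketch of the create-delete benchmark described above: create a temporary
// file, write N bytes, read them back, delete, and time the round trip.
// This is a reconstruction from the description, not the original benchmark code.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class CreateDeleteBenchmark {
    static long runOnce(Path dir, int size) throws IOException {
        byte[] data = new byte[size];
        long start = System.nanoTime();
        Path f = dir.resolve("tmpfile");
        Files.write(f, data);        // create the file and write the data
        Files.readAllBytes(f);       // read it back before deleting
        Files.delete(f);
        return System.nanoTime() - start;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("bench");
        for (int size : new int[] {0, 10 * 1024, 100 * 1024}) {
            long ns = runOnce(dir, size);
            System.out.printf("size=%6d bytes: %.2f ms%n", size, ns / 1e6);
        }
        Files.delete(dir);
    }
}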
This concludes our discussion on this WRL Research report.
#codingexercise
Double GetAlternateEvenNumberRangeSumSqrtCube (Double [] A)
{
if (A == null) return 0;
return A.AlternateEvenNumberRangeSumSqrtCube();
}

Friday, February 13, 2015

We continue our study of the WRL research report on why operating systems have not become faster in step with improvements in hardware. We were evaluating one benchmark after another on a chosen set of RISC and CISC machines and their corresponding OS flavors. The last benchmark we were discussing was the read from file cache. This benchmark consists of a program that opens a large file and reads the file repeatedly in 16 Kbyte blocks. The file size was chosen so that the kernel copied the data from the file cache to a buffer in the program's address space, yet was large enough that the data being copied was not resident in any hardware cache. Thus it measured the cost of entering the kernel and of each copy. The receiving buffer was the same buffer every time, so it was likely to stay in the cache. The overall bandwidth of the data transfer was measured across a large number of kernel calls. The results indicated that the memory bandwidth was the same as in the previous benchmark involving block copy. The M2000 RISC/os 4.0 and the 8800 Ultrix 3.0 had the highest throughput, and the Sun4 and Sun3 had the lowest. Even where the processor was faster, that did not translate into any increase in memory bandwidth. This was clearly visible when the uncached throughput was divided by the MIPS rating: memory-intensive applications do not scale on these machines. The memory-copying performance actually drops with faster processors, both for RISC and CISC machines. The only other observation that differed significantly from the previous file-operation benchmark was that the Sun4 does relatively better due to its write-back cache.
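A rough reconstruction of the read-from-file-cache benchmark as described above, in Java rather than the original C; the file size, block size and repetition count are assumptions.

// Sketch of the file-cache read benchmark described above: open a large file
// and read it repeatedly in 16 Kbyte blocks, reporting the resulting bandwidth.
// Reconstructed from the description; the original study used C programs.
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Files;
import java.nio.file.Path;

public class FileCacheReadBenchmark {
    public static void main(String[] args) throws IOException {
        final int fileSize = 8 * 1024 * 1024;  // assumed: large, but fits in the OS file cache
        final int blockSize = 16 * 1024;
        Path file = Files.createTempFile("bench", ".dat");
        Files.write(file, new byte[fileSize]);

        byte[] buffer = new byte[blockSize];   // the same receiving buffer every time
        long bytesRead = 0;
        long start = System.nanoTime();
        try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "r")) {
            for (int pass = 0; pass < 20; pass++) {
                raf.seek(0);
                int n;
                while ((n = raf.read(buffer)) > 0) {
                    bytesRead += n;
                }
            }
        }
        double seconds = (System.nanoTime() - start) / 1e9;
        System.out.printf("throughput: %.1f Mbytes/sec%n", bytesRead / 1e6 / seconds);
        Files.delete(file);
    }
}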
The next benchmark is the modified Andrew benchmark. This benchmark involves copying a directory hierarchy containing the source code for a program, stat-ing every file in the new hierarchy, reading the contents of every copied file, and finally compiling the code in the copied hierarchy. The code was compiled with the same compiler on every system, one targeting a machine called SPUR, and the benchmark was then run on the target systems. The benchmark runs in two phases: the copy phase, which comprises the copying and scanning operations, and the compilation phase. Since this exercises the file system, it matters whether the disk is local or remote, so the test was run in both configurations. In each of the remote cases, the server was the same kind of machine as the client, and the file system protocol for remote access was NFS.
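A rough sketch of the copy and scan phases of the modified Andrew benchmark, reconstructed from the description above; the compile phase is omitted and the source and destination paths are taken from the command line.

// Sketch of the copy/scan phase of the modified Andrew benchmark described
// above: copy a source tree, stat every file in the copy, then read every
// copied file. Reconstructed from the description, not the original code.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class AndrewCopyScan {
    public static void main(String[] args) throws IOException {
        Path src = Path.of(args[0]);
        Path dst = Path.of(args[1]);

        List<Path> files;
        try (Stream<Path> walk = Files.walk(src)) {
            files = walk.collect(Collectors.toList());
        }
        for (Path p : files) {                       // copy phase
            Path target = dst.resolve(src.relativize(p));
            if (Files.isDirectory(p)) Files.createDirectories(target);
            else Files.copy(p, target);
        }
        try (Stream<Path> walk = Files.walk(dst)) {  // stat + read phase
            for (Path p : walk.collect(Collectors.toList())) {
                Files.getLastModifiedTime(p);        // "stat" every file
                if (Files.isRegularFile(p)) Files.readAllBytes(p);
            }
        }
        System.out.println("copy/scan phases complete");
    }
}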
The results from the benchmark runs showed several notable trends. First, no operating system scaled to match the hardware speedups. Second, Sprite came out consistently faster than Ultrix or SunOS for remote access, while NFS-based RISC workstations slowed down about 50% relative to local access. For some machines remote access was actually faster than local access, although this could not be explained. The DS3100-Ultrix combination was also somewhat slower than would have been expected.
#codingexercise

Double GetAlternateOddNumberRangeSumSqrtCube (Double [] A)
{
if (A == null) return 0;
return A.AlternateOddNumberRangeSumSqrtCube();
}

Thursday, February 12, 2015

Today we continue to read from the WRL research report.
The fourth benchmark uses the bcopy (block copy) procedure to transfer large blocks of data from one area of memory to another. This does not necessarily involve the operating system, but each system may have its own implementation of the procedure. Systems generally differ in cache organization and memory bandwidth, so this is a good benchmark for evaluating those factors. The tests were run in configurations with two different block sizes. In the first case, the blocks were large enough, and aligned properly, to use bcopy in the most efficient way, but small enough that both the source and the destination fit in the cache. In the second case, the transfer size was bigger than the cache size, so cache misses occurred continuously. In each case, several transfers were made between the same source and destination, and the average bandwidth of copying was measured.
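A rough reconstruction of the block-copy benchmark from the description above, in Java with System.arraycopy standing in for bcopy; the buffer sizes and repetition counts are assumptions.

// Sketch of the block-copy bandwidth benchmark described above: repeatedly
// copy between the same source and destination buffers, once with a size
// that fits in the cache and once with one that does not, and report the
// average bandwidth. Reconstructed from the description (the study used bcopy).
public class BcopyBenchmark {
    static double bandwidthMBps(int size, int repetitions) {
        byte[] src = new byte[size];
        byte[] dst = new byte[size];
        long start = System.nanoTime();
        for (int i = 0; i < repetitions; i++) {
            System.arraycopy(src, 0, dst, 0, size);  // stands in for bcopy
        }
        double seconds = (System.nanoTime() - start) / 1e9;
        return (double) size * repetitions / 1e6 / seconds;
    }

    public static void main(String[] args) {
        System.out.printf("cached   copy: %.0f Mbytes/sec%n", bandwidthMBps(64 * 1024, 10_000));
        System.out.printf("uncached copy: %.0f Mbytes/sec%n", bandwidthMBps(64 * 1024 * 1024, 20));
    }
}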
The results showed that the cached bandwidth was largest for the M2000 RISC/os and the 8800 Ultrix and progressively lower for the Sun4 and Sun3 machines. The uncached bandwidth shows the same gradation. This implies that faster processors brought no improvement in memory bandwidth, so memory-intensive applications are not likely to scale on these machines. In fact, the relative performance of memory copying drops with faster processors, across both RISC and CISC machines.
The next benchmark we consider is the read from file cache. The benchmark consists of a program that opens a large file and reads the file repeatedly in 16 Kbyte blocks. For each configuration, a file size was chosen such that the file would fit in the main-memory file cache. Thus the benchmark measures the cost of entering the kernel and copying the data from the kernel's file cache back to a buffer in the benchmark's address space.

We will continue with this post shortly.

Wednesday, February 11, 2015

Today we continue to read from the WRL research report on why operating systems weren't getting faster as fast as hardware. We were comparing the M2000, DS3100, Sun4, 8800, Sun3 and MVAX2 machines, running the Ultrix, SunOS, RISC/os and Sprite operating systems.
The first benchmark measures the cost of entering and exiting the kernel. By making repeated getpid calls, it was found that the Sun3 and MVAX2 Ultrix took the most time, while the M2000 RISC/os and the DS3100 flavors took the least. There is also a metric called the 'MIPS relative speed'. This metric indicates how well the machine performed on the benchmark relative to its MIPS rating and to the Sun3 time for kernel entry-exit. A rating of 1.0 on this scale means that the machine ran as expected; a rating less than 1.0 means that the machine ran the benchmark slower than expected, and vice versa. The RISC machines turned out to have ratings lower than 1.0, while the others were close to 1.0.
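A small sketch of the MIPS-relative speed metric as I read the description above: the speedup over the Sun3 divided by the ratio of MIPS ratings, so that 1.0 means the benchmark sped up exactly as the raw MIPS ratings would predict. This reconstruction may differ in detail from the report's definition, and the numbers in the example are hypothetical.

// Sketch: the MIPS-relative speed metric as reconstructed from the text above.
// A value of 1.0 means the speedup over the Sun3 matches the MIPS ratio exactly.
public class MipsRelativeSpeed {
    static double relativeSpeed(double sun3TimeMs, double machineTimeMs,
                                double machineMips, double sun3Mips) {
        double speedup = sun3TimeMs / machineTimeMs;
        double mipsRatio = machineMips / sun3Mips;
        return speedup / mipsRatio;
    }

    public static void main(String[] args) {
        // Hypothetical numbers, for illustration only.
        System.out.printf("%.2f%n", relativeSpeed(100.0, 20.0, 10.0, 2.0)); // 1.00: as expected
        System.out.printf("%.2f%n", relativeSpeed(100.0, 40.0, 10.0, 2.0)); // 0.50: slower than expected
    }
}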
The second benchmark is called cswitch. It measures the cost of context switching, plus the time for processing small pipe reads and writes. The benchmark operates by forking a child process and then passing one byte back and forth between parent and child using pipes. Each round trip costs two context switches plus one kernel read and one kernel write operation. Here too the RISC machines did not perform well, except for the DS3100/Ultrix combination.
The third benchmark exercises the select kernel call. It creates a number of pipes, places data in some of them, and then repeatedly calls select to determine how many of the pipes are readable. A zero timeout is used in each select call so that the call never waits. Three configurations were used: a single empty pipe, ten empty pipes, and ten pipes all full of data. Again the RISC machines did not perform well. RISC/os's emulation of the select kernel call appeared faulty, imposing a timeout of 10 ms even when the calling program specified an immediate timeout.