May/June 2017 issue of Java Magazine, “Quiz Yourself” (1z0-808, 1z0-809)

Written by Mushfiq Mammadov

The May/June 2017 issue of Java Magazine published four questions covering the 1Z0-808 (OCA Java SE 8) and 1Z0-809 (OCP Java SE 8) exams:


Question 1 (intermediate). Given this fragment:

What is the result?

a. Compilation fails at line n1.
b. Compilation fails at line n2.
c. An exception is thrown at line n2.
d. name is Fred.
e. name is null.


Question 2 (intermediate). Given this code:

Which three of the following are true?

a.  Inserting {int x = 100;} at line n1 results in a compilation error.
b.  Inserting int x = 100; at line n1 results in a compilation error.
c.  Inserting {int x = 100;} at line n2 results in a compilation error.
d.  Inserting int x = 100; at line n2 results in a compilation error.
e.  Inserting int x = 100; at line n3 results in a compilation error.


Question 3 (advanced). Given a directory hierarchy such that the root directory / contains a subdirectory a/, that subdirectory a/ contains a subdirectory x/, and that subdirectory x/ contains a subdirectory y/, and also given that a file a.txt is in subdirectory x/ and a file b.txt is in subdirectory y/, like this:

/
|__ a/
       |__ x/
              |__ a.txt
              |__ y/
                     |__ b.txt

Suppose that all the files and directories are plain and regular in nature and fully accessible by the executing code, that the following setup code runs with a current working directory of a/, and before any other code:

  • Path dir = Paths.get("./x").toAbsolutePath();

Which of the following fragments produce the following output? (All compile and run normally.)


a. Files

b. Files

c. Files

d. Files
       .find(dir, 1, (x, y) -> Files.isDirectory(x))

e. Files
       .find(dir.normalize(), 1, (x, y) -> !Files.isDirectory(x))


Question 4 (advanced). Given these descriptions:

  1. Access by a single-threaded program
  2. Access by a multithreaded program
  3. High frequency of reading
  4. High frequency of writing
  5. Low frequency of reading
  6. Low frequency of writing

Which combination would make it appropriate to use a CopyOnWriteArrayList?

a. 1 and 4 in the same program
b. 1 and 6 in the same program
c. 2 and 4 in the same program
d. 2, 3, and 6 in the same program
e. 2, 4, and 5 in the same program


Answers

  1.  C
  2.  B, C, D
  3.  E
  4.  D


Question 1

The correct answer is option C. This question probes the nature of Java’s multidimensional arrays. In fact, it’s often said that Java does not have multidimensional arrays, but that it allows arrays of anything, including arrays of arrays. While this position is debatable (the language specification both asserts it and contradicts it) and it is certainly a fine distinction, it embodies an important and useful truth. Therefore, in the declaration of x in line n1, x is not an array, and even more, x is not a two-dimensional array. Rather, x is, as always, a reference. The variable x can refer to a simple, single-dimensional array, but the elements of that array must themselves be arrays (or be null). Those secondary arrays must in turn be arrays of references to String, or they must be null.

So far, so good. What does x refer to in this case? The initialization expression new String[1][] is interesting. It instantiates a single array with one element. The element type of that array is “array of Strings,” but because the second square-bracket pair is empty, no secondary array is created. Notice that this syntax is legal (and useful), so the compilation failure proposed in option A is false.

Whenever Java allocates heap memory for an object (and arrays are objects), the memory is zeroed before any further initialization (such as invoking a constructor) occurs. This means that the single element of the array that is created contains a null pointer. That single array element is x[0], and given that no other assignment is made to it in the code, its value is null. The code x[0][0] is syntactically legitimate, so option B is false. It would be interpreted as follows: “Follow the reference in the variable x to an array. Take the first element of that array, and follow that reference to another array. Take the first element of that array and use it as a reference to a String.” Of course, in this case, x[0] is a null pointer, so the attempt to find the subarray throws a NullPointerException, which means that option C is true.

Options D and E are both false, because the code never prints anything; it crashes with the NullPointerException before that point. In fact, if line n2 did not exist, the same NullPointerException would occur at the output line, because the print expression also attempts to dereference the null pointer that is x[0].
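The behavior described above can be reproduced with a short sketch. The original fragment is not reprinted here, so the names used (x, probe) are illustrative reconstructions of lines n1 and n2:

```java
public class JaggedArrayDemo {
    static String probe() {
        String[][] x = new String[1][]; // line n1: a one-element array of String[]
        // x[0] is null: the empty second bracket pair means no sub-array was created
        try {
            x[0][0] = "Fred";           // line n2: dereferences the null x[0]
            return x[0][0];
        } catch (NullPointerException e) {
            return "NPE";
        }
    }

    public static void main(String[] args) {
        System.out.println(probe()); // NPE
    }
}
```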

Question 2

The correct answers are options B, C, and D. In Java, variables are block scoped. Generally, that means that a variable is visible from the point of its declaration to the end of the block that encloses the declaration. In this case, that block is the following:

  • {
    // general code, x not in scope because
    // it's not yet declared
    int x = 99;
    // general code, x in scope
    } // scope of x ends here

On this basis, option A does not cause a compilation error, because the declaration of int x that it contains is entirely local to the block. Hence, option A is incorrect.

However, in option B, the variable introduced has a scope that extends throughout the for loop, the block associated with that loop, and all the way to the closing curly brace following line n3. As a result, the variable declared in the for loop becomes a duplicate variable x and the code would not compile. Because of this, option B is a correct answer.

A variation on the simple description of scope above applies to for loops, formal parameters of methods, try-with-resources, and catch blocks. These structures have broadly similar forms with variable declarations enclosed in parentheses and with a block immediately following the closing parenthesis. In these situations, the scope of the variable begins with its declaration, but the scope ends with the closing brace of the following block. If a for loop has a single subordinate statement, rather than a block, the scope ends at the end of that statement. It’s probably a very bad idea stylistically to leave out the braces, even when only a single statement is controlled by the loop. Therefore, the preferred style is the following:

  • for (int x = 0; x < 10; x++) {
    // x in scope
    } // scope of x ends here

In particular, notice that although a variable declaration does not escape the block that contains its scope, it does penetrate inside any nested blocks. In this case, any attempt to define a new variable x inside the for loop (whether surrounded by a block of its own or not) will fail, because the x declared in the for loop’s control structure results in the new declaration being a duplicate. Because of this, options C and D both result in compilation errors and they are, therefore, correct answers.

Because the declaration of int x in the for loop has a scope that ends with the end of the block that is subordinate to that loop, there’s no variable x in scope at line n3. As a result, adding the declaration in option E does not cause any problems, and option E is incorrect.
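The scope rules above can be sketched in a single method. This is an illustrative reconstruction, not the quiz listing; the illegal insertions (options B, C, and D) appear only as comments, because including them would stop the class from compiling:

```java
public class ScopeDemo {
    static int sum() {
        {
            int x = 100;          // option A: this x dies at the block's closing brace
        }
        // int x = 100;           // option B: would still be in scope inside the loop
        int total = 0;
        for (int x = 0; x < 3; x++) {
            // int x = 1;         // options C/D: duplicate of the loop's x
            total += x;
        }
        int x = 100;              // option E: the loop's x is out of scope here
        return total + x;         // (0 + 1 + 2) + 100
    }

    public static void main(String[] args) {
        System.out.println(sum()); // 103
    }
}
```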

A side note on exam questions: as a rule, questions try to avoid using negatives, because they’re easy to miss. In this case, notice that the question asks a positive question, but the options refer to “result in a compilation error.” This might be unexpected, but be sure to read the question that’s actually in front of you and try to avoid letting your brain make assumptions. Programmers know that close attention to detail is critical in this line of work, so be sure to use that skill when answering questions, too.

Question 3

The correct answer is option E. This is a question that demands a certain knowledge of Java’s APIs. There aren’t many questions of this kind, because there’s an argument that this kind of information can readily be looked up and need not be learned. On the other hand, it’s not a bad idea to have a broad knowledge of the kinds of features available in the APIs, because it’s common to see handwritten code that duplicates (and commonly does so with errors) capabilities that are provided in a core API. After all, if you don’t even know the capability exists, you’re not very likely to look up the details of how to use it. In a learning situation, such as reading this article, it’s often interesting to discover what features are available that might be unfamiliar.

In this question, you’re told that all the code compiles and runs, so from that you know that there must be five static methods in a class called Files. By the way, this class full of utilities was introduced with Java 7, so it’s actually new enough that many programmers haven’t found it yet. Files offers many useful methods for file manipulation, reading, and writing, and if this class is new to you, it’s worth a look if you ever have to manipulate files. The methods used in this question are list, walk, find, isDirectory, and isRegularFile.

The methods isDirectory and isRegularFile behave as their names suggest. They take a Path object as an argument and return a Boolean value indicating whether Path describes a directory or a regular file (that is, a file that can hold data). They both actually have a second argument that indicates how to handle links. The methods use varargs, so the second argument is optional, which is why it doesn’t show up in these examples.

The method Files.list creates a stream of Path objects that enumerate the contents of the argument directory. The Path class, as can reasonably be inferred from the given source code, represents a file or directory name, possibly including path information. It’s also reasonable to infer that the toString method of a Path returns a reasonable textual representation. If this weren’t the case, none of the options could create the output required. However, the Files.list method enumerates the entire contents of the directory that it examines, which means that in this case, it refers not only to the a.txt file but also to the y directory. For this reason, option A is incorrect.

Another point about the Path class is that it can represent either relative or absolute paths—for example, ./x/a.txt or /a/x/a.txt, respectively. In this case, the preamble code forces the Path object into an absolute-path mode, but the Path referred to by dir is actually /a/./x/, and the dot stays in the output. This, too, means that option A must be incorrect. To remove this excess dot, you can invoke the normalize method on the Path object. This results in a Path that has had references to . and .. cleaned up without changing the target of the Path object. This fact allows you to reject options B and D for the same reasons.
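The effect of normalize is easy to check in isolation. The sketch below is not from the quiz; it uses a relative path so it stays platform-neutral:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class NormalizeDemo {
    // Mirrors the "./x" from the setup code, relative rather than absolute.
    static Path normalized() {
        Path dir = Paths.get("a/./x"); // three name elements: "a", ".", "x"
        return dir.normalize();        // the "." element is cleaned up: a/x
    }

    public static void main(String[] args) {
        System.out.println(normalized());
    }
}
```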

Next, consider the Files.walk method. This method creates a Stream that enumerates all the items in the starting directory and subdirectories. However, because this descends into subdirectories, the stream created in option B would initially include directories x and y and files a.txt and b.txt. A filter is applied to this stream that will allow only directories to pass, and this means that the output will be /a/./x and /a/./x/y. This means that the wrong items are shown, and the formatting has an excess dot. Therefore, option B is incorrect.

Option C also uses the walk method. It starts by calling normalize on the starting directory, so the format will be correct, and it filters out the directories, leaving the files. However, the output includes all the files in the tree and, therefore, will include both /a/x/a.txt and /a/x/y/b.txt. Because of this, option C is incorrect.

The third method that you must consider is the Files.find method. This is very similar to the walk method, in that it creates a Stream that represents items pulled recursively from the directory hierarchy. The difference is that the find method can exclude items from that stream. To be fair, you can remove items using a filter applied to the stream obtained from a walk operation. That’s illustrated in options B and C. However, downstream filtering is typically less efficient. In the case of a find operation, the path and file attributes are passed into the third argument of the find method (which is BiPredicate<Path, BasicFileAttributes>). These file attributes are read when the directory is first scanned. In contrast, the downstream filter—as in options B and C—requires that the information be read a second time, which is less efficient.

The BiPredicate operation must return true if a contender Path is to be included in the stream that find creates. On that basis, option D would enumerate the directories, not the files, and must, therefore, be incorrect.

The find method also has the ability to limit the depth of recursion down the directory tree. This is the purpose of the second argument (the numeric one). In this case, the value 1 allows examination of the contents of the directory that is specified in the first argument. The value 1 in option E is sufficient to prevent the stream from including the file b.txt. Also, because the first argument is dir.normalize(), the format of the output is correct and does not include the undesired dot. Therefore, option E is correct.

As a side note, the find method takes an optional fourth argument that allows you to specify whether the recursion should follow links or not.
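Option E can be exercised end to end by rebuilding the question's hierarchy in a temporary directory. This is a sketch, not the quiz listing: the class and method names are invented, but the predicate and the maxDepth of 1 mirror option E:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class FindDemo {
    static List<String> filesInX() {
        try {
            // Build a/x/a.txt and a/x/y/b.txt under a fresh temp root.
            Path root = Files.createTempDirectory("quiz");
            Path x = Files.createDirectories(root.resolve("a/x/y")).getParent();
            Files.createFile(x.resolve("a.txt"));
            Files.createFile(x.resolve("y/b.txt"));
            // maxDepth 1 keeps y/b.txt out; the predicate keeps the y directory out.
            try (Stream<Path> s = Files.find(x.toAbsolutePath().normalize(), 1,
                    (p, attrs) -> !Files.isDirectory(p))) {
                return s.map(p -> p.getFileName().toString())
                        .sorted()
                        .collect(Collectors.toList());
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(filesInX()); // [a.txt]
    }
}
```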

Question 4

The correct answer is option D. The CopyOnWriteArrayList class is defined in the java.util.concurrent package. Functionally, it provides an implementation of the List interface, but it is specifically designed to help you handle scalability issues.

If a system is “scalable,” this means that as you add more compute hardware to it, it becomes capable of handling more work in the same amount of time. Ideally, if you doubled the amount of hardware, you’d double the throughput. However, usually you get diminishing returns. How badly those throughput returns diminish is defined mathematically by Amdahl’s law. In simplified terms, Amdahl’s law says that the more often, or the longer, that threads have to wait for one another, the less the system is able to benefit from adding more hardware to it—that is, the less scalable it is. Modern systems are commonly expected to scale well, so it’s important to design them in a way that minimizes the time that threads have to wait for each other.
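In its simplified form, Amdahl's law says that if a fraction p of the work can run in parallel on n processors, the best possible speedup is 1 / ((1 - p) + p / n). A tiny sketch (not part of the quiz) makes the diminishing returns concrete:

```java
public class Amdahl {
    // Speedup available from n processors when fraction p of the work
    // is parallelizable; the serial fraction (1 - p) never shrinks.
    static double speedup(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    public static void main(String[] args) {
        System.out.printf("%.2f%n", speedup(0.95, 2));    // roughly 1.9x
        System.out.printf("%.2f%n", speedup(0.95, 1024)); // under 20x, not 1024x
    }
}
```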

The copy-on-write structures in Java’s concurrent API address a very specific situation. If a program has a data structure that is being accessed at very high rates by multiple threads, but all of those threads are reading and never altering the data, no locking is needed, and the threads need not wait for one another. However, if any thread wants to make a change, normally no other threads can be allowed to access the data while that change is being made, and a great deal of waiting results. That waiting causes a loss of scalability.

Now suppose that a program does a lot of concurrent reading, but occasionally a thread wants to modify the data. One approach would be to have the read operations be unprotected (so no loss of scalability occurs). This means that no thread can ever be permitted to modify the data. Therefore, when a modification must be made, the thread that wants to do this starts by making a copy of the data—that’s a read operation, so it’s completely safe. Then, in the private copy, the writing thread can safely make an update. The reading threads can continue while this is going on, although they are getting “stale” data at this point. If that staleness matters (it often doesn’t), this approach is unsuitable. At the point that the change has been completed, the structure can start directing reading threads at the updated data set.

Notice that this copy operation could be hugely expensive for a large list, and on that basis, this approach is useful only if all of the following are true:

  • Many threads need concurrent read access.
  • It’s very rare for threads to modify the data.
  • You need to maintain the scalability of the system.
  • It’s OK that reading threads are seeing data that’s a little stale from time to time.
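The snapshot semantics that make those unprotected reads safe are easy to observe: a CopyOnWriteArrayList iterator keeps reading the array copy it started with, even while writes replace the backing array. This sketch is illustrative, not from the quiz:

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class CowDemo {
    static int[] snapshotBehavior() {
        List<Integer> list = new CopyOnWriteArrayList<>(Arrays.asList(1, 2, 3));
        int sum = 0;
        for (int n : list) {  // the iterator binds to the current array copy
            list.add(99);     // each add copies the whole backing array
            sum += n;         // ...but this loop still sees only 1, 2, 3
        }
        return new int[] { sum, list.size() };
    }

    public static void main(String[] args) {
        // No ConcurrentModificationException, despite writing mid-iteration.
        System.out.println(Arrays.toString(snapshotBehavior())); // [6, 6]
    }
}
```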

A single-threaded system is not scalable anyway, because it has no ability to use additional CPUs. Therefore, item 1 must be invalid and item 2 is a requirement. Because high read rates and low write rates are needed, you can see that items 3 and 6 are also requirements, which means that option D is the only correct option.

