If reprinting this article, please credit the original source. Thank you!
First, what is thread safety? The topic comes up constantly, for example in the question: what is the difference between StringBuilder and StringBuffer? The usual answer is "one is thread safe and the other is not", but if you are suddenly asked to define thread safety itself, you may need to pause for a few seconds. A class that is not thread safe provides no protection for its data: multiple threads may modify the data at the same time, so readers can end up with dirty or inconsistent values. Finding a good definition of thread safety is surprisingly hard; a common one says that a thread-safe class uses a locking mechanism so that while one thread is accessing the data, other threads cannot access it until the first thread finishes, and therefore no inconsistency or contamination occurs. I think that description is not entirely accurate, because locking is only one of many concurrency-control strategies. A better way to put it: when multiple threads operate on a shared resource, only one thread at a time may use the critical-section resource, and the other threads must wait until the resource is released. In a concurrent program where critical resources are protected this way, the code is thread safe; where they are not, it is not.
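To make the StringBuilder/StringBuffer contrast concrete, here is a minimal sketch (the class name `SafetyDemo` and the thread/iteration counts are mine). `StringBuffer.append` is synchronized, so four threads appending 1000 characters each always yield exactly 4000 characters; the unsynchronized `StringBuilder` may lose appends or even throw, because its internal state is updated without any protection.

```java
// Sketch: StringBuffer synchronizes every append; StringBuilder does not.
public class SafetyDemo {
    public static void main(String[] args) throws InterruptedException {
        StringBuffer safe = new StringBuffer();     // synchronized appends
        StringBuilder unsafe = new StringBuilder(); // unsynchronized appends
        Runnable work = () -> {
            for (int i = 0; i < 1000; i++) {
                safe.append('x');
                // concurrent StringBuilder use can throw when its buffer resizes
                try { unsafe.append('x'); } catch (RuntimeException ignored) { }
            }
        };
        Thread[] threads = new Thread[4];
        for (int t = 0; t < 4; t++) (threads[t] = new Thread(work)).start();
        for (Thread th : threads) th.join();
        System.out.println("StringBuffer length: " + safe.length());    // always 4000
        System.out.println("StringBuilder length: " + unsafe.length()); // may be < 4000
    }
}
```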
Concurrent programs are more complex than serial ones, and one of the most important reasons is that the consistency and safety of concurrent memory accesses are seriously challenged. How do we guarantee that a thread sees correct data? We need rules, defined on top of the underlying parallel machinery, that let multiple threads cooperate directly, efficiently, and correctly. The Java memory model (JMM) exists to do exactly that.
The JMM revolves around atomicity, visibility, and ordering in multithreaded programs. The topic is complex and my own understanding is limited, so there may be some inaccuracies below; I hope you will point them out.
Atomicity means that an operation cannot be interrupted: even when threads execute concurrently, once an operation starts, other threads will not interfere with it. The original English text reads:
Atomicity. Accesses and updates to the memory cells corresponding to fields of any type except long or double are guaranteed to be atomic. This includes fields serving as references to other objects. Additionally, atomicity extends to volatile long and double. (Even though non-volatile longs and doubles are not guaranteed atomic, they are of course allowed to be.) Atomicity guarantees ensure that when a non-long/double field is used in an expression, you will obtain either its initial value or some value that was written by some thread, but not some jumble of bits resulting from two or more threads both trying to write values at the same time. However, as seen below, atomicity alone does not guarantee that you will get the value most recently written by any thread. For this reason, atomicity guarantees per se normally have little impact on concurrent program design.
Why are long and double fields not atomic on 32-bit HotSpot? (Treat this as a question; a later chapter will answer it with an accompanying demo program.) On 64-bit HotSpot, long and double fields are atomic. On 32-bit HotSpot, volatile long and volatile double fields are also atomic. Why does adding volatile to a long or double field restore atomicity on 32-bit HotSpot? This follows from the semantics of volatile: when we declare a shared variable volatile, reads and writes of that variable become special. A good way to understand volatile is that each individual read or write of a volatile variable behaves as if it used one and the same lock to synchronize that single read or write. Lock semantics guarantee that code in the critical section executes atomically, which is why volatile long and volatile double fields are atomic even on 32-bit HotSpot. The volatile keyword is hard to grasp and will be discussed in depth later.
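A small sketch of how a "torn" long could be detected (the class name `LongTearingDemo` and the iteration counts are mine). On a 32-bit JVM, a write to a non-volatile long may be split into two 32-bit halves, so a concurrent reader could observe a value that neither thread ever wrote; on 64-bit HotSpot, the common case today, no tearing should be observed and the program should report zero torn reads.

```java
// Sketch: the writer alternates between 0L (both halves zero) and -1L (both
// halves all-ones). A torn read would mix the halves, yielding a value that is
// neither 0L nor -1L. Declaring `shared` volatile rules out tearing everywhere.
public class LongTearingDemo {
    static long shared = 0L;   // try `static volatile long shared` to forbid tearing

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            for (int i = 0; i < 1_000_000; i++) {
                shared = 0L;   // halves: 0x00000000 / 0x00000000
                shared = -1L;  // halves: 0xFFFFFFFF / 0xFFFFFFFF
            }
        });
        writer.start();

        long torn = 0;
        for (int i = 0; i < 1_000_000; i++) {
            long v = shared;   // on 32-bit JVMs this read may see mixed halves
            if (v != 0L && v != -1L) torn++;
        }
        writer.join();
        System.out.println("torn reads: " + torn);
    }
}
```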
Next, ordering. We generally expect code to execute in the order it is written, but to improve performance the system may change that order, as long as the visible results are preserved. Modern compilers and processors often rearrange instructions in three ways:
- Compiler reordering. The compiler can rearrange the execution order of statements without changing single-threaded semantics.
- Instruction-level parallel reordering. Modern processors use instruction-level parallelism to overlap the execution of multiple instructions. If there is no data dependence, the processor can change the order in which the machine instructions corresponding to statements execute.
- Memory-system reordering. Because processors use caches and read/write buffers, loads and stores can appear to execute out of order.

Of these, 1 is compiler reordering; 2 and 3 are processor reordering.
Any of these reorderings can cause memory-visibility problems in multithreaded programs. For the compiler, JMM's compiler-reordering rules forbid specific kinds of compiler reordering (not all reordering is forbidden). For the processor, JMM's processor-reordering rules require the Java compiler to insert specific kinds of memory-barrier instructions into the generated instruction sequence, and these barriers forbid particular kinds of processor reordering. Memory barriers will get an in-depth discussion later.
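The classic litmus test for reordering can be sketched as follows (the class name `ReorderDemo` and the iteration count are mine). With no synchronization, the JMM permits the outcome `r1 == 0 && r2 == 0`, which can only happen if a write is reordered after the corresponding read by the compiler, the CPU, or the memory system. How often (or whether) it actually shows up depends on the hardware; the point is that the model allows it.

```java
// Sketch: two racing threads. Intuitively at least one thread's write should be
// seen by the other's read, but reordering makes (r1, r2) == (0, 0) possible.
public class ReorderDemo {
    static int x, y, r1, r2;

    public static void main(String[] args) throws InterruptedException {
        int bothZero = 0, iterations = 10_000;
        for (int i = 0; i < iterations; i++) {
            x = 0; y = 0;
            Thread t1 = new Thread(() -> { x = 1; r1 = y; });
            Thread t2 = new Thread(() -> { y = 1; r2 = x; });
            t1.start(); t2.start();
            t1.join();  t2.join();   // join gives happens-before for reading r1, r2
            if (r1 == 0 && r2 == 0) bothZero++;
        }
        System.out.println("(r1, r2) == (0, 0) seen " + bothZero + " times");
    }
}
```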
Visibility means that when one thread modifies a shared variable, other threads can immediately learn of the change. Visibility is a complex, cross-cutting problem: cache optimizations and hardware optimizations can break it, and the instruction reordering discussed above also affects it.
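The standard illustration is a stop flag (the class name `VisibilityDemo` is mine). Without `volatile`, the worker thread may keep using a cached value of `stop` and spin forever, because nothing guarantees the main thread's write ever becomes visible to it; declaring the flag `volatile` guarantees the worker sees the update.

```java
// Sketch: a volatile flag guarantees the worker observes the main thread's write.
public class VisibilityDemo {
    static volatile boolean stop = false;  // remove `volatile` and the loop may never end

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!stop) { /* spin until the write to `stop` becomes visible */ }
            System.out.println("worker saw stop == true");
        });
        worker.start();
        Thread.sleep(100);
        stop = true;       // volatile write: visible to the spinning worker
        worker.join();
    }
}
```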
Several concepts are also introduced:
Starting with JDK 5, Java uses the new JSR-133 memory model (unless otherwise specified, everything below refers to the JSR-133 model). JSR-133 uses the concept of happens-before to describe memory visibility between operations. In JMM, if the result of one operation needs to be visible to another, there must be a happens-before relationship between the two operations. The two operations can be within one thread or in different threads.
The happens-before rules most relevant to programmers are:
- Program order rule: every operation in a thread happens-before any subsequent operation in that thread.
- Monitor lock rule: unlocking a lock happens-before every subsequent lock of that same lock.
- Volatile variable rule: a write to a volatile field happens-before every subsequent read of that volatile field.
- Transitivity: if A happens-before B, and B happens-before C, then A happens-before C.
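The rules above combine in the classic safe-publication pattern, sketched here (the class name `HappensBeforeDemo` and the value 42 are mine). The write to `data` happens-before the volatile write to `ready` (program order), which happens-before the volatile read that observes `ready == true` (volatile rule), which happens-before the read of `data` (program order); by transitivity, the consumer is guaranteed to see 42.

```java
// Sketch: publishing an ordinary field through a volatile flag.
public class HappensBeforeDemo {
    static int data = 0;
    static volatile boolean ready = false;

    public static void main(String[] args) throws InterruptedException {
        Thread consumer = new Thread(() -> {
            while (!ready) { /* wait for the volatile flag */ }
            System.out.println("data = " + data);  // guaranteed to print 42
        });
        consumer.start();
        data = 42;     // 1. ordinary write
        ready = true;  // 2. volatile write publishes it
        consumer.join();
    }
}
```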
Note: a happens-before relationship between two operations does not mean the first operation must actually execute before the second! It only requires that the result of the first operation be visible to the second, and that the first be ordered before the second. The definition of happens-before is subtle, and each happens-before rule corresponds to one or more compiler and processor reordering rules. For Java programmers, the happens-before rules are simple and easy to understand: they deliver JMM's memory-visibility guarantees without requiring programmers to learn the complex reordering rules or the specific ways those rules are implemented.
If two operations access the same variable, and at least one of them is a write, there is a data dependence between the two operations. Data dependences fall into 3 types:
- Read after write: a=1; b=a; write a variable, then read that variable.
- Write after write: a=1; a=2; write a variable, then write that variable again.
- Write after read: a=b; b=1; read a variable, then write that variable.
In all 3 cases, reordering the two operations changes the program's result. As mentioned earlier, the compiler and the processor may reorder operations, but they respect data dependences: they will not change the execution order of two operations that have a data dependence between them.
Note that "data dependence" here refers only to the instruction sequence executed on a single processor, or to operations within a single thread. Data dependences between different processors or different threads are not considered by the compiler or the processor.
The as-if-serial semantics mean that no matter how much reordering happens (as compilers and processors try to improve parallelism), the result of a single-threaded program must not change. Compilers, the runtime, and processors must all obey as-if-serial semantics.
To comply with as-if-serial semantics, the compiler and the processor do not reorder operations with a data dependence between them, because such reordering would change the execution result. Operations without a data dependence, however, may be reordered by the compiler and processor.
As-if-serial semantics shield single-threaded programs: by obeying them, the compiler, runtime, and processor together create an illusion for programmers writing ordinary single-threaded code, namely that the program executes in program order. As-if-serial semantics free single-threaded programmers from worrying about reordering, and also from worrying about memory visibility.
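A tiny sketch of as-if-serial (the class name `AsIfSerialDemo` and the values are mine). Statements A and B have no data dependence on each other, so the compiler or CPU may reorder them; C depends on both, so it cannot move ahead of them. Whichever order A and B actually run in, the single-threaded result is unchanged.

```java
// Sketch: independent statements may be reordered; the dependent one may not.
public class AsIfSerialDemo {
    public static void main(String[] args) {
        double pi = 3.14;         // A
        double r  = 1.0;          // B  (A and B may be reordered with each other)
        double area = pi * r * r; // C  (depends on A and B, cannot move before them)
        System.out.println("area = " + area);
    }
}
```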
final and volatile will be introduced in later chapters; they are complex, wide-ranging topics. Today's material is hard to digest, and I hope it helps you.