In the Linux kernel, the following vulnerability has been resolved: md/raid5: fix deadlock that raid5d() wait for itself to clear MD_SB_CHANGE_PENDING.

Xiao reported that the lvm2 test lvconvert-raid-takeover.sh can hang with small probability; the root cause is exactly the same as commit bed9e27baf52 ("Revert "md/raid5: Wait for MD_SB_CHANGE_PENDING in raid5d""). However, Dan reported another hang after that fix, and Junxiao investigated the problem and found that it is caused by plugged bios that cannot be issued from raid5d(). The current implementation of raid5d() has an awkward dependency:

1) md_check_recovery() called from raid5d() must hold 'reconfig_mutex' to clear MD_SB_CHANGE_PENDING;
2) raid5d() handles IO in a loop until all IO is issued;
3) IO from raid5d() must wait for MD_SB_CHANGE_PENDING to be cleared.

This behaviour was introduced before v2.6. As a consequence, if another context holds 'reconfig_mutex' and md_check_recovery() cannot update the superblock, raid5d() will waste one CPU at 100% in that loop until 'reconfig_mutex' is released. Following the implementation in raid1 and raid10, fix this problem by skipping IO issue if MD_SB_CHANGE_PENDING is still set after md_check_recovery(); the daemon thread will be woken up when 'reconfig_mutex' is released. Meanwhile, the hang problem is fixed as well.
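The dependency above amounts to a daemon loop spinning on a flag that can only be cleared while a mutex held by another context is free. The following is not kernel code; it is a minimal, self-contained Java analogy (a ReentrantLock standing in for 'reconfig_mutex' and an AtomicBoolean for MD_SB_CHANGE_PENDING, all names illustrative) showing why such a loop burns CPU until the lock is released:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.ReentrantLock;

public class DaemonSpinAnalogy {
    static final ReentrantLock reconfigMutex = new ReentrantLock();        // stands in for 'reconfig_mutex'
    static final AtomicBoolean sbChangePending = new AtomicBoolean(true);  // stands in for MD_SB_CHANGE_PENDING

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch lockHeld = new CountDownLatch(1);

        // Another context takes the mutex and holds it for a while.
        Thread otherContext = new Thread(() -> {
            reconfigMutex.lock();
            lockHeld.countDown();
            try {
                Thread.sleep(200);                 // simulate a long-running reconfiguration
            } catch (InterruptedException ignored) {
            } finally {
                reconfigMutex.unlock();
            }
        });
        otherContext.start();
        lockHeld.await();

        long wastedIterations = 0;
        // Daemon loop: it may only clear the flag while holding the mutex,
        // and it cannot make progress while the flag is still set.
        while (sbChangePending.get()) {
            if (reconfigMutex.tryLock()) {         // analogue of md_check_recovery() updating the superblock
                try {
                    sbChangePending.set(false);
                } finally {
                    reconfigMutex.unlock();
                }
            } else {
                wastedIterations++;                // busy loop: ~100% of one CPU until the mutex is released
            }
        }
        System.out.println("Iterations wasted while the mutex was held elsewhere: " + wastedIterations);
    }
}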
CVSS 3.1 Base Score 5.5. CVSS Attack Vector: local. CVSS Attack Complexity: low. CVSS Vector: (CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H).
In the following Java snippet, methods are defined to get and set a long field in an instance of a class that is shared across multiple threads. Because reads and writes of long and double values are not guaranteed to be atomic in Java, concurrent access may cause unexpected behavior. Thus, all operations on long and double fields should be synchronized.
private long someLongValue;

public long getLongValue() {
    return someLongValue;
}

public void setLongValue(long l) {
    someLongValue = l;
}
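Following that advice, a minimal sketch of a synchronized variant (the enclosing class name below is hypothetical) could look like this:

class SharedLongHolder {
    private long someLongValue;

    // synchronized so a concurrent reader can never observe a half-written 64-bit value
    public synchronized long getLongValue() {
        return someLongValue;
    }

    public synchronized void setLongValue(long l) {
        someLongValue = l;
    }
}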
This code tries to obtain a lock for a file, then writes to it.
function writeToLog($message) {
    $logfile = fopen("logFile.log", "a");
    // attempt to get logfile lock
    if (flock($logfile, LOCK_EX)) {
        fwrite($logfile, $message);
        // unlock logfile
        flock($logfile, LOCK_UN);
    } else {
        print "Could not obtain lock on logFile.log, message not recorded\n";
    }
    fclose($logfile);
}
PHP by default will wait indefinitely until a file lock is released. If an attacker is able to obtain the file lock, this code will pause execution, possibly leading to denial of service for other users. Note that in this case, if an attacker can perform a flock() on the file, they may already have privileges to destroy the log file. However, this still impacts the execution of other programs that depend on flock().
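A common mitigation is to attempt the lock without blocking and fail fast when it cannot be acquired (PHP's flock() supports this via the LOCK_NB flag). As an illustrative sketch only, the same idea in Java using java.nio.channels.FileChannel.tryLock(); the writeToLog name is carried over from the example above and the class name is hypothetical:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class LogWriter {
    // Append a message only if the file lock can be acquired immediately,
    // so a lock held by another process cannot stall this caller indefinitely.
    static void writeToLog(String message) {
        try (FileChannel channel = FileChannel.open(Paths.get("logFile.log"),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND)) {
            FileLock lock = channel.tryLock();   // returns null instead of blocking
            if (lock == null) {
                System.out.println("Could not obtain lock on logFile.log, message not recorded");
                return;
            }
            try {
                channel.write(ByteBuffer.wrap((message + "\n").getBytes()));
            } finally {
                lock.release();
            }
        } catch (IOException e) {
            System.out.println("Could not write to logFile.log: " + e.getMessage());
        }
    }

    public static void main(String[] args) {
        writeToLog("service started");
    }
}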
The following function attempts to acquire a lock in order to perform operations on a shared resource.
void f(pthread_mutex_t *mutex) {
    pthread_mutex_lock(mutex);
    /* access shared resource */
    pthread_mutex_unlock(mutex);
}
However, the code does not check the value returned by pthread_mutex_lock() for errors. If pthread_mutex_lock() cannot acquire the mutex for any reason, the function may introduce a race condition into the program and result in undefined behavior.
To avoid data races, correctly written programs must check the result of thread synchronization functions and handle all errors appropriately, either by attempting to recover from them or by reporting them to higher levels.
int f(pthread_mutex_t *mutex) {
    int result = pthread_mutex_lock(mutex);
    if (result != 0) {
        return result;
    }
    /* access shared resource */
    return pthread_mutex_unlock(mutex);
}
It may seem that the following bit of code achieves thread safety while avoiding unnecessary synchronization...
if (helper == null) {
    synchronized (this) {
        if (helper == null) {
            helper = new Helper();
        }
    }
}
return helper;
The programmer wants to guarantee that only one Helper() object is ever allocated, but does not want to pay the cost of synchronization every time this code is called.
Suppose that helper is not initialized. Then, thread A sees that helper==null and enters the synchronized block and begins to execute:
helper = new Helper();
If a second thread, thread B, takes over in the middle of this call before Helper's constructor has finished running, thread B may see a non-null helper and make calls on it while its fields still hold incorrect values.
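One common remediation, assuming Java 5 or later where volatile provides the required memory-ordering guarantees, is to declare the field volatile so that a fully constructed Helper is published before any thread can observe a non-null reference. A minimal sketch (class names other than Helper are illustrative):

class Helper {
    // stands in for the Helper class from the example above
}

class HelperHolder {
    private volatile Helper helper;

    public Helper getHelper() {
        Helper result = helper;        // single volatile read on the common, already-initialized path
        if (result == null) {
            synchronized (this) {
                result = helper;       // re-check under the lock
                if (result == null) {
                    helper = result = new Helper();
                }
            }
        }
        return result;
    }
}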