The CPU is responsible for performing all calculations. It does this by executing instructions that tell it which operations to carry out.
This process is more complex than it might initially seem.
Accessing the hard drive is very slow from a software perspective.
To mitigate this, we use RAM, which temporarily stores data during execution.
However, RAM is still relatively slow compared to the CPU itself. That's why every modern CPU has several levels of cache memory located close to the cores. There are typically three levels (L1, L2, L3); the closer a level sits to the core, the smaller and faster it is.
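If you want to see that hierarchy in action, here is a small self-contained C++ sketch (the sizes are just illustrative, not tied to any particular CPU): it sums the same array twice, once in sequential order and once in shuffled order. Same work, but the shuffled walk defeats the caches and the prefetcher, so it typically runs several times slower.

```cpp
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

int main() {
    // 64 MiB of ints: much larger than any cache level, so the access
    // order decides whether we hit the caches or go all the way to RAM.
    std::vector<int> data(16 * 1024 * 1024, 1);

    std::vector<size_t> order(data.size());
    std::iota(order.begin(), order.end(), 0);   // 0, 1, 2, ... (sequential)

    auto run = [&](const char* label) {
        auto t0 = std::chrono::steady_clock::now();
        long long sum = 0;
        for (size_t i : order) sum += data[i];
        auto t1 = std::chrono::steady_clock::now();
        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0);
        std::printf("%s: sum=%lld, %lld ms\n", label, sum,
                    static_cast<long long>(ms.count()));
    };

    run("sequential");                           // prefetcher-friendly
    std::shuffle(order.begin(), order.end(), std::mt19937{42});
    run("shuffled  ");                           // cache-hostile random walk
}
```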
For simplicity, we can say that only RAM data can be shared between cores (this isn't entirely accurate, but it's a helpful simplification).
This means that sharing data between threads (or between cores) is not straightforward.
There are various techniques to manage this.
Most of these techniques involve copying data.
One approach is to create a queue that holds the data (or references to it). The producing thread copies the data first and then pushes that copy onto the queue, and the consuming thread pops it off at the other end. A queue is used here rather than a stack because it operates on a first-in, first-out basis, so messages arrive in the order they were sent.
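Here is a minimal sketch of that queue idea in C++ (the name SharedQueue and the message strings are made up for the example): the producer pushes copies, the consumer pops them in FIFO order, and a mutex plus condition variable keep the two threads from touching the queue at the same time.

```cpp
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// Minimal thread-safe FIFO queue: the producer pushes copies of its data,
// so the two threads never touch the same object at the same time.
template <typename T>
class SharedQueue {
public:
    void push(T value) {                        // takes its own copy (or move)
        {
            std::lock_guard<std::mutex> lock(mutex_);
            items_.push(std::move(value));
        }
        ready_.notify_one();
    }
    T pop() {                                   // blocks until an item arrives
        std::unique_lock<std::mutex> lock(mutex_);
        ready_.wait(lock, [&] { return !items_.empty(); });
        T value = std::move(items_.front());
        items_.pop();
        return value;                           // first in, first out
    }
private:
    std::mutex mutex_;
    std::condition_variable ready_;
    std::queue<T> items_;
};

int main() {
    SharedQueue<std::string> queue;
    std::thread producer([&] {
        for (int i = 0; i < 3; ++i)
            queue.push("message " + std::to_string(i));  // copy goes in
    });
    for (int i = 0; i < 3; ++i)
        std::printf("got: %s\n", queue.pop().c_str());   // comes out in order
    producer.join();
}
```

The important detail is that whatever crosses the queue is the receiving side's own copy; the producer can immediately reuse or free its original.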
Another common technique is double buffering. This involves maintaining two copies of the same data: one designated for writing and the other for reading. While one thread writes new data, another thread reads the previously prepared data. Once the writing is complete, the two buffers are swapped: the reading thread now sees the fresh data, while the writing thread starts filling the other buffer with the next set.
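A rough C++ sketch of that pattern (DoubleBuffer is just an illustrative name, and I'm using a plain mutex for the swap; real implementations vary): the writer fills the back buffer with no lock held, and only the swap and the read briefly take the lock.

```cpp
#include <cstdio>
#include <mutex>
#include <thread>
#include <vector>

// Two copies of the same data: the writer fills the back buffer with no
// lock held; only the swap and the read briefly take the mutex.
class DoubleBuffer {
public:
    // Writer side: prepare the back buffer (the slow part, unlocked),
    // then publish it with a cheap O(1) swap of the vectors' internals.
    void write(std::vector<float> fresh) {
        back_ = std::move(fresh);
        std::lock_guard<std::mutex> lock(mutex_);
        std::swap(front_, back_);
    }
    // Reader side: hold the lock just long enough to copy the front
    // buffer out, so a swap can never happen mid-read.
    std::vector<float> read() {
        std::lock_guard<std::mutex> lock(mutex_);
        return front_;
    }
private:
    std::mutex mutex_;
    std::vector<float> front_, back_;
};

int main() {
    DoubleBuffer buffer;
    std::thread writer([&] {
        for (int frame = 0; frame < 3; ++frame)
            buffer.write(std::vector<float>(4, static_cast<float>(frame)));
    });
    writer.join();  // kept deterministic for the demo
    std::printf("latest value: %.0f\n", buffer.read()[0]);
}
```

The win over the queue approach is that the expensive writing happens entirely outside the lock; the mutex is only held for the swap and the copy-out.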
The tricky part is getting the communication right: signaling when data is ready and when it is safe to swap or pop. That part is quite challenging indeed.
In short: my strong recommendation would be to use threads only for networking or file operations, the parts of a program that take the most time while returning only simple data.
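For example, offloading a file read with std::async looks something like this (load_file and "data.txt" are placeholders for the example): the worker thread does the slow I/O and hands back one simple string.

```cpp
#include <cstdio>
#include <fstream>
#include <future>
#include <iterator>
#include <string>

// Read a whole file into a string; this runs on the worker thread.
std::string load_file(const std::string& path) {
    std::ifstream in(path, std::ios::binary);
    return std::string(std::istreambuf_iterator<char>(in), {});
}

int main() {
    // Kick off the slow file read in the background.
    auto pending = std::async(std::launch::async, load_file, "data.txt");

    // ... the main thread stays free to do other work here ...

    // Collect the simple result; blocks only if the read isn't done yet.
    std::string contents = pending.get();
    std::printf("read %zu bytes\n", contents.size());
}
```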