What is the CPU/GPU of a tablet?
The CPU of a tablet is its central processing unit, and the GPU is its graphics processing unit. Before explaining the difference between the two, it helps to understand what they have in common: both have buses and external connections, their own cache systems, and arithmetic and logic units. In short, both are designed to carry out computing tasks.
First, let's take an intuitive look at the diagram:
As the picture shows, both the CPU and the GPU have their own storage (the orange part; the real storage system is more complex than shown), control logic (the yellow part), and compute units (the green part). The difference is that the CPU's control logic is more complex, while the GPU's compute units are smaller but far more numerous; the GPU also provides more registers and programmer-controllable multi-level storage resources.
The difference between the two lies in the structure of the on-chip cache system and the arithmetic/logic units. Although a CPU has multiple cores, the total rarely exceeds double digits, and each core has a large enough cache, enough arithmetic and logic units, and plenty of auxiliary hardware for accelerating branch prediction and even more complex logic. A GPU, by contrast, has far more cores than a CPU, which is why it is described as "many-core" (NVIDIA's Fermi has 512 cores); each core's cache is relatively small, and its arithmetic/logic units are few and simple (GPUs were initially weaker than CPUs at floating-point calculation). As a result, CPUs are good at computing tasks with complex steps and complex data dependencies, such as distributed computing, data compression, artificial intelligence, physical simulation, and many other tasks.
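As a minimal, hypothetical illustration of what "complex data dependencies" means (the function and values below are not from the original article), the loop here cannot be split into independent pieces because each step depends on the previous result and takes a data-dependent branch; this kind of serial, branchy code benefits from the CPU's large caches and branch-prediction hardware rather than from a GPU's many simple cores.

```cuda
#include <cstdio>
#include <vector>

// Hypothetical example: each iteration depends on the previous result,
// so the work cannot be spread across thousands of GPU threads.
long long dependent_chain(const std::vector<int>& data) {
    long long acc = 1;
    for (int x : data) {
        // Data-dependent branch: the path taken depends on the running result.
        if (acc % 2 == 0)
            acc = acc / 2 + x;
        else
            acc = acc * 3 + 1;
    }
    return acc;
}

int main() {
    std::vector<int> data = {5, 8, 13, 21, 34};
    std::printf("result = %lld\n", dependent_chain(data));
    return 0;
}
```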
For historical reasons, the GPU was created for video games (and the growing video game market is still its main driving force). In three-dimensional games, a common type of operation is to perform the same computation on massive amounts of data, for example applying the same coordinate transformation to every vertex and evaluating the same lighting model to compute each color value. The GPU's many-core architecture is well suited to this: the same instruction stream is sent to many cores in parallel and executed on different input data. Around 2003-2004, experts outside the graphics field began to notice the GPU's unique computing power and tried to use GPUs for general-purpose computing (that is, GPGPU). NVIDIA then released CUDA, companies such as AMD and Apple backed OpenCL, and GPUs came to be widely used for general-purpose computing, including numerical analysis, massive data processing (sorting, Map-Reduce, etc.), financial analysis, and so on.
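To make the "same operation on massive data" pattern concrete, here is a minimal CUDA sketch (the kernel name, matrix layout, and launch sizes are illustrative, not from the original article): every thread applies the same 4x4 transformation to one vertex, so thousands of vertices are processed in parallel by the same instruction stream.

```cuda
#include <cuda_runtime.h>

// One thread per vertex: every thread runs the same instructions
// on a different vertex (the SIMT pattern described above).
// 'm' is a 4x4 row-major transform matrix; vertices are (x, y, z, w).
__global__ void transformVertices(const float4* in, float4* out,
                                  const float* m, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;                      // guard against extra threads
    float4 v = in[i];
    out[i] = make_float4(
        m[0]*v.x  + m[1]*v.y  + m[2]*v.z  + m[3]*v.w,
        m[4]*v.x  + m[5]*v.y  + m[6]*v.z  + m[7]*v.w,
        m[8]*v.x  + m[9]*v.y  + m[10]*v.z + m[11]*v.w,
        m[12]*v.x + m[13]*v.y + m[14]*v.z + m[15]*v.w);
}

// Launch sketch: enough blocks of 256 threads to cover all n vertices.
// transformVertices<<<(n + 255) / 256, 256>>>(d_in, d_out, d_matrix, n);
```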
In short, when programmers write programs for the CPU, they tend to use complex logical structures and optimized algorithms to reduce the running time of a computing task; when they write for the GPU, they exploit its strength in processing massive data, hiding latency by raising total data throughput. At present, the gap between CPU and GPU is gradually narrowing, since GPUs have made great progress in handling irregular tasks and inter-thread communication. In addition, power consumption is a more serious problem for the GPU than for the CPU.
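As a hedged sketch of the "hide latency with throughput" idea (the kernel name and launch sizes are illustrative), a GPU program typically launches far more threads than there are cores, for example with a grid-stride loop, so that whenever one group of threads stalls on a memory access another is ready to run.

```cuda
#include <cuda_runtime.h>

// Grid-stride loop: the grid is launched with many more threads than
// elements per core, so stalls on memory loads in one warp are hidden
// by switching to warps whose data is already available.
__global__ void scaleArray(float* data, float factor, int n) {
    for (int i = blockIdx.x * blockDim.x + threadIdx.x;
         i < n;
         i += blockDim.x * gridDim.x) {
        data[i] *= factor;   // the same simple operation over massive data
    }
}

// Launch sketch: oversubscribe the GPU to maximize total throughput.
// scaleArray<<<1024, 256>>>(d_data, 2.0f, n);
```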