
PCB sharing

The VxWorks operating system requires the shared memory area to occupy a contiguous address space; its default size is 16 MB and is defined in the network driver. The master processor is responsible for allocating the shared memory area to the other processors and for the memory mapping. The location of the shared memory area depends on the system configuration, and every processor that uses the hook must be able to access this area. The shared memory hook is the communication reference point for all processors; the hook structure and the shared memory area can be placed in dual-port RAM. The hook holds the physical address offset of the real memory area, which the master processor sets during initialization. The hook and the memory area must lie in the same address space, and the addresses must be linear and valid.
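
As a rough illustration, the sketch below shows what such a hook structure might look like in C; the field names, the "ready" marker, and the dual-port RAM base address are assumptions, since the real layout is fixed by the VxWorks shared memory network driver and the board support package.

    /* Minimal sketch of a shared memory "hook" structure (hypothetical layout). */
    #include <stdint.h>

    typedef volatile struct
        {
        uint32_t readyValue;    /* set by the master when initialization completes  */
        uint32_t smMemOffset;   /* physical address offset of the real memory area  */
        uint32_t smMemSize;     /* size of the shared memory area, 16 MB by default */
        } SM_HOOK;

    #define SM_HOOK_BASE  ((SM_HOOK *) 0x4F000000)   /* e.g. dual-port RAM, board specific */

    /* Master side: publish the offset and size of the shared region during init. */
    void smHookInit (uint32_t memOffset, uint32_t memSize)
        {
        SM_HOOK_BASE->smMemOffset = memOffset;
        SM_HOOK_BASE->smMemSize   = memSize;
        SM_HOOK_BASE->readyValue  = 0x12345678u;     /* hypothetical "ready" marker */
        }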

 

After the shared memory network master has been initialized, all processors can use the shared memory network. The master processor, however, does not actually intervene in the packet exchanges between the other processors; communication between processors is carried out through local interrupts or polling. Once the shared memory has been initialized, all processors, including the master, use the network on an equal footing. In the Tornado 2.0 environment the master processor number is fixed at 0, and the system distinguishes master from slave processors by processor number. Typically the master processor has two Internet addresses, one for external traffic and one acting as the gateway for the internal network.
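
The master/slave decision can be made from the processor number, as in the following sketch; it assumes the standard VxWorks BSP routine sysProcNumGet(), and smMasterInit()/smSlaveAttach() are hypothetical placeholders for the actual driver initialization steps.

    #include <vxWorks.h>
    #include <sysLib.h>     /* sysProcNumGet() */

    /* Hypothetical initialization steps; the real work is done by the shared
     * memory network driver configured into the image. */
    static STATUS smMasterInit (void)  { /* allocate and map the shared region */ return OK; }
    static STATUS smSlaveAttach (void) { /* find the hook and attach to it     */ return OK; }

    STATUS smNetStart (void)
        {
        if (sysProcNumGet () == 0)     /* processor 0 is the master under Tornado 2.0 */
            return smMasterInit ();
        return smSlaveAttach ();
        }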


1. Network heartbeat of the shared memory area

 

In a multi-CPU system, the processors can communicate over the network only after the shared memory area has been initialized, so each processor needs to know whether the shared network is active and ready. Heartbeat detection is used to let each processor learn the state of the network.

 

A heartbeat is a counter that the master processor increments once per second and that the other processors monitor to confirm that the shared network is healthy. The other processors typically check the heartbeat every few seconds, depending on the application. The heartbeat offset is stored in the fifth 4-byte word of the shared memory header.
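
A slave-side liveness check might look like the sketch below; the header base address is a board-specific assumption, and the heartbeat is reached through the offset stored in the fifth 4-byte word of the header, as described above.

    #include <stdint.h>
    #include <taskLib.h>    /* taskDelay() */
    #include <sysLib.h>     /* sysClkRateGet() */

    #define SM_HDR_BASE  ((volatile uint32_t *) 0x4F000000)   /* hypothetical header address */

    /* Returns 1 if the master's heartbeat advanced during the check interval. */
    int smNetAlive (int checkSeconds)
        {
        uint32_t hbOffset = SM_HDR_BASE[4];          /* fifth 4-byte word of the header */
        volatile uint32_t *heartbeat =
            (volatile uint32_t *) ((volatile uint8_t *) SM_HDR_BASE + hbOffset);
        uint32_t before = *heartbeat;

        taskDelay (checkSeconds * sysClkRateGet ()); /* wait a few seconds */
        return (*heartbeat != before);
        }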


 

2. Communication between processors

 

Communication between processors can use either interrupt mode or polling mode. Each processor has an input queue for receiving packets from the other processors. In polling mode, the processor checks at regular intervals whether the queue has received data. In interrupt mode, the sending processor notifies the receiving processor that there is data in its input queue; the interrupt can be a bus interrupt or a mailbox interrupt, and this mode is more efficient than polling.
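
The sketch below illustrates the polling variant for a hypothetical input-queue layout; in interrupt mode the sender would instead raise a bus or mailbox interrupt after updating the queue, and an interrupt service routine would wake the processing task.

    #include <stdint.h>
    #include <taskLib.h>    /* taskDelay() */
    #include <sysLib.h>     /* sysClkRateGet() */

    typedef volatile struct
        {
        uint32_t head;      /* advanced by sending processors      */
        uint32_t tail;      /* advanced by the receiving processor */
        } SM_INPUT_QUEUE;

    /* Polling mode: re-check the input queue at a fixed interval. */
    void smPollTask (SM_INPUT_QUEUE *q)
        {
        for (;;)
            {
            while (q->tail != q->head)
                {
                /* ... copy one packet out of the shared buffer ... */
                q->tail++;
                }
            taskDelay (sysClkRateGet () / 10);       /* poll roughly every 100 ms */
            }
        }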

 

A parallel system with multiple CPUs is similar to an embedded distributed system, and communication between the processors can use distributed message queue and distributed database technology. Together they provide a transparent communication platform for all processors in the system: a processor accesses a distributed message queue as if it were a local resource. Distributed message queue technology simplifies application design and speeds up system development.
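
Assuming the VxWorks VxMP shared memory message queue API (msgQSmLib) is available, sending to another processor looks just like sending to a local queue. The sketch below omits error handling and the step of publishing the queue ID to the other processors.

    #include <vxWorks.h>
    #include <msgQLib.h>
    #include <msgQSmLib.h>   /* VxMP shared memory message queues (assumed present) */

    #define MAX_MSGS     64
    #define MAX_MSG_LEN  128

    /* Create a message queue in the shared memory pool, visible to all CPUs. */
    MSG_Q_ID smQCreate (void)
        {
        return msgQSmCreate (MAX_MSGS, MAX_MSG_LEN, MSG_Q_FIFO);
        }

    /* Sending looks exactly like sending to a local queue. */
    void smQSendExample (MSG_Q_ID qId)
        {
        char msg[] = "hello from CPU 1";
        msgQSend (qId, msg, sizeof (msg), WAIT_FOREVER, MSG_PRI_NORMAL);
        }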

 

3. Resource allocation among multiple processors

 

In single-board computer systems with multiple processors, the most important consideration is the efficiency with which tasks execute in parallel. All of the processors need to access peripheral devices and exchange data, so the allocation of external devices becomes an issue.

 

Device resources can be allocated in two ways. The first is static allocation: the resources are assigned when the single-board computer is designed. Its drawback is poor adaptability, because the resources cannot be changed to suit the user's needs. The second is dynamic allocation: FPGA logic is loaded on the board and a software interface is reserved, so users can assign resources dynamically according to the requirements of the task. Resource control is then transparent, and the user does not need to know which CPU controls a given device. In the hardware design, arbitration and priorities for CPU accesses to external devices must be considered to prevent conflicts over critical resources. The software should record which CPU is currently using a particular device, and the remaining CPUs should be excluded from it.
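
One way to enforce this mutual exclusion is a shared memory semaphore per device, as sketched below; it assumes the VxMP shared memory semaphore API (semSmLib), and the device access itself is only a placeholder comment.

    #include <vxWorks.h>
    #include <semLib.h>
    #include <semSmLib.h>   /* VxMP shared memory semaphores (assumed present) */

    static SEM_ID devSem;   /* one shared binary semaphore per external device */

    /* Created once, e.g. by the master; the SEM_ID must then be made known to
     * the other processors (for example through a shared name database). */
    void devSemInit (void)
        {
        devSem = semBSmCreate (SEM_Q_FIFO, SEM_FULL);
        }

    void devUseExample (void)
        {
        semTake (devSem, WAIT_FOREVER);   /* claim the device, excluding the other CPUs */
        /* ... access the peripheral ... */
        semGive (devSem);                 /* release it for the next CPU */
        }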

 

Performance of multiprocessor parallel computers

 

In this system the CPUs are Intel Pentium III processors with a clock frequency of 700 MHz. The test method was to take a data-processing algorithm of fixed function, split it into modules, and run the modules on the different processors of the system. Analysis of the test results shows that, compared with a single CPU, two CPUs improve computing performance by 60% to 70%, and with three CPUs the computing performance is at least doubled. The biggest factor affecting this result is the test method itself: the algorithm is decomposed across multiple processors, and the way it is decomposed directly determines the overall processing efficiency. Nevertheless, it is clear that parallel processing with multiple processors can greatly improve the computing efficiency of the system.
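
For reference, the quoted figures can be expressed as speedup (T1/Tn) and parallel efficiency (speedup/n); the small program below simply evaluates the numbers taken from the text above.

    #include <stdio.h>

    int main (void)
        {
        double speedup2 = 1.65;   /* ~60%..70% improvement with two CPUs */
        double speedup3 = 2.0;    /* "at least twice" with three CPUs    */

        printf ("2 CPUs: speedup %.2f, efficiency %.0f%%\n", speedup2, speedup2 / 2 * 100);
        printf ("3 CPUs: speedup %.2f, efficiency %.0f%%\n", speedup3, speedup3 / 3 * 100);
        return 0;
        }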
