Figures: AFDX Network; Ethernet Frame Format; Full-Duplex, Switched Ethernet Example; End Systems and Avionics Subsystems Example; Sampling Port at Receiver; Queuing Port at Receiver; Packet Routing Example; Message Sent to Port 1 by the Avionics Subsystem; A and B Networks; Receive Processing of Ethernet Frames; Virtual Link Scheduling; Role of Virtual Link Regulation; Two Message Structures; Ethernet Source Address Format; IP Header Format; IP Unicast Address Format; IP Multicast Address Format; UDP Header Format.

Since its entry into commercial airplane service on the Airbus A320 in 1988, the all-electronic fly-by-wire system has gained such popularity that it is becoming the only control system used on new airliners.
But there are a host of other systems on aircraft, such as inertial platforms and communication systems, that demand high-reliability, high-speed communications as well. Control systems, and avionics in particular, rely on having complete and up-to-date data delivered from source to receiver in a timely fashion.
For safety-critical systems, reliable real-time communications links are essential. That is where AFDX comes in. AFDX brings a number of improvements, such as higher-speed data transfer and, with regard to the host airframe, significantly less wiring, thereby reducing wire runs and the attendant weight.
What is AFDX? One of the reasons that AFDX is such an attractive technology is that it is based on Ethernet, a mature technology that has been continually enhanced ever since its inception in 1973. An AFDX network involves the following components. Avionics Subsystem: the traditional avionics subsystems on board an aircraft, such as the flight control computer, global positioning system, tire pressure monitoring system, and so on.
End System: each Avionics Subsystem attaches to the network through an End System interface, which guarantees a secure and reliable data interchange with other Avionics Subsystems. This interface exports an application program interface (API) to the various Avionics Subsystems, enabling them to communicate with each other through a simple message interface.
AFDX Interconnect: this generally consists of a network of switches that forward Ethernet frames to their appropriate destinations. A gateway, in turn, provides a communications path between the Avionics Subsystems and the external IP network and, typically, is used for data loading and logging. The following sections provide an overview of the AFDX architecture and protocol.
But first we briefly review two of the traditional avionics communications protocols, MIL-STD-1553 and ARINC 429. In MIL-STD-1553, the Avionics subsystems attach to a shared bus through an interface called a remote terminal (RT). The Tx and Rx activity of the bus is managed by a bus controller, which ensures that no two devices ever transmit simultaneously on the bus. The communication is half duplex and asynchronous (Figure 2). ARINC 429 messages consist of 32-bit words with a format that includes five primary fields: Parity, SSM, Data, SDI, and Label. The Label field determines the interpretation of the fields in the remainder of the word, including the method of data translation.
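As a rough sketch of that five-field word format (the helper names and bit numbering below are mine, assuming the common layout of Label in bits 1-8, SDI in bits 9-10, Data in bits 11-29, SSM in bits 30-31, and odd parity in bit 32):

```python
def arinc429_pack(label: int, sdi: int, data: int, ssm: int) -> int:
    """Pack the five ARINC 429 fields into one 32-bit word.

    Bit layout (bit 1 = LSB): Label bits 1-8, SDI bits 9-10,
    Data bits 11-29, SSM bits 30-31, odd parity in bit 32.
    """
    assert 0 <= label < 256 and 0 <= sdi < 4
    assert 0 <= data < 2 ** 19 and 0 <= ssm < 4
    word = label | (sdi << 8) | (data << 10) | (ssm << 29)
    # Odd parity: set bit 32 so the whole word has an odd number of 1s.
    if bin(word).count("1") % 2 == 0:
        word |= 1 << 31
    return word

def arinc429_unpack(word: int) -> dict:
    """Split a 32-bit ARINC 429 word back into its five fields."""
    return {
        "label": word & 0xFF,
        "sdi": (word >> 8) & 0x3,
        "data": (word >> 10) & 0x7FFFF,
        "ssm": (word >> 29) & 0x3,
        "parity": (word >> 31) & 0x1,
    }
```

Odd parity means a transmitted word always contains an odd number of 1 bits, letting a receiver detect any single-bit error.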
There was no centralized control among the stations; thus, the potential for collisions (simultaneous transmission by two or more stations) existed.
The ALOHA protocol (Figure 4) is simple:

1. If you have a message to send, send the message.
2. If the message collides with another transmission, try resending the message later using a suitable back-off strategy.

Because there is no central coordination among the stations, collisions lead to non-deterministic behavior. In switched Ethernet, by contrast, cables are typically point-to-point, with hosts directly connected to a switch. In the case of 100 Mbps transmission, each 4-bit nibble of data is encoded as 5 bits prior to transmission.
Some of the 5-bit patterns are used to represent control codes. Ethernet is similar to the ALOHA protocol in the sense that there is no centralized control, and transmissions from different stations (hosts) can collide.
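To illustrate, the standard 4B/5B code-group table (as used by 100BASE-TX and FDDI) maps each nibble to a 5-bit symbol; the symbols are chosen so the line never carries long runs of zeros. The nibble ordering and function name here are illustrative:

```python
# Standard 4B/5B data code groups (100BASE-TX / FDDI).
FOUR_B_FIVE_B = {
    0x0: 0b11110, 0x1: 0b01001, 0x2: 0b10100, 0x3: 0b10101,
    0x4: 0b01010, 0x5: 0b01011, 0x6: 0b01110, 0x7: 0b01111,
    0x8: 0b10010, 0x9: 0b10011, 0xA: 0b10110, 0xB: 0b10111,
    0xC: 0b11010, 0xD: 0b11011, 0xE: 0b11100, 0xF: 0b11101,
}

def encode_4b5b(data: bytes) -> list[int]:
    """Encode each 4-bit nibble of `data` as its 5-bit code group
    (high nibble first here; the on-wire ordering is a PHY detail)."""
    out = []
    for byte in data:
        out.append(FOUR_B_FIVE_B[byte >> 4])    # high nibble
        out.append(FOUR_B_FIVE_B[byte & 0x0F])  # low nibble
    return out
```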
Carrier Sense means that the hosts can detect whether the medium (coaxial cable) is idle or busy. Multiple Access means that multiple hosts can be connected to the common medium.
Collision Detection means that, when a host transmits, it can detect whether its transmission has collided with the transmission of another host or hosts. The original Ethernet data rate was 2.94 Mbps. The Ethernet frame begins with the Ethernet header, which consists of a 6-byte destination address, followed by a 6-byte source address and a 2-byte type field.
The Ethernet payload follows the header. The length of an Ethernet frame can vary from a minimum of 64 bytes to a maximum of 1518 bytes. Ethernet communication at the link level is connectionless.
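A minimal sketch of pulling those header fields out of a raw frame (the function name is mine; only the 14-byte header layout comes from the description above):

```python
import struct

def parse_ethernet_header(frame: bytes):
    """Parse the 14-byte Ethernet header: 6-byte destination MAC,
    6-byte source MAC, and the 2-byte type field (big-endian)."""
    if len(frame) < 14:
        raise ValueError("frame shorter than an Ethernet header")
    dst, src = frame[0:6], frame[6:12]
    (ethertype,) = struct.unpack("!H", frame[12:14])
    return dst, src, ethertype
```

A type field of 0x0800, for example, indicates an IPv4 payload.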
Acknowledgments must be handled at higher levels in the protocol stack. As we explained, there is an issue when multiple hosts are connected to the same communication medium (as is the case with the coaxial cable depicted in Figure 5) and there is no central coordination.
When a collision occurs (two or more hosts attempting to transmit at the same time), each host has to retransmit its data. Clearly, there is a possibility that they will retransmit at the same time and their transmissions will again collide. To avoid this phenomenon, each host selects a random retransmission time from an interval. If a collision is again detected, each host selects another random time from an interval that is twice the size of the previous one, and so on.
This is often referred to as the binary exponential backoff strategy. Since there is no central control in Ethernet, and in spite of the random elements in the binary exponential backoff strategy, it is theoretically possible for packets to repeatedly collide.
What this means is that in trying to transmit a single packet, there is a chance that you could have an infinite chain of collisions, and the packet would never be successfully transmitted. Therefore, in half-duplex mode it is possible for there to be very large transmission delays due to collisions.
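The backoff strategy described above can be sketched as follows (the function name is mine; the cap on window growth at 2^10 slots follows the classic IEEE 802.3 algorithm):

```python
import random

def backoff_slots(attempt: int, max_exponent: int = 10) -> int:
    """Binary exponential backoff: after the n-th successive
    collision, wait a random number of slot times drawn from
    [0, 2**n - 1], capping window growth at 2**max_exponent."""
    window = 2 ** min(attempt, max_exponent)
    return random.randrange(window)
```

Because the delay is drawn at random on each attempt and the number of attempts is unbounded in principle, the total time to deliver a single packet has no guaranteed upper bound.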
This situation is unacceptable in an avionics data network. So, what was required, and what was implemented in AFDX, was an architecture in which the maximum amount of time it takes any one packet to reach its destination is known. That meant ridding the system of contention.

Doing Away with Contention

To do away with contention (collisions), and hence the indeterminacy regarding how long a packet takes to travel from sender to receiver, it is necessary to move to full-duplex switched Ethernet.
Full-duplex switched Ethernet eliminates the possibility of transmission collisions like the ones that occur with half-duplex Ethernet. As shown in Figure 7, each Avionics Subsystem (autopilot, heads-up display, etc.) is connected directly to a port on the switch over a full-duplex link.
The switch comprises all the components contained in the large box. The switch is able to buffer packets for both reception and transmission.
An incoming packet is received into the Rx buffers; it is then copied into the Tx buffers, through the Memory Bus, and transmitted in FIFO order on the outgoing link to the selected Avionics Subsystem or to another switch. This type of switching architecture is referred to as store and forward. Consequently, with this full-duplex switch architecture the contention encountered with half-duplex Ethernet is eliminated, simply because the architecture eliminates collisions. Theoretically, an Rx or Tx buffer could overflow, but if the buffers in an avionics system are sized correctly, overflow can be avoided.
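A toy model of this store-and-forward behavior, with one FIFO Tx buffer per output port (a hypothetical sketch, not AFDX's actual switch design; all names are mine):

```python
from collections import deque

class StoreAndForwardSwitch:
    """Toy store-and-forward switch: a static forwarding table maps
    destination addresses to output ports, and each output port has
    its own FIFO Tx buffer."""

    def __init__(self, forwarding_table: dict, num_ports: int):
        self.table = forwarding_table
        self.tx_buffers = [deque() for _ in range(num_ports)]

    def receive(self, frame: dict) -> None:
        # Buffer the whole frame, then enqueue it on the Tx FIFO
        # of the port its destination address maps to.
        port = self.table[frame["dst"]]
        self.tx_buffers[port].append(frame)

    def transmit(self, port: int):
        """Send the oldest buffered frame on the given port (FIFO)."""
        return self.tx_buffers[port].popleft() if self.tx_buffers[port] else None
```

Frames leave each port strictly in arrival order, which is why congestion shows up as queuing delay rather than as collisions.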
There are no collisions with full-duplex switched Ethernet, but packets may experience delay due to congestion in the switch. Instead of collisions and retransmissions, the switching architecture may introduce jitter: a random delay caused by one packet waiting for another to be transmitted. The extent of the jitter introduced by an End System and switch must be controlled if deterministic behavior of the overall Avionics System is to be achieved.
In ARINC 429, a twisted pair must link every device that receives the azimuth signal from the inertial platform. In a system with many end points, point-to-point wiring is a major overhead. This can lead to some huge wiring harnesses, with the added weight that goes along with them. But in the case of AFDX, as shown in Figure 8b, the inertial platform is connected to the switch only once, so that no matter how many subsystems require its azimuth signal, they need not be wired individually to the inertial platform.
With AFDX, the number of fan-outs from the inertial platform is limited only by the number of ports on the switch. Also, by cascading switches, the fan-out can be easily increased as needed. In general, an Avionics computer system is capable of supporting multiple Avionics subsystems.
Partitions provide isolation between Avionics subsystems within the same Avionics computer system. This isolation is achieved by restricting the address space of each partition and by placing limits on the amount of CPU time allotted to each partition. The objective is to ensure that an errant Avionics subsystem running in one partition will not affect subsystems running in other partitions.
Avionics applications communicate with each other by sending messages using communication ports. Accordingly, it is necessary that End Systems provide a suitable communications interface for supporting sampling and queuing ports. More about this in the next chapter.
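The difference between the two port types can be sketched as follows (an illustrative model, not the actual End System API; a sampling port keeps only the latest message, while a queuing port buffers messages FIFO up to a configured depth):

```python
from collections import deque

class SamplingPort:
    """A newly arriving message overwrites the previous one;
    reads do not consume the stored message."""
    def __init__(self):
        self._msg = None
    def write(self, msg) -> None:
        self._msg = msg          # overwrite the stored message
    def read(self):
        return self._msg         # always returns the latest message

class QueuingPort:
    """Arriving messages are queued FIFO up to a configured depth;
    reads consume messages in arrival order."""
    def __init__(self, depth: int):
        self._q = deque()
        self._depth = depth
    def write(self, msg) -> bool:
        if len(self._q) >= self._depth:
            return False         # queue full: message is rejected
        self._q.append(msg)
        return True
    def read(self):
        return self._q.popleft() if self._q else None
```

A sampling port suits periodically refreshed state (an altitude reading, say), where only the freshest value matters; a queuing port suits event streams where no message may be lost.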
Avionics Full-Duplex Switched Ethernet
In one abstraction, it is possible to visualise the VLs as an ARINC 429-style network, each with one source and one or more destinations. Virtual links are unidirectional logic paths from the source end-system to all of the destination end-systems. The virtual link ID is a 16-bit unsigned integer value that follows a constant 32-bit field. The switches are designed to route an incoming frame from one, and only one, end system to a predetermined set of end systems. There can be one or more receiving end systems connected within each virtual link. Each virtual link is allocated dedicated bandwidth [the sum over all VLs of the bandwidth allocation gap (BAG) rate x MTU], with the total amount of bandwidth defined by the system integrator. However, the total bandwidth cannot exceed the maximum available bandwidth on the network.
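Under the addressing scheme described above (a constant 32-bit field, commonly 03:00:00:00, followed by the 16-bit VL ID in the destination MAC address), extracting a VL ID and computing a VL's reserved bandwidth can be sketched as follows (function names are mine):

```python
def vl_id_from_dest_mac(mac: bytes) -> int:
    """Extract the 16-bit virtual link ID from an AFDX destination
    MAC address: a constant 32-bit field followed by the VL ID."""
    assert len(mac) == 6
    return int.from_bytes(mac[4:6], "big")

def vl_bandwidth_bps(bag_ms: float, max_frame_bytes: int) -> float:
    """Worst-case bandwidth reserved for one VL: one maximum-size
    frame every BAG milliseconds."""
    return max_frame_bytes * 8 / (bag_ms / 1000.0)
```

The BAG is a power of two between 1 and 128 ms, so a VL with a 2 ms BAG and a 1518-byte maximum frame reserves about 6 Mbps of the link.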
AFDX / ARINC 664