Zexia H.264 Hardware Encoder in VHDL
A hardware h264 video encoder written in VHDL suited to IP cameras and megapixel cameras. Designed to be synthesized into an FPGA. Initial testing is using Xilinx tools and FPGAs but it is not specific to Xilinx.
Source code and other files are released here under a BSD-style licence.
This is a mirror of the original project located at http://hardh264.sourceforge.net/
The H.264 hardware encoder is designed as a modular system with small, efficient, low-power components doing well-defined tasks. The principal design aim was to make a scalable encoder for megapixel images suitable for use in camera heads and low-power recorders.
The encoder is not designed to be all things to all people, but rather designed to efficiently implement a non-interlaced Base Profile with no limit to the number of streams or video resolution.
As such few generic parameters are provided, but components can be replaced as needed to customize the encoder to a specific application. For example, only CAVLC encoding is performed, but this is performed by the h264cavlc and h264header modules only. If required, these can be replaced by custom modules to perform another form of encoding.
A diagram of the principal components can be found below or in the CVS repository.
Video comes in at the top, and is usually written to (external) RAM to buffer it temporarily. When needed it is read into the prediction components such as intra4x4.
Outputs from the prediction components pass through the transform loop: coretransform, quantise, dequantise, invtransform, reconstruct. For intra encoding, these reconstructed pixels are required immediately to predict the next block; they are also written to (external) RAM for use by the next inter-coded frame.
Because of this feedback, especially for intra encoding, the transform loop is timing-critical: latency matters as well as throughput. Transform modules need all their input data before they can produce the first output pixel, but the delay between the last pixel in and the first pixel out is minimal.
For blocks which use DC as well as AC components, a 2x2 DC transform (Hadamard transform) is also provided as part of the feedback loop. In order to speed the process, the intra8x8cc module (which encodes chroma for intra encoding) outputs the sums of each block as a separate DC data stream which is fed into the first dctransform.
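The 2x2 DC (Hadamard) transform applied to the four chroma DC sums can be modelled in host-side Python. This is an illustrative sketch of the arithmetic only, not part of the VHDL source:

```python
def hadamard2x2(dc):
    """Forward 2x2 Hadamard transform of the four chroma DC sums.

    dc is a 2x2 list [[a, b], [c, d]]; the chroma DC transform is
    H * dc * H with H = [[1, 1], [1, -1]] (no scaling at this stage).
    """
    a, b = dc[0]
    c, d = dc[1]
    return [
        [a + b + c + d, a - b + c - d],
        [a + b - c - d, a - b - c + d],
    ]
```

Note that when all four DC sums are equal, all the energy lands in the top-left coefficient, which is why this transform helps compress flat chroma regions.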
Output from quantise is fed to the buffer which delays and reorders the blocks for output to the cavlc module which encodes the data. Header data is mixed in after cavlc, and the stream is turned into a byte stream by tobytes, which also stuffs 03 bytes to prevent startcode emulation. The output from tobytes is a NAL, and a done signal is asserted at the end.
It is up to higher level code to add Annex B startcodes between frames (00 00 00 01), or else count and buffer the bytes output and add a header in mp4 or rtp format, or other format as required.
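The byte-level behaviour described above (03-byte stuffing in tobytes, and Annex B framing added by higher-level code) can be modelled in Python. This is a host-side sketch of the bitstream rules, not the VHDL implementation:

```python
def stuff_emulation_prevention(rbsp: bytes) -> bytes:
    """Insert emulation-prevention 0x03 bytes, as the tobytes module does.

    Whenever two consecutive zero bytes would be followed by a byte in
    the range 0x00..0x03, a 0x03 byte is inserted first, so no start
    code (00 00 01) can appear inside the NAL payload.
    """
    out = bytearray()
    zeros = 0
    for b in rbsp:
        if zeros >= 2 and b <= 3:
            out.append(3)   # emulation-prevention byte resets the zero run
            zeros = 0
        out.append(b)
        zeros = zeros + 1 if b == 0 else 0
    return bytes(out)


def annex_b(nal: bytes) -> bytes:
    """Prefix a complete NAL unit with the 4-byte Annex B start code."""
    return b"\x00\x00\x00\x01" + nal
```

The mp4 and rtp formats mentioned above instead carry an explicit length for each NAL, which is why the byte count must be buffered before the header can be written in those cases.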
Multiple streams may be simultaneously encoded by the H.264 encoder; these may be of different resolutions. There is no design limit to the video resolution.
The components are independently upgradeable if required.
The design has been compiled for the following devices:
Xilinx Spartan 3 family – (3,174 slices)
Altera Cyclone III family – (26,754 LEs)
It is likely to compile successfully for most other FPGA and ASIC technologies.
This encoder is being built into a commercial application which uses a Spartan 3A 1400, a device about four times the size of the requirement quoted above.
Note that the Cyclone III count is rather larger than the Spartan 3 count due to the use of small or unlatched RAM elements in the design, which the Spartan can map to distributed RAM but the Cyclone needs to implement in discrete logic. A modification to the intra4x4 and intra8x8cc components to permit TOPI to have a two-clock latency rather than one would permit latched RAM in this situation.
It is necessary to include Picture Parameters (PP) and Stream Parameters (SP) to specify the details of the encoder for the decoder to use. These are usually encoded as separate NAL units which can be transmitted immediately before the first NAL unit of the image stream.
Some recommended Stream Parameters (SP) are:
** Replace pic_width and pic_height with appropriate values; these are in 16-pixel units, so the parameters here encode a 352x288 image.
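In the H.264 standard these size fields are the Exp-Golomb-coded values pic_width_in_mbs_minus1 and pic_height_in_map_units_minus1 (the standard's names for the pic_width and pic_height above); for 352x288 that is 22x18 macroblocks, giving field values of 21 and 17. A Python sketch of the ue(v) coding, for illustration only:

```python
def ue(v: int) -> str:
    """Unsigned Exp-Golomb code ue(v), returned as a bit string:
    (n - 1) zero bits followed by the n-bit binary value of v + 1."""
    bits = bin(v + 1)[2:]
    return "0" * (len(bits) - 1) + bits

# 352x288 => 22x18 macroblocks => minus-1 fields of 21 and 17
print(ue(21))  # pic_width_in_mbs_minus1        -> 000010110
print(ue(17))  # pic_height_in_map_units_minus1 -> 000010010
```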
Some recommended Picture Parameters (PP) are:
There are two clocks in use by the modules: CLK, nominally the pixel clock, with a design frequency of 60 MHz or below, and CLK2, a double-rate clock which should run at exactly twice the CLK rate. CLK is used in the h264cavlc module and also by the back end which emits the byte stream. CLK2 is used by the prediction and transform logic, which works at higher data rates than the pixel rate. As a result, 512 double-rate clocks are available per macroblock (256 pixels) for the prediction and transform logic feedback loop.
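The cycle budget quoted above follows directly from the clock ratio; a quick arithmetic check:

```python
MB_PIXELS = 16 * 16    # pixels per 16x16 macroblock
CLK2_PER_CLK = 2       # CLK2 runs at exactly twice the CLK rate

# One macroblock of input spans 256 CLK periods, so the prediction
# and transform loop gets twice that many CLK2 cycles to process it.
budget = MB_PIXELS * CLK2_PER_CLK
print(budget)  # -> 512
```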
A skeleton top level is provided, to allow a real one to be written for your application. A simple top level is provided for simulation which reads and writes files and can dump intermediate data and an annotated output bit stream if required.
At minimum, an entire uncompressed reference image must be buffered in RAM to allow inter prediction[*]. Usually the incoming image is buffered as well as the reference image, so two copies might be needed. If encoding multiple streams, images from all streams need to be buffered. Depending on resolution, this might be a lot of memory, and thus it is anticipated that this is implemented off-chip.
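As an illustration of the buffering requirement, assuming 8-bit 4:2:0 video (1.5 bytes per pixel) and a hypothetical single 1280x720 stream:

```python
def frame_bytes(width: int, height: int) -> int:
    """Bytes for one uncompressed 8-bit 4:2:0 frame (luma plus two
    quarter-size chroma planes = 1.5 bytes per pixel)."""
    return width * height * 3 // 2

# Reference frame plus incoming frame for one 1280x720 stream:
total = 2 * frame_bytes(1280, 720)
print(total)  # -> 2764800 bytes, roughly 2.6 MiB of off-chip RAM
```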
[*] Of course, you could use intra prediction only and thus avoid this overhead, but compression is then usually around 10:1, rather than the 50:1 quoted for inter-compressed streams. The actual compression will vary with the content of the picture stream.
The intra prediction currently available (intra4x4 and intra8x8cc) only considers a subset of possible modes, and uses a simple SAD (sum of absolute differences) comparison. This makes the intra frames a little larger than they otherwise might be, but tests against reference software (which can choose from a wider range of modes and use better comparison computations) show only a few percent improvement.
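The SAD comparison can be sketched in Python; these are hypothetical helpers modelling what the hardware computes, not the intra4x4 source:

```python
def sad(block, prediction):
    """Sum of absolute differences between a block and a candidate
    prediction, both given as 2-D lists of pixel values."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block, prediction)
               for a, b in zip(row_a, row_b))


def best_mode(block, candidates):
    """Return (mode index, cost) of the candidate with the lowest SAD."""
    costs = [sad(block, p) for p in candidates]
    best = min(range(len(costs)), key=costs.__getitem__)
    return best, costs[best]
```

SAD is cheap in hardware (one subtract, absolute value, and accumulate per pixel), which is why it is preferred here over costlier metrics such as SATD used by some reference encoders.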
Also, only a simple inter prediction (p-frames) component is available at present. Zexia has others available but they cannot be released at present under an open source license; they will probably be released once commercial agreements expire. If you want to work on this, please drop me an email, since a pool of good inter prediction components will enhance this codec and I'd be pleased to help.
Since the target is Base Profile, no attempt has been made to encode B-frames; however, the same cavlc and transform loop can be used, so it is just a case of modifying the front-end prediction and header generation.
H.264 is covered by patents; you will need a license from MPEG-LA. According to the MPEG-LA web site (http://www.mpegla.com), no royalty is payable on fewer than 100,000 units.
The author knows of no other patents which cover this encoder, but that doesn't mean to say there are none (see the disclaimer below).
Zexia Access Ltd © 2008 - H.264 Hardware Encoder