Requirements

  • DE2-115

Downloads

Results (students Beky and Sergio from Universidad Pontificia Bolivariana, Bucaramanga, Colombia, 2015):

[Result images]

Introduction

Line buffers are a common tool for data buffering: they hold enough of the incoming pixel stream to expose a larger window of the image, which is then subdivided into 3x3 pixel kernels, also referred to as neighborhoods. These small regions are typically used to apply digital filters to an image, removing superfluous information and making it easier to identify specific features within the image.

In hardware implementations, extracting these kernels demands delay lines that store at least three horizontal lines of the video signal, so that the pixels forming a neighborhood (whether 3x3, 4x4, 5x5, or any other desired size) can be read out simultaneously. This operation is crucial for precise image processing and analysis.
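
For reference, the Verilog modules later in this article name the nine taps of a 3x3 window by column (x) and row (y), with y0 the oldest buffered line and x1y1 the center of the window:

x0y0 x1y0 x2y0
x0y1 x1y1 x2y1
x0y2 x1y2 x2y2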

Using line buffers for image processing with FPGAs (Field-Programmable Gate Arrays) has its advantages and disadvantages:

Pros:

1. Data Buffering: Line buffers help in buffering image data, allowing for the processing of a larger window of data. This is particularly useful for operations that require a local context, such as convolution or filtering.

2. Parallelism: A line buffer exposes all nine pixels of the 3x3 window in the same clock cycle, so the FPGA can evaluate an entire kernel in parallel. This high throughput can significantly speed up image processing operations.

3. Customization: FPGAs are highly customizable, and line buffers can be tailored to specific image processing requirements. This flexibility is valuable in various applications.

4. Low Latency: Line buffers can reduce data access latency as they store and provide quick access to the required pixel values, minimizing memory-related delays.

Cons:

1. Resource Utilization: Using line buffers in FPGA designs consumes valuable FPGA resources, such as Block RAM and logic elements. This can limit the number of other processing elements that can be implemented on the FPGA.

2. Complex Design: Designing and implementing line buffers in FPGA-based image processing systems can be complex. It requires expertise in FPGA programming and may lead to longer development times.

3. Limited Buffer Size: The size of line buffers is limited by the available FPGA resources. This can be a constraint when working with very large images or when dealing with algorithms that require substantial context.

4. Power Consumption: FPGAs can consume more power compared to other embedded solutions, which might be a concern in battery-powered or energy-efficient applications.

5. Cost: FPGAs can be relatively expensive compared to other processing platforms, and this cost can be a drawback in budget-sensitive projects.

In summary, the use of line buffers in image processing with FPGAs offers benefits in terms of data buffering, parallelism, customization, and low latency. However, it also comes with trade-offs related to resource utilization, complexity, buffer size, power consumption, and cost, which need to be carefully considered when choosing this approach for a specific application.
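
For a sense of scale, the 3-line buffer used later in this article stores NUMBER_OF_LINES x WIDTH x BUS_SIZE = 3 x 800 x 30 = 72,000 bits of pixel data. That is a small fraction of the embedded memory on the DE2-115's Cyclone IV FPGA (roughly 3.9 Mbits, assuming the stock EP4CE115), but implementing it as fabric flip-flops, as the shift-register module below does, is far costlier than mapping it to Block RAM, which is why resource utilization deserves early attention.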

3x3 filters are commonly used in image processing, especially in FPGA-based applications, due to their simplicity and effectiveness. Here are some of the most common 3x3 filters and their purposes:

1. Identity Filter (No Change):
- Purpose: This filter is used to leave the image unchanged. It's often used for testing or as a placeholder in the filtering pipeline.
- Filter Kernel:

0 0 0
0 1 0
0 0 0

2. Box Blur Filter:
- Purpose: The box blur filter is used to reduce noise and create a smoothing effect in images.
- Filter Kernel:

1/9 1/9 1/9
1/9 1/9 1/9
1/9 1/9 1/9

3. Edge Detection - Sobel Filter:
- Purpose: The Sobel filter is used for edge detection, highlighting abrupt changes in intensity.
- Filter Kernel Gx (horizontal gradient, which highlights vertical edges):

-1 0 1
-2 0 2
-1 0 1

- Filter Kernel Gy (vertical gradient, which highlights horizontal edges):

-1 -2 -1
0 0 0
1 2 1

4. Gaussian Blur Filter:
- Purpose: The Gaussian blur filter is used to reduce noise and smooth images with a Gaussian distribution.
- Filter Kernel (normalized by dividing by 16, the sum of the weights):

1 2 1
2 4 2
1 2 1

5. Emboss Filter:
- Purpose: The emboss filter enhances the edges in an image, giving it a 3D effect.
- Filter Kernel:

-2 -1 0
-1 1 1
0 1 2

6. Sharpening Filter:
- Purpose: The sharpening filter enhances the edges and fine details in an image.
- Filter Kernel:

0 -1 0
-1 5 -1
0 -1 0

7. High-Pass Filter (Edge Enhancement):
- Purpose: The high-pass filter enhances the high-frequency components, such as edges, in an image.
- Filter Kernel (the center weight of 9 makes the weights sum to 1, preserving overall brightness; a center of 8 would give a pure high-pass):

-1 -1 -1
-1 9 -1
-1 -1 -1

These 3x3 filters are relatively simple and efficient for FPGA-based image processing because they involve a small neighborhood of pixels, making them computationally less intensive. FPGA implementations of these filters can take advantage of parallel processing capabilities, enabling real-time or near-real-time image filtering. The choice of filter depends on the specific image processing task and the desired outcome.
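
To illustrate how such a kernel maps onto hardware, here is a minimal, hypothetical Verilog function (not part of the project files below) that applies the 3x3 sharpening kernel to a single 10-bit color channel and clamps the result to the valid range:

// Hypothetical sketch: apply the sharpening kernel (0 -1 0 / -1 5 -1 / 0 -1 0)
// to one 10-bit color channel of a 3x3 window.
function [9:0] sharpen3x3;
    input [9:0] p00, p01, p02; // top row
    input [9:0] p10, p11, p12; // middle row (p11 is the center pixel)
    input [9:0] p20, p21, p22; // bottom row
    reg signed [15:0] acc;     // wide signed accumulator for the weighted sum
    begin
        // 5*center minus the four edge-adjacent neighbors; the corners have
        // zero weight in this kernel, so p00, p02, p20, p22 go unused
        acc = 5 * $signed({6'b0, p11}) - $signed({6'b0, p01}) - $signed({6'b0, p10})
                - $signed({6'b0, p12}) - $signed({6'b0, p21});
        // Clamp to 0..1023 instead of letting the result wrap around
        if (acc < 0)
            sharpen3x3 = 10'd0;
        else if (acc > 1023)
            sharpen3x3 = 10'd1023;
        else
            sharpen3x3 = acc[9:0];
    end
endfunction

Calling the function once per color channel, with the nine taps of that channel as arguments, yields the sharpened component; the other kernels map to hardware the same way, with power-of-two normalizations implemented as shifts.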

The image processing setup you see in the next images is a classic example of how digital image processing pipelines are structured. Let's break down the key components:

  1. Video Composite Module (ADV7180): This component is essentially a specialized piece of hardware, designed to connect with composite video sources like analog cameras or video inputs. Think of it as a bridge between the analog world and the digital realm. The ADV7180, a widely used video decoder IC, plays a pivotal role here. It takes the analog video signal and transforms it into a digital format that's ready for further processing.

  2. 3 Line Buffer Verilog Module: Now, this Verilog module acts as a buffer for the incoming video data. It's a bit like a temporary storage space. What's unique about it is that it retains a minimum of three lines of video data. This comes in handy because it gives us a small, local "window" to work with when we're processing images. For operations like filtering and convolution, this local context is essential. It allows us to focus on small neighborhoods or kernels of data around each pixel, which is at the heart of many image processing tasks.

  3. Image Processing Filters Module: This is where the real image magic happens. This module is responsible for applying various image processing filters to our video data. What makes it particularly interesting is that it features an input mechanism with five switches. With these switches, you can select the specific image processing filter you want to apply. Each filter has its own unique impact on the video data. For instance, it can blur, enhance edges, sharpen, or do other exciting transformations.

Now, why is all of this important? This configuration is frequently used in applications that require on-the-fly image processing. Think of scenarios like video surveillance, medical imaging, or computer vision. The choice of using Verilog for the line buffer and FPGA-based image processing filters is strategic. It allows us to process video data efficiently and at high speeds. What's really cool is that we can switch between different filters using those switches. This flexibility empowers us to adapt to various image processing challenges and requirements.
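
In block form, the datapath described above is:

composite video -> ADV7180 video decoder -> linebuffer_3Lines (3 buffered lines)
                -> 3x3 window (x0y0 ... x2y2) -> image_processing (filter chosen by SW[4:0])
                -> display output (the VGA side, per the VGA_CLK naming in the code)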

[Setup and output images]

This small image processing example is implemented on the DE2-115 platform, which comes equipped with a range of filters and features for image processing. Here's a breakdown of the available options based on the switches (SW) settings:

  1. SW = 5'd0: Component R

    • This operation extracts the red component of the image.
  2. SW = 5'd1: Component G

    • This operation extracts the green component of the image.
  3. SW = 5'd2: Component B

    • This operation extracts the blue component of the image.
  4. SW = 5'd3: Grayscale based on R

    • It converts the image to grayscale, using the red component as the luminance source.
  5. SW = 5'd4: Grayscale based on G

    • It converts the image to grayscale, using the green component as the luminance source.
  6. SW = 5'd5: Grayscale based on B

    • It converts the image to grayscale, using the blue component as the luminance source.
  7. SW = 5'd6: Luminosity

    • This operation calculates and outputs the luminosity of the image, combining the RGB components with fixed weights (30R + 60G + 11B in the code, approximating the standard luma coefficients).
  8. SW = 5'd7: Negative

    • It produces the negative of the input image, effectively inverting the pixel values.
  9. SW = 5'd8: Sepia

    • This operation applies a sepia filter to the image, giving it a warm, brownish tone.
  10. SW = 5'd9: Edge Detection (Gn)

    • It performs edge detection using the absolute values of the Gx and Gy components. The result highlights edges in the image.
  11. SW = 5'd10: Thresholding

    • This operation thresholds the image on the red component: if the red component is below 550 (out of a 10-bit maximum of 1023), it outputs the luminosity grayscale; otherwise, it passes the original pixel through.
  12. SW = 5'd11: Thresholding (G component)

    • Similar to the previous operation, but based on the green component for thresholding.
  13. SW = 5'd12: Thresholding (B component)

    • Similar to the previous operation, but based on the blue component for thresholding.
  14. SW = 5'd13: Average Filter

    • It applies an average filter to the image, smoothing it based on the average values of the neighboring pixels. 
  15. SW = 5'd14: Skin Detection

    • It performs skin detection based on simplified YUV components (in the code, U is computed as R - G). If U falls within the range 40 to 296, it outputs the original pixel; otherwise, it outputs black.

These features and filters provide a wide range of image processing capabilities, making the DE2-115 platform versatile for applications from color-channel visualization to image enhancement and edge detection.

 

module image_processing(
    input VGA_CLK,
    input [4:0] SW,
    input [29:0] datain_x0y0,
    input [29:0] datain_x1y0,
    input [29:0] datain_x2y0,
    input [29:0] datain_x0y1,
    input [29:0] datain_x1y1,
    input [29:0] datain_x2y1,
    input [29:0] datain_x0y2,
    input [29:0] datain_x1y2,
    input [29:0] datain_x2y2,
    output reg [29:0] RGB_out
);

// Constants for color component weights
localparam WEIGHT_R = 100;
localparam WEIGHT_G = 95;
localparam WEIGHT_B = 82;

// Constants for luminosity calculation
localparam LUMA_R_WEIGHT = 30;
localparam LUMA_G_WEIGHT = 60;
localparam LUMA_B_WEIGHT = 11;

// Other constants
localparam GRAY_SHIFT = 2;
localparam SKIN_U_MIN = 40;
localparam SKIN_U_MAX = 296;

reg [9:0] rrsepia;
reg [9:0] ggsepia;
reg [9:0] bbsepia;
reg [19:0] lumir;
reg [19:0] lumig;
reg [19:0] lumib;
reg [16:0] lumin;
reg [39:0] Gx;
reg [39:0] Gy;
reg [39:0] Gn; // wide enough to hold |Gx| + |Gy|; the low 10 bits are displayed
reg [9:0] avg_r; // per-channel 3x3 averages for the average filter (SW = 13)
reg [9:0] avg_g;
reg [9:0] avg_b;
reg [9:0] Y;
reg [9:0] U;
reg [9:0] V;
reg [29:0] skin;



// Macro for replicating one component across R, G, and B
`define RGB_COMPONENT(component) {component, component, component}

// Macro for a 3x3 average over one 10-bit color field (H:L selects the field).
// Computing each field separately keeps carries from bleeding between the
// packed R, G, and B channels.
`define KERNEL_3X3(H, L) ((datain_x0y0[H:L] + datain_x1y0[H:L] + datain_x2y0[H:L] + datain_x0y1[H:L] + datain_x1y1[H:L] + datain_x2y1[H:L] + datain_x0y2[H:L] + datain_x1y2[H:L] + datain_x2y2[H:L]) / 9)

always @(*)
begin
    // Calculate sepia components
    rrsepia = (datain_x0y0[29:20] * WEIGHT_R) / 100;
    ggsepia = (datain_x0y0[19:10] * WEIGHT_G) / 100;
    bbsepia = (datain_x0y0[9:0] * WEIGHT_B) / 100;

    // Calculate luminosity
    lumir = datain_x0y0[29:20] * LUMA_R_WEIGHT;
    lumig = datain_x0y0[19:10] * LUMA_G_WEIGHT;
    lumib = datain_x0y0[9:0] * LUMA_B_WEIGHT;
    lumin = lumir + lumig + lumib;

    // Calculate Gx and Gy for edge detection (Sobel). Note: the sums run over
    // the packed 30-bit words, so this only approximates per-channel gradients.
    Gx = (datain_x2y0 + 2 * datain_x2y1 + datain_x2y2) - (datain_x0y0 + 2 * datain_x0y1 + datain_x0y2);
    Gy = (datain_x0y0 + 2 * datain_x1y0 + datain_x2y0) - (datain_x0y2 + 2 * datain_x1y2 + datain_x2y2);

    // Approximate Gn = |Gx| + |Gy|. Bit 39 is the sign of the 40-bit
    // difference, and ~x approximates -x (one's complement, off by one).
    if (Gx[39] == 0 && Gy[39] == 0) begin
        Gn = Gx + Gy;
    end
    else if (Gx[39] == 1 && Gy[39] == 0) begin
        Gn = (~Gx) + Gy;
    end
    else if (Gx[39] == 0 && Gy[39] == 1) begin
        Gn = Gx + (~Gy);
    end
    else begin
        Gn = (~Gx) + (~Gy);
    end
    
    // Calculate simplified YUV components: Y is a quick luma average,
    // U = R - G, and V = B - G
    Y = (datain_x0y0[29:20] + (2 * datain_x0y0[19:10]) + datain_x0y0[9:0]) / 4;
    U = datain_x0y0[29:20] - datain_x0y0[19:10];
    V = datain_x0y0[9:0] - datain_x0y0[19:10];

    // Skin detection based on YUV components
    if (U >= SKIN_U_MIN && U <= SKIN_U_MAX)
	begin
        skin = datain_x0y0;
	end
    else begin
        skin = 30'd0;
	end


    // Per-channel 3x3 averages for the average filter (SW = 13)
    avg_r = `KERNEL_3X3(29, 20);
    avg_g = `KERNEL_3X3(19, 10);
    avg_b = `KERNEL_3X3(9, 0);

    // Filter selection based on SW
    case (SW)
        5'd0: RGB_out = {datain_x0y0[29:20], 20'd0}; // Component R
        5'd1: RGB_out = {10'd0, datain_x0y0[19:10], 10'd0}; // Component G
        5'd2: RGB_out = {20'd0, datain_x0y0[9:0]}; // Component B
        5'd3: RGB_out = {datain_x0y0[29:20], datain_x0y0[29:20], datain_x0y0[29:20]}; // Grayscale based on R
        5'd4: RGB_out = {datain_x0y0[19:10], datain_x0y0[19:10], datain_x0y0[19:10]}; // Grayscale based on G
        5'd5: RGB_out = {datain_x0y0[9:0], datain_x0y0[9:0], datain_x0y0[9:0]}; // Grayscale based on B
        5'd6: RGB_out = {lumin[16:7], lumin[16:7], lumin[16:7]}; // Luminosity
        5'd7: RGB_out = {~datain_x0y0[29:20], ~datain_x0y0[19:10], ~datain_x0y0[9:0]}; // Negative
        5'd8: RGB_out = {rrsepia[9:0], ggsepia[9:0], bbsepia[9:0]}; // Sepia
        5'd9: RGB_out = {Gn[9:0], Gn[9:0], Gn[9:0]}; // Edge Detection (Gn)
        5'd10: RGB_out = (datain_x0y0[29:20] < 550) ? {lumin[16:7], lumin[16:7], lumin[16:7]} : datain_x0y0; // Thresholding (R component)
        5'd11: RGB_out = (datain_x0y0[19:10] < 550) ? {lumin[16:7], lumin[16:7], lumin[16:7]} : datain_x0y0; // Thresholding (G component)
        5'd12: RGB_out = (datain_x0y0[9:0] < 550) ? {lumin[16:7], lumin[16:7], lumin[16:7]} : datain_x0y0; // Thresholding (B component)
        5'd13: RGB_out = {avg_r, avg_g, avg_b}; // Average Filter
        5'd14: RGB_out = skin; // Skin Detection

        default: RGB_out = datain_x0y0; // Default case
    endcase
end

endmodule
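
To exercise the module in simulation, a minimal, hypothetical testbench (not part of the project files) can sweep the switch settings while feeding the same pixel to all nine window taps:

module image_processing_tb;
    reg [4:0] SW;
    reg [29:0] px = {10'd800, 10'd400, 10'd200}; // flat test pixel: R=800, G=400, B=200
    wire [29:0] RGB_out;
    integer i;

    // A uniform 3x3 window: every tap sees the same pixel. VGA_CLK is tied
    // low because the filtering logic is purely combinational.
    image_processing dut (
        .VGA_CLK(1'b0), .SW(SW),
        .datain_x0y0(px), .datain_x1y0(px), .datain_x2y0(px),
        .datain_x0y1(px), .datain_x1y1(px), .datain_x2y1(px),
        .datain_x0y2(px), .datain_x1y2(px), .datain_x2y2(px),
        .RGB_out(RGB_out)
    );

    initial begin
        for (i = 0; i < 15; i = i + 1) begin
            SW = i;
            #10 $display("SW=%0d -> R=%0d G=%0d B=%0d",
                         SW, RGB_out[29:20], RGB_out[19:10], RGB_out[9:0]);
        end
        $finish;
    end
endmodule

On a uniform window the edge-detection output (SW = 9) should read 0 and the average filter (SW = 13) should return the input pixel unchanged, which makes this an easy sanity check.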

 

// --------------------------------------------------------------------
//
// Major Functions:	3 Line Buffer, for Image Kernels
//
// --------------------------------------------------------------------
//
// Revision History :
// --------------------------------------------------------------------
//   Ver  :| Author            :| Mod. Date :| Changes Made:
//   V1.0 :| Holguer A Becerra :| 17/15/02  :| Initial Revision
// --------------------------------------------------------------------


module linebuffer_3Lines(data,EN, clock, dataout,
	dataout_x0y0,
	dataout_x1y0,
	dataout_x2y0,
	dataout_x0y1,
	dataout_x1y1,
	dataout_x2y1,
	dataout_x0y2,
	dataout_x1y2,
	dataout_x2y2);
parameter NUMBER_OF_LINES=3;
parameter WIDTH=800;
parameter NUMBER_OF=NUMBER_OF_LINES*WIDTH;
parameter BUS_SIZE=30;
input clock;
input EN;
input [BUS_SIZE-1:0]data;
output [BUS_SIZE-1:0]dataout;
output [BUS_SIZE-1:0]dataout_x0y0;
output [BUS_SIZE-1:0]dataout_x1y0;
output [BUS_SIZE-1:0]dataout_x2y0;
output [BUS_SIZE-1:0]dataout_x0y1;
output [BUS_SIZE-1:0]dataout_x1y1;
output [BUS_SIZE-1:0]dataout_x2y1;
output [BUS_SIZE-1:0]dataout_x0y2;
output [BUS_SIZE-1:0]dataout_x1y2;
output [BUS_SIZE-1:0]dataout_x2y2;





reg [BUS_SIZE-1:0] fp_delay [0:NUMBER_OF-1]; // shift register holding NUMBER_OF_LINES full lines

assign dataout[BUS_SIZE-1:0] = fp_delay[NUMBER_OF-1][BUS_SIZE-1:0];

// Stage 0 captures the incoming pixel whenever EN is asserted
always@(posedge clock)
begin
	if(EN) fp_delay[0][BUS_SIZE-1:0] <= data[BUS_SIZE-1:0];
end

// Each later stage copies from the previous one, forming one long delay line
genvar index;
generate
for (index=NUMBER_OF-1; index >= 1; index=index-1)
	begin: delay_generate
		always@(posedge clock)
			begin
				if(EN) fp_delay[index][BUS_SIZE-1:0] <= fp_delay[index-1][BUS_SIZE-1:0];
			end
	end
endgenerate
// Window taps: y0 is the oldest (deepest) buffered line and y2 the newest;
// within a line, x0 is the oldest pixel and x2 the most recent
assign dataout_x0y0[BUS_SIZE-1:0] = fp_delay[(NUMBER_OF-1)][BUS_SIZE-1:0];
assign dataout_x1y0[BUS_SIZE-1:0] = fp_delay[(NUMBER_OF-2)][BUS_SIZE-1:0];
assign dataout_x2y0[BUS_SIZE-1:0] = fp_delay[(NUMBER_OF-3)][BUS_SIZE-1:0];
assign dataout_x0y1[BUS_SIZE-1:0] = fp_delay[(NUMBER_OF-WIDTH-1)][BUS_SIZE-1:0];
assign dataout_x1y1[BUS_SIZE-1:0] = fp_delay[(NUMBER_OF-WIDTH-2)][BUS_SIZE-1:0];
assign dataout_x2y1[BUS_SIZE-1:0] = fp_delay[(NUMBER_OF-WIDTH-3)][BUS_SIZE-1:0];
assign dataout_x0y2[BUS_SIZE-1:0] = fp_delay[(NUMBER_OF-(2*WIDTH)-1)][BUS_SIZE-1:0];
assign dataout_x1y2[BUS_SIZE-1:0] = fp_delay[(NUMBER_OF-(2*WIDTH)-2)][BUS_SIZE-1:0];
assign dataout_x2y2[BUS_SIZE-1:0] = fp_delay[(NUMBER_OF-(2*WIDTH)-3)][BUS_SIZE-1:0];




endmodule	
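
Wiring the two modules together might look like the following hypothetical top-level fragment; pixel_in, pixel_valid, SW, and VGA_CLK are assumed to come from the surrounding video pipeline:

// Hypothetical top-level fragment: the decoded video stream feeds the line
// buffer, and the buffered 3x3 window feeds the filter-selection module.
wire [29:0] w00, w10, w20, w01, w11, w21, w02, w12, w22;
wire [29:0] delayed_pixel;  // input stream delayed by 3 full lines (dataout)
wire [29:0] filtered_pixel; // RGB_out of the currently selected filter

linebuffer_3Lines #(.WIDTH(800), .BUS_SIZE(30)) buffer3 (
    .data(pixel_in),   // 30-bit RGB pixel from the ADV7180 path (assumed)
    .EN(pixel_valid),  // asserted once per active pixel (assumed)
    .clock(VGA_CLK),
    .dataout(delayed_pixel),
    .dataout_x0y0(w00), .dataout_x1y0(w10), .dataout_x2y0(w20),
    .dataout_x0y1(w01), .dataout_x1y1(w11), .dataout_x2y1(w21),
    .dataout_x0y2(w02), .dataout_x1y2(w12), .dataout_x2y2(w22)
);

image_processing filters (
    .VGA_CLK(VGA_CLK), .SW(SW),
    .datain_x0y0(w00), .datain_x1y0(w10), .datain_x2y0(w20),
    .datain_x0y1(w01), .datain_x1y1(w11), .datain_x2y1(w21),
    .datain_x0y2(w02), .datain_x1y2(w12), .datain_x2y2(w22),
    .RGB_out(filtered_pixel)
);

Note that WIDTH must match the horizontal resolution of the buffered video; with the default of 800, the window taps line up correctly only for 800-pixel lines.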

 

Now let's add further filters to the Verilog module image_processing.v: Box Blur, Gaussian Blur, Emboss, Sharpening, and High-Pass (Edge Enhancement). Each new filter operates on the 10-bit R, G, and B fields separately and clamps its result to the valid range:

module image_processing(
    input VGA_CLK,
    input [4:0] SW,
    input [29:0] datain_x0y0,
    input [29:0] datain_x1y0,
    input [29:0] datain_x2y0,
    input [29:0] datain_x0y1,
    input [29:0] datain_x1y1,
    input [29:0] datain_x2y1,
    input [29:0] datain_x0y2,
    input [29:0] datain_x1y2,
    input [29:0] datain_x2y2,
    output reg [29:0] RGB_out
);

// Constants for color component weights
localparam WEIGHT_R = 100;
localparam WEIGHT_G = 95;
localparam WEIGHT_B = 82;

// Constants for luminosity calculation
localparam LUMA_R_WEIGHT = 30;
localparam LUMA_G_WEIGHT = 60;
localparam LUMA_B_WEIGHT = 11;

// Other constants
localparam GRAY_SHIFT = 2;
localparam SKIN_U_MIN = 40;
localparam SKIN_U_MAX = 296;

reg [9:0] rrsepia;
reg [9:0] ggsepia;
reg [9:0] bbsepia;
reg [19:0] lumir;
reg [19:0] lumig;
reg [19:0] lumib;
reg [16:0] lumin;
reg [39:0] Gx;
reg [39:0] Gy;
reg [39:0] Gn; // wide enough to hold |Gx| + |Gy|; the low 10 bits are displayed
reg [9:0] avg_r; // per-channel 3x3 averages for the average filter (SW = 13)
reg [9:0] avg_g;
reg [9:0] avg_b;
reg [9:0] Y;
reg [9:0] U;
reg [9:0] V;
reg [29:0] skin;


// New image processing filters
reg [19:0] box_blur_r;
reg [19:0] box_blur_g;
reg [19:0] box_blur_b;
reg [9:0] gaussian_blur_r;
reg [9:0] gaussian_blur_g;
reg [9:0] gaussian_blur_b;
reg [9:0] emboss_r;
reg [9:0] emboss_g;
reg [9:0] emboss_b;
reg [9:0] sharpen_r;
reg [9:0] sharpen_g;
reg [9:0] sharpen_b;
reg [9:0] high_pass_r;
reg [9:0] high_pass_g;
reg [9:0] high_pass_b;
// Wide signed temporaries for filters whose intermediate results can go
// negative or exceed the 10-bit range before clamping
integer emb_r, emb_g, emb_b;
integer shp_r, shp_g, shp_b;
integer hp_r, hp_g, hp_b;


// Macro for replicating one component across R, G, and B
`define RGB_COMPONENT(component) {component, component, component}

// Macro for a 3x3 average over one 10-bit color field (H:L selects the field).
// Computing each field separately keeps carries from bleeding between the
// packed R, G, and B channels.
`define KERNEL_3X3(H, L) ((datain_x0y0[H:L] + datain_x1y0[H:L] + datain_x2y0[H:L] + datain_x0y1[H:L] + datain_x1y1[H:L] + datain_x2y1[H:L] + datain_x0y2[H:L] + datain_x1y2[H:L] + datain_x2y2[H:L]) / 9)

always @(*)
begin
    // Calculate sepia components
    rrsepia = (datain_x0y0[29:20] * WEIGHT_R) / 100;
    ggsepia = (datain_x0y0[19:10] * WEIGHT_G) / 100;
    bbsepia = (datain_x0y0[9:0] * WEIGHT_B) / 100;

    // Calculate luminosity
    lumir = datain_x0y0[29:20] * LUMA_R_WEIGHT;
    lumig = datain_x0y0[19:10] * LUMA_G_WEIGHT;
    lumib = datain_x0y0[9:0] * LUMA_B_WEIGHT;
    lumin = lumir + lumig + lumib;

    // Calculate Gx and Gy for edge detection (Sobel). Note: the sums run over
    // the packed 30-bit words, so this only approximates per-channel gradients.
    Gx = (datain_x2y0 + 2 * datain_x2y1 + datain_x2y2) - (datain_x0y0 + 2 * datain_x0y1 + datain_x0y2);
    Gy = (datain_x0y0 + 2 * datain_x1y0 + datain_x2y0) - (datain_x0y2 + 2 * datain_x1y2 + datain_x2y2);

    // Approximate Gn = |Gx| + |Gy|. Bit 39 is the sign of the 40-bit
    // difference, and ~x approximates -x (one's complement, off by one).
    if (Gx[39] == 0 && Gy[39] == 0) begin
        Gn = Gx + Gy;
    end
    else if (Gx[39] == 1 && Gy[39] == 0) begin
        Gn = (~Gx) + Gy;
    end
    else if (Gx[39] == 0 && Gy[39] == 1) begin
        Gn = Gx + (~Gy);
    end
    else begin
        Gn = (~Gx) + (~Gy);
    end
    
    // Calculate simplified YUV components: Y is a quick luma average,
    // U = R - G, and V = B - G
    Y = (datain_x0y0[29:20] + (2 * datain_x0y0[19:10]) + datain_x0y0[9:0]) / 4;
    U = datain_x0y0[29:20] - datain_x0y0[19:10];
    V = datain_x0y0[9:0] - datain_x0y0[19:10];

    // Skin detection based on YUV components
    if (U >= SKIN_U_MIN && U <= SKIN_U_MAX)
	begin
        skin = datain_x0y0;
	end
    else begin
        skin = 30'd0;
	end



    // Box blur: per-channel 3x3 sum; >> 3 divides by 8 as a cheap hardware
    // approximation of the exact divide-by-9 (slightly bright for large sums)
    box_blur_r = (datain_x0y0[29:20] + datain_x1y0[29:20] + datain_x2y0[29:20] + datain_x0y1[29:20] + datain_x1y1[29:20] + datain_x2y1[29:20] + datain_x0y2[29:20] + datain_x1y2[29:20] + datain_x2y2[29:20]) >> 3;
    box_blur_g = (datain_x0y0[19:10] + datain_x1y0[19:10] + datain_x2y0[19:10] + datain_x0y1[19:10] + datain_x1y1[19:10] + datain_x2y1[19:10] + datain_x0y2[19:10] + datain_x1y2[19:10] + datain_x2y2[19:10]) >> 3;
    box_blur_b = (datain_x0y0[9:0] + datain_x1y0[9:0] + datain_x2y0[9:0] + datain_x0y1[9:0] + datain_x1y1[9:0] + datain_x2y1[9:0] + datain_x0y2[9:0] + datain_x1y2[9:0] + datain_x2y2[9:0]) >> 3;

    // Gaussian blur: 3x3 kernel (1 2 1 / 2 4 2 / 1 2 1) per channel,
    // normalized by >> 4 since the weights sum to 16
    gaussian_blur_r = (datain_x0y0[29:20] + datain_x2y0[29:20] + datain_x0y2[29:20] + datain_x2y2[29:20] + 2 * (datain_x1y0[29:20] + datain_x0y1[29:20] + datain_x2y1[29:20] + datain_x1y2[29:20]) + 4 * datain_x1y1[29:20]) >> 4;
    gaussian_blur_g = (datain_x0y0[19:10] + datain_x2y0[19:10] + datain_x0y2[19:10] + datain_x2y2[19:10] + 2 * (datain_x1y0[19:10] + datain_x0y1[19:10] + datain_x2y1[19:10] + datain_x1y2[19:10]) + 4 * datain_x1y1[19:10]) >> 4;
    gaussian_blur_b = (datain_x0y0[9:0] + datain_x2y0[9:0] + datain_x0y2[9:0] + datain_x2y2[9:0] + 2 * (datain_x1y0[9:0] + datain_x0y1[9:0] + datain_x2y1[9:0] + datain_x1y2[9:0]) + 4 * datain_x1y1[9:0]) >> 4;

    // Emboss (simplified): vertical gradient of the outer columns, biased to
    // mid-scale (512) so flat regions render gray, then clamped to 0..1023
    emb_r = 512 + (($signed({1'b0, datain_x0y0[29:20]}) + $signed({1'b0, datain_x2y0[29:20]}) - $signed({1'b0, datain_x0y2[29:20]}) - $signed({1'b0, datain_x2y2[29:20]})) / 2);
    emb_g = 512 + (($signed({1'b0, datain_x0y0[19:10]}) + $signed({1'b0, datain_x2y0[19:10]}) - $signed({1'b0, datain_x0y2[19:10]}) - $signed({1'b0, datain_x2y2[19:10]})) / 2);
    emb_b = 512 + (($signed({1'b0, datain_x0y0[9:0]}) + $signed({1'b0, datain_x2y0[9:0]}) - $signed({1'b0, datain_x0y2[9:0]}) - $signed({1'b0, datain_x2y2[9:0]})) / 2);
    emboss_r = (emb_r < 0) ? 10'd0 : (emb_r > 1023) ? 10'd1023 : emb_r;
    emboss_g = (emb_g < 0) ? 10'd0 : (emb_g > 1023) ? 10'd1023 : emb_g;
    emboss_b = (emb_b < 0) ? 10'd0 : (emb_b > 1023) ? 10'd1023 : emb_b;

    // Sharpening: 5*center minus the four edge-adjacent neighbors, matching
    // the kernel (0 -1 0 / -1 5 -1 / 0 -1 0); results are clamped to 0..1023
    shp_r = 5 * datain_x1y1[29:20] - datain_x1y0[29:20] - datain_x0y1[29:20] - datain_x2y1[29:20] - datain_x1y2[29:20];
    shp_g = 5 * datain_x1y1[19:10] - datain_x1y0[19:10] - datain_x0y1[19:10] - datain_x2y1[19:10] - datain_x1y2[19:10];
    shp_b = 5 * datain_x1y1[9:0] - datain_x1y0[9:0] - datain_x0y1[9:0] - datain_x2y1[9:0] - datain_x1y2[9:0];
    sharpen_r = (shp_r < 0) ? 10'd0 : (shp_r > 1023) ? 10'd1023 : shp_r;
    sharpen_g = (shp_g < 0) ? 10'd0 : (shp_g > 1023) ? 10'd1023 : shp_g;
    sharpen_b = (shp_b < 0) ? 10'd0 : (shp_b > 1023) ? 10'd1023 : shp_b;

    // High-pass / edge enhancement: 9*center minus all eight neighbors,
    // matching the kernel (-1 -1 -1 / -1 9 -1 / -1 -1 -1); clamped to 0..1023
    hp_r = 9 * datain_x1y1[29:20] - datain_x0y0[29:20] - datain_x1y0[29:20] - datain_x2y0[29:20] - datain_x0y1[29:20] - datain_x2y1[29:20] - datain_x0y2[29:20] - datain_x1y2[29:20] - datain_x2y2[29:20];
    hp_g = 9 * datain_x1y1[19:10] - datain_x0y0[19:10] - datain_x1y0[19:10] - datain_x2y0[19:10] - datain_x0y1[19:10] - datain_x2y1[19:10] - datain_x0y2[19:10] - datain_x1y2[19:10] - datain_x2y2[19:10];
    hp_b = 9 * datain_x1y1[9:0] - datain_x0y0[9:0] - datain_x1y0[9:0] - datain_x2y0[9:0] - datain_x0y1[9:0] - datain_x2y1[9:0] - datain_x0y2[9:0] - datain_x1y2[9:0] - datain_x2y2[9:0];
    high_pass_r = (hp_r < 0) ? 10'd0 : (hp_r > 1023) ? 10'd1023 : hp_r;
    high_pass_g = (hp_g < 0) ? 10'd0 : (hp_g > 1023) ? 10'd1023 : hp_g;
    high_pass_b = (hp_b < 0) ? 10'd0 : (hp_b > 1023) ? 10'd1023 : hp_b;

    // Per-channel 3x3 averages for the average filter (SW = 13)
    avg_r = `KERNEL_3X3(29, 20);
    avg_g = `KERNEL_3X3(19, 10);
    avg_b = `KERNEL_3X3(9, 0);

    // Filter selection based on SW
    case (SW)
        5'd0: RGB_out = {datain_x0y0[29:20], 20'd0}; // Component R
        5'd1: RGB_out = {10'd0, datain_x0y0[19:10], 10'd0}; // Component G
        5'd2: RGB_out = {20'd0, datain_x0y0[9:0]}; // Component B
        5'd3: RGB_out = {datain_x0y0[29:20], datain_x0y0[29:20], datain_x0y0[29:20]}; // Grayscale based on R
        5'd4: RGB_out = {datain_x0y0[19:10], datain_x0y0[19:10], datain_x0y0[19:10]}; // Grayscale based on G
        5'd5: RGB_out = {datain_x0y0[9:0], datain_x0y0[9:0], datain_x0y0[9:0]}; // Grayscale based on B
        5'd6: RGB_out = {lumin[16:7], lumin[16:7], lumin[16:7]}; // Luminosity
        5'd7: RGB_out = {~datain_x0y0[29:20], ~datain_x0y0[19:10], ~datain_x0y0[9:0]}; // Negative
        5'd8: RGB_out = {rrsepia[9:0], ggsepia[9:0], bbsepia[9:0]}; // Sepia
        5'd9: RGB_out = {Gn[9:0], Gn[9:0], Gn[9:0]}; // Edge Detection (Gn)
        5'd10: RGB_out = (datain_x0y0[29:20] < 550) ? {lumin[16:7], lumin[16:7], lumin[16:7]} : datain_x0y0; // Thresholding (R component)
        5'd11: RGB_out = (datain_x0y0[19:10] < 550) ? {lumin[16:7], lumin[16:7], lumin[16:7]} : datain_x0y0; // Thresholding (G component)
        5'd12: RGB_out = (datain_x0y0[9:0] < 550) ? {lumin[16:7], lumin[16:7], lumin[16:7]} : datain_x0y0; // Thresholding (B component)
        5'd13: RGB_out = {avg_r, avg_g, avg_b}; // Average Filter
        5'd14: RGB_out = skin; // Skin Detection

        5'd15: RGB_out = {box_blur_r[9:0], box_blur_g[9:0], box_blur_b[9:0]}; // Box Blur Filter
        5'd16: RGB_out = {gaussian_blur_r[9:0], gaussian_blur_g[9:0], gaussian_blur_b[9:0]}; // Gaussian Blur Filter
        5'd17: RGB_out = {emboss_r[9:0], emboss_g[9:0], emboss_b[9:0]}; // Emboss Filter
        5'd18: RGB_out = {sharpen_r[9:0], sharpen_g[9:0], sharpen_b[9:0]}; // Sharpening Filter
        5'd19: RGB_out = {high_pass_r[9:0], high_pass_g[9:0], high_pass_b[9:0]}; // High-Pass Filter
        default: RGB_out = datain_x0y0; // Default case
    endcase
end

endmodule

 

The incorporation of additional image processing functions greatly enhances the versatility and utility of the image_processing module. This module now offers a wide range of image manipulation capabilities, allowing for the transformation and enhancement of images in various ways.

The original set of operations, including color component extraction (R, G, B), grayscale conversion, luminosity calculation, negative image creation, sepia tone effect, edge detection (Sobel), thresholding, average filtering, and skin detection, remains integral to the module's functionality.

In addition to these foundational operations, the inclusion of new filters extends the possibilities for image processing. These filters include:

  • Box Blur Filter: This filter smooths the image by averaging the color values within a 3x3 neighborhood, reducing noise and creating a gentle blur effect.

  • Gaussian Blur Filter: The Gaussian blur produces a more natural-looking blur effect by applying a weighted average within a 3x3 neighborhood.

  • Emboss Filter: This filter enhances the edges and depth of objects in the image, creating a 3D-like embossed effect.

  • Sharpening Filter: Sharpening increases image contrast by emphasizing edges and fine details, making the image appear crisper.

  • High-Pass Filter (Edge Enhancement): The high-pass filter enhances edge contrast by amplifying high-frequency components while suppressing low-frequency elements.

These additional operations expand the image_processing module's applicability across a wider range of image enhancement and manipulation tasks. Whether it's achieving a smooth, blurred effect, enhancing edges and fine details, or creating embossed or sharpened images, this module empowers users with a comprehensive toolset for image processing.

With these capabilities, the image_processing module is a valuable asset for applications in fields like computer vision, medical imaging, video surveillance, and more, where real-time image processing and manipulation are essential. Its flexibility, when coupled with FPGA-based hardware, makes it a robust choice for meeting specific image processing requirements.

 

Conclusion

Three-line delay buffers play a crucial role in image processing for several reasons:

  1. Local Context: Image processing often involves analyzing a pixel in relation to its neighboring pixels. A three-line buffer provides a local context by storing a small region of the image. This context is essential for operations like convolution and filtering, allowing us to process pixels in their neighborhood and achieve accurate results.

  2. Edge Detection: Many image processing tasks, such as edge detection, rely on identifying abrupt changes in pixel intensity. A three-line buffer enables the examination of pixel values across multiple lines, which is fundamental for detecting these intensity transitions and edges effectively.

  3. Efficiency: Processing the entire image frame at once can be resource-intensive and time-consuming. Line buffers allow for processing data in smaller, manageable chunks, which is more efficient in terms of memory and computational resources.

  4. Parallel Processing: FPGAs and other hardware platforms often use line buffers to facilitate parallel processing. By having access to a local context, it becomes easier to process multiple pixels simultaneously. This parallelism significantly speeds up image processing operations.

  5. Real-Time Processing: In applications where real-time or near-real-time image processing is essential, line buffers ensure that image data can be processed without delay, as they maintain a continuous stream of data for analysis.

In summary, three-line delay buffers are vital for image processing because they provide a context for pixel analysis, support various image processing tasks, improve efficiency, enable parallel processing, and make real-time processing feasible. These buffers are foundational components in image processing pipelines, helping to enhance the quality and speed of image analysis and manipulation.