Graphics Systems
- Most modern graphics systems
- Input devices
- CPU
- GPU - Graphics Processing Unit
- Memory
- Frame Buffer
- Output Devices
- Note the separation of I/O devices
- We pull the GPU out,
- And the frame buffer has probably never shown up before.
- Look at the picture on the bottom of page 5.
- Pixel stands for picture element.
- It is a single "square" (we hope) in a display
- Pixels come from early output devices.
- A raster is a two dimensional array of pixels.
- In the end, this is what MOST output devices are capable of displaying.
- A chunk of memory, in which the individual pixels are stored is called a frame buffer.
- The frame buffer contains information such as
- Pixel color
- Current pixel depth
- The size of the frame buffer is determined by
- The number of pixels (width x height)
- The information stored at each pixel
- The depth - perhaps an int
- Color - perhaps an int (or a short x 3 or 4)
- As memory has become less expensive, frame buffers have changed over the years.
- But the basic idea remains the same.
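As a sketch of the size calculation above, here is the arithmetic for a hypothetical 1920 x 1080 frame buffer, assuming 4 bytes of color (RGBA) and a 4-byte int depth per pixel; the resolution and byte counts are illustrative, not from the notes.

```python
# Hypothetical frame buffer: width x height pixels, each storing
# a color and a depth value.
WIDTH, HEIGHT = 1920, 1080
BYTES_COLOR = 4   # e.g. one byte each for R, G, B, A
BYTES_DEPTH = 4   # e.g. a 32-bit int depth

pixels = WIDTH * HEIGHT
total_bytes = pixels * (BYTES_COLOR + BYTES_DEPTH)

print(pixels)       # 2073600 pixels
print(total_bytes)  # 16588800 bytes, about 15.8 MiB
```

At roughly 16 MiB per buffer, it is easy to see why early frame buffers were small and why cheaper memory changed their design.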
- In the past the CPU was responsible for most processing.
- Currently GPUs have taken over much of this responsibility.
- Graphics primitives such as lines, circles, polygons, ... are specified
- Some entity converts from the primitive to a series of pixels.
- This is called rasterization or scan conversion
- These converted values are written to the frame buffer.
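The notes do not name a particular rasterization algorithm, but a classic example of scan-converting a line primitive into pixels is Bresenham's line algorithm; a minimal sketch:

```python
def raster_line(x0, y0, x1, y1):
    """Scan-convert a line into a list of pixel coordinates
    (Bresenham's line algorithm, integer arithmetic only)."""
    pixels = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        pixels.append((x0, y0))          # this pixel would be written
        if x0 == x1 and y0 == y1:        # to the frame buffer
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return pixels

print(raster_line(0, 0, 4, 2))
# [(0, 0), (1, 1), (2, 1), (3, 2), (4, 2)]
```

Each returned coordinate is a pixel that would be written to the frame buffer with the line's color.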
- Output Devices
- The nearly extinct CRT - Cathode-ray tube.
- Screen coated with phosphor
- A beam of electrons strikes the screen causing it to glow for a period.
- There are electronics to control where the beam is striking the screen.
- There are electronics which control if the beam is on or not.
- The refresh rate is how often the beam covers the entire screen. (30-120 times per second)
- This combined with the frame buffer should provide a clue how monochrome images are displayed.
- Grey scale displays
- Color displays
- Plasma Displays, LCD, ...
- Most printers
- A different beast: Plotters
- Have a physical pen moved by a robot arm.
- Better to deal with vector graphics.
- Work like we do.
- Input Devices
- Two basic categories, plus new things.
- Keystroke and button press devices
- Pointing devices
- For keystroke, and button press devices,
- we are informed when the state of the buttons has changed.
- Sometimes this is "the X button has been pressed"
- Sometimes this is
- Button press
- Button release
- The operating system will buffer these for us (usually)
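A rough sketch of what that OS buffering looks like, using a queue; the function names here are made up for illustration, not a real OS API:

```python
from collections import deque

event_queue = deque()

def on_hardware_interrupt(key, pressed):
    # The OS records press/release transitions as they happen.
    event_queue.append(("press" if pressed else "release", key))

def next_event():
    # The program drains buffered events later, at its own pace.
    return event_queue.popleft() if event_queue else None

on_hardware_interrupt("X", True)
on_hardware_interrupt("X", False)
print(next_event())  # ('press', 'X')
print(next_event())  # ('release', 'X')
```

Because the queue sits between the hardware and the program, no button transition is lost even if the program is busy when it happens.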
- For pointing devices
- Mostly we get changes in position.
- Look at the mouse.
- A mouse reports changes in two dimensions.
- My space mouse reports changes in six dimensions. (or degrees of freedom)
- These are both relative positioning devices.
- A joystick, or even a steering wheel, works the same way.
- Devices like data tablets and touch screens are the exception
- These are absolute positioning devices.
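The relative/absolute distinction can be sketched in a few lines; the class names are illustrative, but the idea is real: the system must integrate a mouse's deltas into a position, while a tablet reports the position directly.

```python
class RelativeDevice:
    """E.g. a mouse: reports changes in position (deltas)."""
    def __init__(self):
        self.x, self.y = 0, 0

    def report(self, dx, dy):
        # The system accumulates deltas into a current position.
        self.x += dx
        self.y += dy
        return (self.x, self.y)

class AbsoluteDevice:
    """E.g. a data tablet or touch screen: reports the position itself."""
    def report(self, x, y):
        return (x, y)

mouse = RelativeDevice()
mouse.report(5, 3)
print(mouse.report(2, -1))   # (7, 2): deltas integrated over time

tablet = AbsoluteDevice()
print(tablet.report(7, 2))   # (7, 2): no integration needed
```

This is also why you can lift a mouse and set it down elsewhere without the cursor jumping, but not a tablet pen.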
- But most systems provide a layer between the raw devices (or even the raw devices as they appear to the OS) and the programmer
- Logical devices
- This layer allows programs to be developed independent of the device.
- For any device, we generally only get information when an action occurs within our area of the screen.
- Frequently we get coordinates relative to our location on the screen.
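A quick sketch of that coordinate translation, assuming a hypothetical window whose top-left corner sits at (wx, wy) on the screen:

```python
def to_window_coords(screen_x, screen_y, wx, wy):
    # Translate a screen coordinate into coordinates relative
    # to the window's top-left corner.
    return (screen_x - wx, screen_y - wy)

def in_window(screen_x, screen_y, wx, wy, width, height):
    # Events outside this region would never be delivered to us.
    lx, ly = to_window_coords(screen_x, screen_y, wx, wy)
    return 0 <= lx < width and 0 <= ly < height

# A click at screen (350, 220) with our 640x480 window at (300, 200):
print(to_window_coords(350, 220, 300, 200))      # (50, 20)
print(in_window(350, 220, 300, 200, 640, 480))   # True
```

The program only ever sees (50, 20); the windowing layer hides where the window actually sits on the screen.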
- Three modes for input devices
- Request mode - the way you write programs in 130/230/330 ...
- Sample mode - Ask the device for the current state.
- Event mode - when the device changes, an event is sent to your program
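The three modes can be contrasted in one small sketch; the `Device` class and its methods are hypothetical, meant only to show the control flow each mode implies.

```python
from collections import deque

class Device:
    def __init__(self):
        self.state = "up"        # current button state
        self.events = deque()    # pending events (event mode)

    def request(self):
        # Request mode: the program blocks until input arrives,
        # like reading from the keyboard in an intro-course program.
        # Simulated here with a canned value.
        return "button pressed"

    def sample(self):
        # Sample mode: read the current state right now, never block.
        return self.state

    def post(self, event):
        # Event mode: the device pushes events; the program
        # handles them whenever it gets around to it.
        self.events.append(event)

dev = Device()
print(dev.request())          # 'button pressed'
print(dev.sample())           # 'up'
dev.post("press")
print(dev.events.popleft())   # 'press'
```

The key difference is who waits: the program (request), nobody (sample), or the event queue (event).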