  HOWTO-framebuffer
  Alex Buell, alex.buell@tahallah.demon.co.uk
  v1.0-pre3, 9 October 1998

  This document describes how to use the framebuffer devices in Linux
  with a variety of platforms including Intel and m68k systems.
  ______________________________________________________________________

  Table of Contents

  1. Contributors

  2. What is a framebuffer device?

  3. What advantages do framebuffer devices have?

  4. Using framebuffer devices on Intel platforms

     4.1 What is vesafb?
     4.2 How do I activate the vesafb drivers?
     4.3 What VESA modes are available to me?
     4.4 Is there an X11 driver for vesafb?
     4.5 Which graphic cards are VESA 2.0 compliant?
     4.6 Can I make vesafb as a module?
     4.7 How do I modify the cursor?

  5. Using framebuffer devices on Atari m68k platforms

     5.1 What modes are available on Atari m68k platforms?
     5.2 Additional suboptions on Atari m68k platforms
     5.3 Using the internal suboption on Atari m68k platforms
     5.4 Using the external suboption on Atari m68k platforms

  6. Using framebuffer devices on Amiga m68k platforms

     6.1 What modes are available for Amiga m68k platforms?
     6.2 Additional suboptions on Amiga m68k platforms
     6.3 Supported Amiga graphic expansion boards

  7. Using framebuffer devices on Macintosh m68k platforms

  8. Using framebuffer devices on PowerPC platforms

  9. Using framebuffer devices on Alpha platforms

     9.1 What modes are available to me?
      9.2 Which graphic cards can work with the framebuffer device?

  10. Using framebuffer devices on SPARC platforms

     10.1 Which graphic cards can work with the framebuffer device?
     10.2 Configuring the framebuffer devices

  11. Using framebuffer devices on MIPS platforms

  12. Using KGICON framebuffer drivers

     12.1 What is KGICON?
     12.2 What can KGICON do that vesafb cannot?
     12.3 What can vesafb do that KGICON cannot?
     12.4 Which hardware does KGICON support?
     12.5 Where can I get KGICON from?
     12.6 How do I install KGICON?
     12.7 Is KGICON going into the kernel?
     12.8 Is there going to be KGICON support for non-Intel platforms?
     12.9 Contact Information


  ______________________________________________________________________

  11..  CCoonnttrriibbuuttoorrss

  Thanks go to the people listed below, who helped improve the
  Framebuffer HOWTO.


  +o  Jeff Noxon jeff@planetfall.com

  +o  Francis Devereux f.devereux@cs.ucl.ac.uk

  +o  Andreas Ehliar ehliar@futurniture.se

  +o  Martin McCarthy marty@ehabitat.demon.co.uk

  +o  Simon Kenyon simon@koala.ie

  +o  David Ford david@kalifornia.com

  +o  Chris Black cblack@cmpteam4.unil.ch

  +o  N Becker nbecker@fred.net

  +o  Bob Tracy rct@gherkin.sa.wlk.com

  +o  Marius Hjelle marius.hjelle@roman.uib.no

  +o  James Cassidy jcassidy@misc.dyn.ml.org

  +o  Andreas U. Trottmann andreas.trottmann@werft22.com

  +o  Lech Szychowski lech7@lech.pse.pl

  +o  Aaron Tiensivu tiensivu@pilot.msu.edu

  +o  Many others too numerous to add, but thanks!

  Thanks to these people listed below who built libc5/glibc2 versions of
  the XF86_FBdev X11 framebuffer driver for X11 on Intel platforms:


  +o  Brion Vibber brion@pobox.com

  +o  Gerd Knorr kraxel@cs.tu-berlin.de

  and of course the authors of the framebuffer devices:


  +o  Martin Schaller - original author of the framebuffer concept

  +o  Roman Hodek Roman.Hodek@informatik.uni-erlangen.de

  +o  Andreas Schwab schwab@issan.informatik.uni-dortmund.de

  +o  Guenther Kelleter

  +o  Geert Uytterhoeven Geert.Uytterhoeven@cs.kuleuven.ac.be

  +o  Roman Zippel roman@sodom.obdg.de

  +o  Pavel Machek pavel@atrey.karlin.mff.cuni.cz

  +o  Gerd Knorr kraxel@cs.tu-berlin.de

  +o  Miguel de Icaza miguel@nuclecu.unam.mx

  +o  David Carter carter@compsci.bristol.ac.uk

  +o  William Rucklidge wjr@cs.cornell.edu

  +o  Jes Sorensen jds@kom.auc.dk


  +o  Sigurdur Asgeirsson

  +o  Jeffrey Kuskin jsk@mojave.stanford.edu

  +o  Michal Rehacek michal.rehacek@st.mff.cuni.edu

  +o  Peter Zaitcev zaitcev@lab.ipmce.su

  +o  David S. Miller davem@dm.cobaltmicro.com

  +o  Dave Redman djhr@tadpole.co.uk

  +o  Jay Estabrook

  +o  Martin Mares mj@ucw.cz

  +o  Dan Jacobowitz dan@debian.org

  +o  Emmanuel Marty core@ggi-project.org

  +o  Eddie C. Dost ecd@skynet.be

  +o  Jakub Jelinek jj@ultra.linux.cz

  +o  Phil Blundell philb@gnu.org

  +o  Anyone else, stand up and be counted. :o)

  Thanks also go to Jon Taylor, who provided the information on the
  KGICON drivers.


  22..  WWhhaatt iiss aa ffrraammeebbuuffffeerr ddeevviiccee??


  A framebuffer device is an abstraction for the graphic hardware. It
  represents the frame buffer of some video hardware, and allows
  application software to access the graphic hardware through a well-
  defined interface, so that the software doesn't need to know anything
  about the low-level details. [Taken from Geert Uytterhoeven's
  framebuffer.txt in the Linux kernel sources]


  33..  WWhhaatt aaddvvaannttaaggeess ddooeess ffrraammeebbuuffffeerr ddeevviicceess hhaavvee??



  Penguin logo. :o) Seriously, the major advantage of the framebuffer
  drivers is that they present a generic interface across all
  platforms. Until late in the v2.1.x kernel development process, the
  Intel platform had console drivers completely different from the
  console drivers for the other platforms. With the introduction of
  v2.1.109 all this changed for the better: console handling became
  more uniform across platforms, true bitmapped graphical consoles
  bearing the Penguin logo appeared on Intel for the first time, and
  console code could be shared between platforms. Note that v2.0.x
  kernels do not support framebuffer devices, though someone may
  someday backport the code from the 2.1.x kernels. The exception to
  that rule is the v0.9.x kernel port for m68k platforms, which does
  include framebuffer device support.


  +o  v0.9.x (m68k) - introduced m68k framebuffer devices.


  +o  v2.1.107 - introduced Intel framebuffer/new console devices and
     added generic support, without scrollback buffer support.

  +o  v2.1.113 - scrollback buffer support added to vgacon.

  +o  v2.1.116 - scrollback buffer support added to vesafb.

  The framebuffer devices have some cool features: you can pass generic
  options to the kernel at boot time, as well as options specific to a
  particular framebuffer device. The generic options are:


  +o  video=xxx:off - disable probing for a particular framebuffer device

  +o  video=map:octal-number - maps the virtual consoles (VCs) to
     framebuffer (FB) devices

  +o  video=map:01 will map VC0 to FB0, VC1 to FB1, VC2 to FB0, VC3 to
     FB1..

  +o  video=map:0132 will map VC0 to FB0, VC1 to FB1, VC2 to FB3, VC3
     to FB2, VC4 to FB0..

  Normally, framebuffer devices are probed for in the order compiled
  into the kernel, but by specifying a video=xxx option you can have a
  specific framebuffer device probed before the others.


  44..  UUssiinngg ffrraammeebbuuffffeerr ddeevviicceess oonn IInntteell ppllaattffoorrmmss

  44..11..  WWhhaatt iiss vveessaaffbb??


  Vesafb is a framebuffer driver for the Intel architecture that works
  with VESA 2.0 compliant graphic cards. It is closely related to the
  other framebuffer device drivers in the kernel.

  vesafb is a display driver that enables the use of graphical modes on
  your Intel platform for bitmapped text consoles. It can also display a
  logo, which is probably the main reason why you'd want to use vesafb
  :o)

  Unfortunately, you cannot use vesafb with VESA 1.2 cards. This is
  because these cards do not support linear frame buffering. Linear
  frame buffering simply means that the system's CPU is able to access
  every bit of the display. Historically, older graphic adapters
  allowed the CPU to access only 64K at a time, hence the limitations
  of the dreadful CGA/EGA graphic modes! Someone may yet write a
  vesafb12 device driver for these cards, but it would use up precious
  kernel memory and involve a nasty hack.

  There is, however, a potential workaround to add VESA 2.0 extensions
  to a legacy VESA 1.2 card. You may be able to download a TSR-type
  program that runs from DOS and, used in conjunction with loadlin, can
  configure the card for the appropriate graphic console modes. Note
  that this will not always work. For example, some Cirrus Logic cards
  such as the VLB 54xx series map their frame buffer to a fixed range
  of memory addresses (for example, within the 15MB-16MB range), which
  precludes their use on systems that have more than 32MB of memory.
  There is a possible way around this: if your BIOS has an option to
  leave a memory hole at the 15MB-16MB range, it might work, although
  stock Linux does not support memory holes. There are patches for this
  option, though [Who has these, and where does one get them from?]. If
  you wish to experiment with this option, there are plenty of TSR-
  style programs available; a prime example is UNIVBE, which can be
  found on the Internet.


  44..22..  HHooww ddoo II aaccttiivvaattee tthhee vveessaaffbb ddrriivveerrss??

  Assuming you are using menuconfig, you will need to do the following
  steps:

  Go into the Code Maturity Level menu, and enable the prompt for
  development and/or incomplete drivers [note this may change for future
  kernels - when this happens, this HOWTO will be revised]

  Go into the Console Drivers menu, and enable the following:


  +o  VGA Text Console

  +o  Video Selection Support

  +o  Support for frame buffer devices (experimental)

  +o  VESA VGA Graphic console

  +o  Advanced Low Level Drivers

  +o  Select Mono, 2bpp, 4bpp, 8bpp, 16bpp, 24bpp and 32bpp packed pixel
     drivers

  VGA Chipset Support (text only) - vgafb - used to be part of the list
  above, but has been removed from it as it is now deprecated and no
  longer supported; it will be removed from the kernel shortly. Use VGA
  Text Console (fbcon) instead. VGA Character/Attributes is only used
  with VGA Chipset Support, and doesn't need to be selected.

  Ensure that the Mac variable bpp packed pixel support is not enabled.
  Linux kernel release v2.1.111 (and 112) seemed to enable this
  automatically if Advanced Low Level Drivers was selected for the first
  time. This no longer happens with v2.1.113.

  Make sure these aren't going to be modules. [Not sure if it's possible
  to build them as modules yet - please correct me on this]

  Then rebuild the kernel, modify /etc/lilo.conf to include the VGA=ASK
  parameter, and run lilo; this is required for you to be able to
  select the modes you wish to use.

  Reboot into the new kernel and, as a simple test, try entering 0301
  at the VGA prompt (this will give you 640x480 @ 256 colours); you
  should see a cute little Penguin logo.

  Once you can see that it's working well, you can explore the various
  VESA modes (see below) and decide on the one you like best, then
  hardwire it into the "VGA=x" parameter in lilo.conf. When you have
  chosen it, look up the decimal equivalent in the tables below and use
  the corresponding decimal number (e.g. for 1280x1024 @ 256 colours,
  use "VGA=775"), and re-run lilo. That's all there is to it. For
  further reference, read the LoadLin/LILO HOWTOs.

  _N_O_T_E_! vesafb does not enable scrollback buffering as a default. You
  will need to pass to the kernel the option to enable it. Use
  video=vesa:ypan or video=vesa:ywrap to activate it. Both does the same
  thing, but in different ways. ywrap is a lot faster than ypan but may
  not work on slightly broken VESA 2.0 graphic cards. ypan is slower
  than ywrap but a lot more compatible. This option is only present in
  kernel v2.1.116 and above. Earlier kernels did not have the ability to
  allow scrollback buffering in vesafb.
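Putting the pieces together, a minimal lilo.conf image stanza might look like the sketch below. The mode 791 and the ywrap option are only examples - pick the values that suit your own card and kernel:

```
image = /vmlinuz
        label  = linux
        root   = /dev/hda1
        vga    = 791                    # 1024x768 @ 16 bit, from the tables below
        append = "video=vesa:ywrap"     # scrollback buffering, v2.1.116+
```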


  44..33..  WWhhaatt VVEESSAA mmooddeess aarree aavvaaiillaabbllee ttoo mmee??

  This really depends on the type of VESA 2.0 compliant graphic card
  that you have in your system, and the amount of video memory
  available. This is just a matter of testing which modes work best for
  your graphic card.

  The following table shows the mode numbers you can enter at the VGA
  prompt (these are the VESA mode numbers plus 0x200, which makes the
  table easier to refer to).


  Colours   640x400 640x480 800x600 1024x768 1280x1024 1600x1200
  --------+-----------------------------------------------------
   4 bits |    ?       ?     0302      ?        ?          ?
   8 bits |  0300    0301    0303     0305     0307      031C
  15 bits |    ?     0310    0313     0316     0319      031D
  16 bits |    ?     0311    0314     0317     031A      031E
  24 bits |    ?     0312    0315     0318     031B      031F
  32 bits |    ?       ?       ?        ?        ?         ?



  For convenience, here is the same table in decimal:


  Colours   640x400 640x480 800x600 1024x768 1280x1024 1600x1200
  --------+-----------------------------------------------------
   4 bits |    ?       ?      770       ?        ?         ?
   8 bits |   768     769     771      773      775       796
  15 bits |    ?      784     787      790      793       797
  16 bits |    ?      785     788      791      794       798
  24 bits |    ?      786     789      792      795       799
  32 bits |    ?       ?       ?        ?        ?         ?



  Key: 8 bits = 256 colours; 15 bits = 32,768 colours; 16 bits = 65,536
  colours; 24 bits = 16.8 million colours; 32 bits = same as 24 bits,
  but the extra 8 bits can be used for other things, and fit perfectly
  with a 32 bit PCI/VLB/EISA bus.

  Additional modes are at the discretion of the manufacturer, as the
  VESA 2.0 specification only defines modes up to 031F (799). You may
  need to do some fiddling around to find these extra modes.
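Since the VGA-prompt numbers are just the "VGA=" values written in hexadecimal, you can convert between the two tables with a shell one-liner (plain printf, nothing vesafb-specific):

```shell
#!/bin/sh
# Convert a hexadecimal VESA mode number (as typed at the VGA prompt)
# into the decimal value used for "VGA=" in lilo.conf.
mode=0x317                 # 1024x768 @ 16 bit
printf '%d\n' "$mode"      # prints 791
```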


  44..44..  IIss tthheerree aa XX1111 ddrriivveerr ffoorr vveessaaffbb??



  Yes, there is. You will need the XF86_FBdev driver if for some reason
  your current X11 driver doesn't like vesafb. Go to
  http://www.xfree86.org, download the X332servonly.tgz archive, unpack
  it, and configure the drivers, following these steps:


  +o  Edit xc/config/cf/xf86site.def, uncomment the #define for
     XF68FBDevServer

  +o  Comment out all references to FB_VISUAL_STATIC_DIRECTCOLOR, as
     these are bogus and aren't used any more.

  +o  Edit xc/programs/Xserver/hw/xfree86/os-support/linux/lnx_io.c, and
     change K_RAW to K_MEDIUMRAW.

  and then build the driver. Don't worry about the m68k references; it
  supports Intel platforms too. Then build the whole thing - it'll take
  a long time, as it's a large source tree.

  Alternatively, if you don't have the time to spare, you can obtain
  the binaries from the sites below. Please note that these are
  'unofficial' builds and you use them at your own risk.

  For libc5, use the one at:


  http://user.cs.tu-berlin.de/~kraxel/linux/XF68_FBDev.gz


  For glibc2, download from these URLs.


  http://user.cs.tu-berlin.de/~kraxel/linux/XF68_FBDev.libc6.gz
  http://pobox.com/~brion/linux/fbxserver.html



  There have been reports that X11 is non-functional on certain graphic
  cards with vesafb enabled; if this happens to you, try the new
  XF86_FBdev driver for X11.

  This driver, together with vesafb, can also run X11 at higher graphic
  resolutions on certain graphic chipsets that are not supported by any
  of the current X11 drivers, e.g. the MGA G-200 et al.

  To configure the XF86_FBdev driver with your X11 system, you'll need
  to edit your XF86Config for the following:


  Section "Screen"
          Driver          "FBDev"
          Device          "Primary Card"
          Monitor         "Primary Monitor"
          SubSection      "Display"
                  Modes           "default"
          EndSubSection
  EndSection



  You'll also need to set XkbDisable in the keyboard section, or invoke
  the XF86_FBDev server with the '-kb' option, so that your keyboard
  works properly. If you forget to set XkbDisable, you will have to put
  the following lines in your .Xmodmap to straighten out the keyboard
  mappings. Alternatively, you can edit your xkb configuration to
  reflect the list below.

  ! Keycode settings required
  keycode 104 = KP_Enter
  keycode 105 = Control_R
  keycode 106 = KP_Divide
  keycode 108 = Alt_R Meta_R
  keycode 110 = Home
  keycode 111 = Up
  keycode 112 = Prior
  keycode 113 = Left
  keycode 114 = Right
  keycode 115 = End
  keycode 116 = Down
  keycode 117 = Next
  keycode 118 = Insert
  keycode 119 = Delete



  You may need to do some fiddling around with this (try copying the
  original screen definition from the X11 driver that you were using
  and changing the driver name to FBDev), but basically this is all you
  need to do to use the vesafb X11 driver.
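If you take the XkbDisable route, the keyboard section of XF86Config would look something like this sketch (XFree86 3.3.x syntax assumed; keep your other keyboard settings as they already are):

```
Section "Keyboard"
        Protocol        "Standard"
        XkbDisable
EndSection
```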

  Hopefully the X11 problems with supported graphic cards will be fixed
  in future releases.


  44..55..  WWhhiicchh ggrraapphhiicc ccaarrddss aarree VVEESSAA 22..00 ccoommpplliiaanntt??

  This lists all the graphic cards that are known to work with the
  vesafb device:


  +o  ATI PCI VideoExpression 2MB (max. 1280x1024 @ 8bit)

  +o  ATI PCI All-in-Wonder

  +o  Matrox Millennium PCI - BIOS v3.0

  +o  Matrox Millennium II PCI - BIOS v1.5

  +o  Matrox Millennium II AGP - BIOS v1.4

  +o  Matrox Millennium G200 AGP - BIOS v1.3

  +o  Matrox Mystique & Mystique 220 PCI - BIOS v1.8

  +o  Matrox Mystique G200 AGP - BIOS v1.3

  +o  Matrox Productiva G100 AGP - BIOS v1.4

  +o  All Riva 128 based cards

  +o  Diamond Viper V330 PCI 4MB

  +o  Genoa Phantom 3D/S3 ViRGE/DX

  +o  Hercules Stingray 128/3D with TV output

  +o  Hercules Stingray 128/3D without TV output - needs BIOS upgrade
     (free from support@hercules.com)

  +o  SiS 6326 PCI/AGP 4MB

  +o  STB Lightspeed 128 (Nvidia Riva 128 based) PCI

  +o  STB Velocity 128 (Nvidia Riva 128 based) PCI

  +o  Jaton Video-58P ET6000 PCI 2MB-4MB (max. 1600x1200 @ 8bit)

  This list covers on-board chipsets found on system motherboards:


  +o  Trident Cyber9397

  +o  SiS 5598

  This list below blacklists graphic cards that don't work with the
  vesafb device:


  +o  TBA


  44..66..  CCaann II mmaakkee vveessaaffbb aass aa mmoodduullee??



  As far as is known, vesafb can't be modularised, although at some
  point the developer of vesafb may decide to modify the sources to
  allow it. Note that even if modularisation were possible, at boot
  time you would not see any output on the display until vesafb is
  modprobed. It's probably a lot wiser to leave it in the kernel, for
  those cases when there are booting problems.


  44..77..  HHooww ddoo II mmooddiiffyy tthhee ccuurrssoorr??


  [Taken from VGA-softcursor.txt - thanks Martin Mares!]

  Linux now has some ability to manipulate the cursor's appearance.
  Normally, you can set the size of the hardware cursor (and also work
  around some ugly bugs in those miserable Trident cards - see #define
  TRIDENT_GLITCH in drivers/char/vga.c). If you enable "Software
  generated cursor" in the system configuration, you can play a few new
  tricks: you can make your cursor look like a non-blinking red block,
  make it the inverse of the background of the character it's over, or
  highlight that character, and still choose whether the original
  hardware cursor should remain visible or not. There may be other
  things I have never thought of.

  The cursor appearance is controlled by a

  <ESC>[?1;2;3c


  sequence where 1, 2 and 3 are parameters described below. If you omit
  any of them, they will default to zeroes.

  Parameter 1 specifies the cursor size (0=default, 1=invisible,
  2=underline, ..., 8=full block), plus 16 if you want the software
  cursor to be applied, plus 32 if you want to always change the
  background colour, plus 64 if you dislike having the background the
  same as the foreground. Highlights are ignored for the last two
  flags.

  The second parameter selects character attribute bits you want to
  change (by simply XORing them with the value of this parameter). On
  standard VGA, the high four bits specify background and the low four
  the foreground. In both groups, low three bits set colour (as in
  normal colour codes used by the console) and the most significant one
  turns on highlight (or sometimes blinking--it depends on the
  configuration of your VGA).

  The third parameter consists of character attribute bits you want to
  set.  Bit setting takes place before bit toggling, so you can simply
  clear a bit by including it in both the set mask and the toggle mask.

  To get a normal blinking underline, use:  echo -e '\033[?2c'
  To get a blinking block, use:             echo -e '\033[?6c'
  To get a red non-blinking block, use:     echo -e '\033[?17;0;64c'
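Rather than hand-encoding the sequence, the three parameters can be composed with printf; a small sketch:

```shell
#!/bin/sh
# Compose the <ESC>[?1;2;3c cursor-control sequence from its parameters.
# size=17: invisible hardware cursor (1) + software cursor flag (16)
# toggle=0: no attribute bits XORed
# set=64:   set the red background bit, giving a red non-blinking block
size=17
toggle=0
set=64
# On a Linux console this is equivalent to: echo -e '\033[?17;0;64c'
printf '\033[?%d;%d;%dc' "$size" "$toggle" "$set"
```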


  55..  UUssiinngg ffrraammeebbuuffffeerr ddeevviicceess oonn AAttaarrii mm6688kk ppllaattffoorrmmss


  This section describes framebuffer options on Atari m68k platforms.


  55..11..  WWhhaatt mmooddeess aarree aavvaaiillaabbllee oonn AAttaarrii mm6688kk ppllaattffoorrmmss??



  Colours   320x200 320x480 640x200 640x400 640x480 896x608 1280x960
  --------+---------------------------------------------------------
   1 bit  |                         sthigh   vga2    falh2   tthigh
   2 bits |                 stmid            vga4
   4 bits | stlow                         ttmid/vga16 falh16
   8 bits |         ttlow                   vga256



  ttlow, ttmid and tthigh are only used by the TT, whilst vga2, vga4,
  vga16, vga256, falh2 and falh16 are only used by the Falcon.

  When used with the kernel option video=xxx, and no suboption is given,
  the kernel will probe for the modes in the following order until it
  finds a mode that is possible with the given hardware:


  +o  ttmid

  +o  tthigh

  +o  vga16

  +o  sthigh

  +o  stmid

  You may specify the particular mode you wish to use if you don't want
  to auto-probe for it. For example, video=vga16 gives you a 4 bit
  640x480 display.


  55..22..  AAddddiittiioonnaall ssuubbooppttiioonnss oonn AAttaarrii mm6688kk ppllaattffoorrmmss


  There are a number of suboptions available with the video=xxx
  parameter:


  +o  inverse - inverts the display so that the background/foreground
     colours are reversed. Normally the background is black, but with
     this suboption it is set to white.

  +o  font - sets the font to use in text modes. Currently you can only
     select VGA8x8, VGA8x16 and PEARL8x8. The default is VGA8x8 if the
     vertical size of the display is less than 400 pixels, otherwise
     VGA8x16.

  +o  internal - a very interesting option. See the next section for
     information.

  +o  external - as above.

  +o  monitorcap - describes the capabilities of a multisync monitor.
     DON'T use this with a fixed sync monitor!


  55..33..  UUssiinngg tthhee iinntteerrnnaall ssuubbooppttiioonn oonn AAttaarrii mm6688kk ppllaattffoorrmmss


  Syntax: internal:(xres);(yres)[;(xres_max);(yres_max);(offset)]

  This option specifies the capabilities of some extended internal
  video hardware, i.e. OverScan modes. (xres) and (yres) give the
  extended dimensions of the screen.

  If your OverScan mode needs a black border, you'll need to supply the
  last three arguments of the internal: suboption. (xres_max) is the
  maximum line length the hardware allows, (yres_max) is the maximum
  number of lines, and (offset) is the offset of the visible part of
  the screen memory from its physical start, in bytes.

  Often, extended internal video hardware has to be activated; for this
  you will need the "switches=*" option. [Note: the author would like
  extra information on this, please. The m68k documentation in the
  kernel isn't clear enough on this point, and he doesn't have an
  Atari! Examples would be helpful too.]
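As a purely illustrative sketch (the author has no Atari to verify these numbers on, so treat them as hypothetical), an OverScan setup with a black border might be given as:

```
video=internal:1280;960;1312;982;0
```

that is, 1280x960 visible, a hardware maximum of 1312x982, and the visible part starting at byte offset 0 of screen memory.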


  55..44..  UUssiinngg tthhee eexxtteerrnnaall ssuubbooppttiioonn oonn AAttaarrii mm6688kk ppllaattffoorrmmss


  Syntax:
  external:(xres);(yres);(depth);(org);(scrmem)[;(scrlen)[;(vgabase)[;(colw)[;(coltype)[;(xres_virtual)]]]]]

  This is quite complicated, so this document will attempt to explain
  it as clearly as possible, but the author would appreciate it if
  someone would give this a look over to make sure he hasn't fscked
  something up! :o)

  This suboption specifies that you have external video hardware (most
  likely a graphic board), and tells Linux how to use it. The kernel is
  basically limited to what it knows of the internal video hardware, so
  you have to supply the parameters it needs in order to use external
  video hardware. There are two limitations: you must switch to the
  mode before booting, and once booted, you can't change modes.

  The first three parameters are obvious: they give the dimensions of
  the screen as pixel width, height and depth. The depth is the number
  of bit planes, giving 2^depth colours; for example, if you want a 256
  colour display, give 8 as the depth. This depends on the external
  graphic hardware, though, so you will be limited by what the hardware
  can do.

  Following on from this, you also need to tell the kernel how the
  video memory is organised; supply a letter as the (org) parameter:


  +o  n - use normal planes, i.e one whole plane after another


  +o  i - use interleaved planes, i.e. 16 bits of the first plane, then
     16 bits of the next plane, and so on. Only the built-in Atari
     video modes use this - there are no graphic cards that support
     this mode.

  +o  p - use packed pixels, i.e. consecutive bits stand for all planes
     of a pixel. This is the most common mode for 256 colour displays
     on graphic cards.

  +o  t - use true colour; this is actually packed pixels too, but does
     not require a colour lookup table as the other packed pixel modes
     do. These modes are normally 24 bit displays, giving 16.8 million
     colours.

  _H_o_w_e_v_e_r, for monochrome modes, the (org) parameter has a different
  meaning


  +o  n - use normal colours, i.e 0=white, 1=black

  +o  i - use inverted colours, i.e. 0=black, 1=white

  The next important item about the video hardware is the base address
  of the video memory. This is given by the (scrmem) parameter as a
  hexadecimal number with a 0x prefix. You will need to find it out
  from the documentation that comes with your external video hardware.

  The next parameter, (scrlen), tells the kernel the size of the video
  memory. If it's missing, it is calculated from the (xres), (yres) and
  (depth) parameters; it's not useful to give a value here these days
  anyway. To leave it empty, give two consecutive semicolons if you
  need to give the (vgabase) parameter; otherwise, just leave it out.

  The (vgabase) parameter is optional. If it isn't given, the kernel
  can't read/write any colour registers of the video hardware, and thus
  you have to set up the appropriate colours before you boot Linux. But
  if your card is VGA compatible, you can give the address where the
  kernel can locate the VGA register set, so that it can change the
  colour lookup tables. This information can be found in your external
  video hardware documentation. To make this clear: (vgabase) is the
  base address, i.e. a 4k aligned address. For reading/writing the
  colour registers, the kernel uses the address range between (vgabase)
  + 0x3c7 and (vgabase) + 0x3c9. This parameter is given in hexadecimal
  and must have a 0x prefix, just like (scrmem).

  (colw) is only meaningful if the (vgabase) parameter is specified. It
  tells the kernel how wide each of the colour registers is, i.e. the
  number of bits per single colour (red/green/blue). The default is 6
  bits, but 8 bits is also common.

  (coltype) is used together with the (vgabase) parameter; it tells the
  kernel about the colour register model of your graphic board.
  Currently, the supported types are vga and mv300; vga is the default.

  (xres_virtual) is only required for ProMST/ET4000 cards, where the
  physical line length differs from the visible length. With the ProMST
  you need to supply 2048, whilst for the ET4000 it depends on the
  initialisation of the video board.
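To tie the parameters together, a hypothetical 640x480 packed-pixel board with a VGA-compatible register set might be described as follows (the addresses here are made up for illustration - take yours from the board's documentation):

```
video=external:640;480;8;p;0xfe000000;;0xfec00000
```

that is, 8 bit depth, packed pixels, screen memory at 0xfe000000, (scrlen) left empty, and (vgabase) at 0xfec00000.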


  66..  UUssiinngg ffrraammeebbuuffffeerr ddeevviicceess oonn AAmmiiggaa mm6688kk ppllaattffoorrmmss




  This section describes the options for Amigas, which are quite
  similar to those for the Atari m68k platforms.


  66..11..  WWhhaatt mmooddeess aarree aavvaaiillaabbllee ffoorr AAmmiiggaa mm6688kk ppllaattffoorrmmss??


  This depends on the chipset used in the Amiga. There are three main
  ones - OCS, ECS and AGA - which all use the colour frame buffer
  device.


  +o  NTSC modes

  +o  ntsc - 640x200

  +o  ntsc-lace - 640x400

  +o  PAL modes

  +o  pal - 640x256

  +o  pal-lace - 640x512

  +o  ECS modes - 2 bit colours on ECS, 8 bit colours on AGA chipsets
     only.

  +o  multiscan - 640x480

  +o  multiscan-lace - 640x960

  +o  euro36 - 640x200

  +o  euro36-lace - 640x400

  +o  euro72 - 640x400

  +o  euro72-lace - 640x800

  +o  super72 - 800x300

  +o  super72-lace - 800x600

  +o  dblntsc - 640x200

  +o  dblpal - 640x256

  +o  dblntsc-ff - 640x400

  +o  dblntsc-lace - 640x800

  +o  dblpal-ff - 640x512

  +o  dblpal-lace - 640x1024

  +o  VGA modes - 2 bit colours on ECS, 8 bit colours on AGA chipsets
     only.

  +o  vga - 640x480

  +o  vga70 - 640x400


  66..22..  AAddddiittiioonnaall ssuubbooppttiioonnss oonn AAmmiiggaa mm6688kk ppllaattffoorrmmss



  These are similar to the Atari m68k suboptions. They are:


  +o  depth - specifies the pixel bit depth.

  +o  inverse - does the same thing as the Atari suboption.

  +o  font - does the same thing as the Atari suboption, although the
     PEARL8x8 font is used instead of the VGA8x8 font if the display is
     less than 400 pixels wide.

  +o  monitorcap - specifies the capabilities of the multisync monitor.
     Do not use with fixed sync monitors.


  66..33..  SSuuppppoorrtteedd AAmmiiggaa ggrraapphhiicc eexxppaannssiioonn bbooaarrddss



  +o  Phase5 CyberVision 64 (S3 Trio64 chipset)

  +o  Phase5 CyberVision 64-3D (S3 ViRGE chipset)

  +o  MacroSystems RetinaZ3 (NCR 77C32BLT chipset)

  +o  Helfrich Piccolo, SD64, GVP ECS Spectrum, Village Tronic Picasso
     II/II+ and IV (Cirrus Logic GD542x/543x)


  77..  UUssiinngg ffrraammeebbuuffffeerr ddeevviicceess oonn MMaacciinnttoosshh mm6688kk ppllaattffoorrmmss


  Currently, the framebuffer device implementation only supports the
  mode selected in MacOS before booting into Linux; it supports 1, 2, 4
  and 8 bit colour modes.

  Framebuffer suboptions are selected using the following syntax:


  video=macfb:<font>:<inverse>



  You can select fonts such as VGA8x8, VGA8x16 and 6x11 etc. The inverse
  option allows you to use reverse video.
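
  For example, to select the 6x11 font together with reverse video (an
  illustrative combination - any of the fonts listed above can be
  substituted), you would boot with:

  ```shell
  video=macfb:6x11:inverse
  ```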


  88..  UUssiinngg ffrraammeebbuuffffeerr ddeevviicceess oonn PPoowweerrPPCC ppllaattffoorrmmss


  The author would love to receive information on the use of
  framebuffers on this platform.


  99..  UUssiinngg ffrraammeebbuuffffeerr ddeevviicceess oonn AAllpphhaa ppllaattffoorrmmss

  99..11..  WWhhaatt mmooddeess aarree aavvaaiillaabbllee ttoo mmee??



  So far, only the TGA PCI card is supported. It provides an 80x30
  console at a resolution of 640x480, at either 8 bits or 24/32 bits.




  99..22..  WWhhiicchh ggrraapphhiicc ccaarrddss ccaann wwoorrkk wwiitthh tthhee ffrraammbbuuffffeerr ddeevviiccee??


  This lists all the graphic cards that are known to work:


  +o  DEC TGA PCI (DEC21030) - 640x480 @ 8 bit or 24/32 bit versions


  1100..  UUssiinngg ffrraammeebbuuffffeerr ddeevviicceess oonn SSPPAARRCC ppllaattffoorrmmss

  1100..11..  WWhhiicchh ggrraapphhiicc ccaarrddss ccaann wwoorrkk wwiitthh tthhee ffrraammeebbuuffffeerr ddeevviiccee??

  This lists all the graphic cards available:


  +o  MG1/MG2 - SBus or integrated on Sun3 - max. 1600x1280 @ mono
     (BWtwo)

  +o  CGthree - Similar to MG1/MG2 but supports colour - max resolution ?

  +o  GX - SBus - max. 1152x900 @ 8bit (CGsix)

  +o  TurboGX - SBus - max. 1152x900 @ 8 bit (CGsix)

  +o  SX - SS10/SS20 only - max. 1280x1024 @ 24 bit - (CGfourteen)

  +o  ZX(TZX) - SBus - accelerated 24bit 3D card - max resolution ?
     (Leo)

  +o  TCX - AFX - for Sparc 4 only - max. 1280x1024 @ 8bit

  +o  TCX(S24) - AFX - for Sparc 5 only - max. 1152x900 @ 24bit

  +o  Creator - SBus - max. 1280x1024 @ 24bit (FFB)

  +o  Creator3D - SBus - max. 1920x1200 @ 24bit (FFB)

  +o  ATI Mach64 - accelerated 8/24bit for Sparc64 PCI only

  There is the option to use the PROM to output characters to the
  display or to a serial console.

  Also, have a look at the Sparc Frame Buffer FAQ at

  http://c3-a.snvl1.sfba.home.com/Framebuffer.html




  1100..22..  CCoonnffiigguurriinngg tthhee ffrraammeebbuuffffeerr ddeevviicceess



  During make config, you need to choose whether to compile in promcon
  and/or fbcon. You can select both, but if you do, you will need to set
  kernel flags to choose between them; fbcon takes precedence if no flag
  is set. If promcon is not compiled in, the console defaults to
  dummycon on boot up; if it is compiled in, the PROM console is used.
  Once the buses have been probed, and fbcon is compiled in, the kernel
  probes for the framebuffers listed above and switches to fbcon. If no
  framebuffer device is found, it falls back to promcon.

  Here are the kernel options:


  video=sbus:options
          where options is a comma separated list:
                  nomargins       sets margins to 0,0
                  margins=12x24   sets margins to 12,24 (default is
                                  computed from resolution)
                  off             don't probe for any SBus/UPA framebuffers
                  font=SUN12x22   use a specific font



  So for example, booting with

   video=sbus:nomargins,font=SUN12x22

  gives you a console size of 96x40 that looks similar to a Solaris
  console, but with colours and virtual terminals just like on the Intel
  platform.

  If you want to use the SUN12x22 font, you need to enable it during
  make config (disable the fontwidth != 8 option). The accelerated
  framebuffers can support any font width between 1 and 16 pixels,
  whilst dumb framebuffers only support 4, 8, 12 and 16 pixel font
  widths.

  It is recommended that you grab a recent consoletools package.


  1111..  UUssiinngg ffrraammeebbuuffffeerr ddeevviicceess oonn MMIIPPSS ppllaattffoorrmmss



  There is no need to change anything for this platform; it is all
  handled for you automatically. Indys in particular are hardwired to
  use a console size of 160x64. However, moves are afoot to rewrite the
  console code for the Indys, so keep an eye on this section.


  1122..  UUssiinngg KKGGIICCOONN ffrraammeebbuuffffeerr ddrriivveerrss

  1122..11..  WWhhaatt iiss KKGGIICCOONN??


  KGICON is a way of using KGI video drivers as framebuffer drivers.
  These drivers act as a bridge layer between the fbcon and KGI
  interfaces. KGI is an acronym for Kernel Graphics Interface, and it is
  part of the GGI project (http://www.ggi-project.org), which is
  concerned with producing a video card driver API and a set of video
  drivers to be used with Linux.  Conceptually, KGI is much like the
  framebuffer device drivers, but there are important differences - one
  difference in particular is that standard framebuffer console drivers
  are monolithic (all the code is contained in a single module), whilst
  KGI drivers are modular - each driver is composed of five subsections
  that are selected separately and linked together at compile-time.
  These subsections are:


  +o  Chipset driver - much like the usual framebuffer console driver,
     which is usually designed around a particular video chipset.

  +o  Clockchip driver - controls the calculation of mode timings, and
     means that the same chipset can be used with a different clockchip
     on certain video cards.

  +o  RAMDAC driver - controls the digital to analogue conversion
     circuitry on video cards. As with clockchips, the same chipset may
     be used with different RAMDACs.

  +o  Acceleration driver - this controls accelerated drawing commands,
     anything from drawing lines, block copying to more complex 3D
     acceleration.

  +o  Monitor driver - consists of mode timing constraints which prevent
     the other subsections (usually the clockchip) from generating mode
     timings that cannot be handled by the monitor.

  This modular architecture allows driver code to be reused, enforces a
  clean separation of functionality, and encourages a consistent code
  layout.


  1122..22..  WWhhaatt ccaann KKGGIICCOONN ddoo tthhaatt vveessaaffbb ccaannnnoott??



  +o  Works on older hardware that doesn't support VESA 2.0.

  +o  Use non-fixed modes.  vesafb, by its nature, is limited to VESA 2.0
     standard resolutions such as 640x480, 800x600, etc.  KGICON can set
     any mode the video hardware is capable of, such as 1024x480,
     896x762, or whatever you want.  The only restriction is that the
     mode must have a legal mode timing for the video hardware that is
     being used and the mode cannot be outside of the monitor driver's
     capabilities.

  +o  Adjust mode timings.  vesafb can only use the mode timings which
     have been preset in the VESA BIOS. KGICON drivers can set any mode
     timings that are legal for the hardware and the monitor.


  1122..33..  WWhhaatt ccaann vveessaaffbb ddoo tthhaatt KKGGIICCOONN ccaannnnoott??



  +o  Be compiled into the kernel and used at boot time. KGICON drivers
     only work as modules right now.  Sorry, no penguin logo here.

  +o  Work on VESA 2.0 compliant hardware for which a KGI driver has not
     been written. The NeoMagic chipsets which are used in many
     notebooks have no KGI driver since the hardware specs are under
     NDA, but since they are VESA 2.0 compliant they can be used with
     vesafb.

  +o  Be smaller and more stable.  vesafb does not have to deal with a lot of
     the complexities that a true video driver does.  This greatly
     reduces the amount of code needed, as well as removing many
     opportunities for bugs to arise.


  1122..44..  WWhhiicchh hhaarrddwwaarree ddooeess KKGGIICCOONN ssuuppppoorrtt??


  Chipsets supported

  +o  Chips & Technologies 655xx

  +o  Cirrus Logic 542x and 546x

  +o  Cyrix MediaGX

  +o  Hercules monochrome

  +o  IBM VGA

  +o  Matrox Millennium I and II and Mystique

  +o  S3 928, 96x, 765 (Trio64V+) and 325 (ViRGE)

  +o  Tseng ET4000 and ET6000

  +o  Western Digital PVGA, wd90c00, wd90c1x, wd90c2x, and wd90c3x

  Most clockchips and RAMDACs that were/are used with any of the
  supported chipsets are also supported.  Acceleration support varies
  widely from card to card, but there is a generic software acceleration
  driver which can be used if a hardware acceleration driver is not
  present.  The acceleration driver is not currently used by KGICON
  anyway, as its functionality does not fit into the fbcon API very
  well.

  Monitor drivers are divided into three categories: monosync, multisync
  and timelist.

  Monosync monitors can only use a fixed set of one or more mode
  timings.  There are three monosync drivers available: MDA (old
  monochrome drivers), VGA and SVGA.  Each allows only the timings which
  are standard for the given hardware.  SVGA standard timings are the
  VESA standard timings.

  Multisync monitors can use any mode timings within specified ranges.
  Each driver corresponds to a particular make and model of monitor,
  since they all have their own unique timing constraints.  A database
  of monitor names and their timing specs is used.

  Timelist monitors are similar to multisync monitors, but they have
  sets of allowable timing ranges rather than one continuous range.
  Some older monitors have this property.


  1122..55..  WWhheerree ccaann II ggeett KKGGIICCOONN ffrroomm??


  KGICON can be obtained from the main GGI CVS source tree.  If you know
  what CVS is, you can use it to fetch the tree anonymously from the GGI
  project's CVS server.  If not, the whole tree is downloadable as one
  .tar.gz archive from the GGI web page at http://www.ggi-project.org.
  Yes, you currently have to download the whole source tree, which
  contains a lot of other stuff besides KGICON. Sorry for the
  inconvenience. If enough people complain, we will try to arrange for
  KGICON to be available separately.


  1122..66..  HHooww ddoo II iinnssttaallll KKGGIICCOONN??


  Untar the archive, and you should have a directory named 'degas'. Each
  release of the GGI project is named after a famous painter; the
  previous release was 'dali'.

  Anyway, if you look in degas/, you will see a directory named
  'kgicon'.  Change to that directory and type 'su -c "make install"'.
  This will create a symlink to the kgicon/include/ directory from
  /usr/src/linux/include/kgi.  If you do not have your kernel sources in
  /usr/src/linux, you will need to make the symlink by hand.
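
  As a sketch of that manual step - shown here in a scratch directory so
  nothing real is touched; in practice substitute your kernel source
  tree (e.g. /usr/src/linux) and the directory where you unpacked degas:

  ```shell
  # Mimic the layout in a scratch directory, then create the symlink
  # that 'make install' would otherwise make.
  scratch=$(mktemp -d)
  mkdir -p "$scratch/degas/kgicon/include" "$scratch/linux/include"

  # The actual step: point <kernel>/include/kgi at kgicon/include/.
  ln -s "$scratch/degas/kgicon/include" "$scratch/linux/include/kgi"
  ls -ld "$scratch/linux/include/kgi"
  ```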

  After you have done this, change directories to kgicon/kgi and do a
  make.  A dialog-based config menu will pop up (yes, dialog is
  required) which will lead you through the process of selecting the
  various subsection drivers.  After this is done and you exit from the
  config system, dependencies will be built and then the driver itself
  will be compiled.

  Assuming that all goes well, you should end up with a file called
  kgicon.o in the kgicon/kgi/ directory.  This is your KGICON video
  driver.  Go on to the next step.  If all did NOT go well (i.e. the
  compile died), retrace your steps and review these instructions to
  make sure you didn't screw something up somewhere along the line.  If
  you still have problems, make notes on what config you are trying to
  use and what errors you are getting (straight error logs are
  preferred) and post an error report to the GGI mailing list.

  Next, you want to sync your disks to minimize filesystem corruption in
  case some bug surfaces and locks your system up.  Of course we hope
  this won't occur and it isn't really all that likely, but video cards
  tend to be particularly nasty about being misprogrammed and it is
  better to be safe than sorry until you have gotten KGICON to work
  reliably for you.

  KGICON comes with an 'insert' shell script in the kgi/ directory that
  will insmod the driver module and call the con2fbmap utility to remap
  your virtual consoles from the old driver (usually vgacon) to the new
  KGICON driver.  By default, all VCs are remapped.  If all goes well,
  you should see your screen flicker and probably change somewhat in
  appearance and font style.  Congratulations, you are up and running
  with KGICON!  Look elsewhere in this FAQ for ways to play with your
  new toy.

  If you are experiencing problems, first make sure that it fails even
  with the standard 'insert' script.  Don't try anything weird until you
  have the insertion working.  If you still have problems, see if you can
  reboot.  If you can, your syslogs may give some clue as to what went
  wrong.  If everything is frozen, just reset.  In either case, collect
  all the pertinent info and syslogs if you could get them and mail a
  bug report to the GGI mailing list.


  1122..77..  IIss KKGGIICCOONN ggooiinngg iinnttoo tthhee kkeerrnneell??


  Not before 2.3, and even then it is still up to Linus.  We wanted to
  try to get it in before 2.2, but it just wasn't ready in time.  The
  big issue was the lack of kernel makefile integration - the KGI
  makefile system is quite different from Linux's.  Work is underway to
  remedy this, but in the meantime it is necessary to use modules.


  1122..88..  IIss tthheerree ggooiinngg ttoo bbee KKGGIICCOONN ssuuppppoorrtt ffoorr nnoonn--IInntteell ppllaattffoorrmmss??


  Hopefully, yes - but so far there is none.  Up until recently, KGI was
  explicitly x86-oriented.  Also, there are a LOT more x86 boxes running
  there than other types, and this was even more true a couple of years
  back when KGI development started.  It is what people had to work
  with.  Another factor is that until the m68k Linux ports were merged,
  the m68k guys had fbcon and didn't need KGI.

  Nevertheless, there is no reason why non-x86 KGI drivers cannot BE
  written.  There are a couple of developers working on PPC and Sparc
  KGI drivers right now.  But it is up to those with access to Linux
  running on non-x86 platforms to step up and volunteer to do the work.
  If you are interested, look on the GGI project website for more
  information or ask on the mailing list.




  1122..99..  CCoonnttaacctt IInnffoorrmmaattiioonn


  GGI project web page: http://www.ggi-project.org


  FAQ author: Jon Taylor taylorj@ecs.csus.edu


  GGI mailing list: ggi-develop@eskimo.com.  See the web page for
  subscription information and list archives.

