Apologies for the shouting but this is important.
When answering a question please:
- Read the question carefully
- Understand that English isn't everyone's first language so be lenient of bad spelling and grammar
- If a question is poorly phrased then either ask for clarification, ignore it, or mark it down. Insults are not welcome
- If the question is inappropriate then click the 'vote to remove message' button
Insults, slap-downs and sarcasm aren't welcome. Let's work to help developers, not make them feel stupid.
cheers,
Chris Maunder
The Code Project Co-founder
Microsoft C++ MVP
For those new to message boards please try to follow a few simple rules when posting your question.
- Choose the correct forum for your message. Posting a VB.NET question in the C++ forum will end in tears.
- Be specific! Don't ask "can someone send me the code to create an application that does 'X'?". Pinpoint exactly what it is you need help with.
- Keep the subject line brief, but descriptive. eg "File Serialization problem"
- Keep the question as brief as possible. If you have to include code, include the smallest snippet of code you can.
- Be careful when including code that you haven't made a typo. Typing mistakes can become the focal point instead of the actual question you asked.
- Do not remove or empty a message if others have replied. Keep the thread intact and available for others to search and read. If your problem was answered then edit your message and add "[Solved]" to the subject line of the original post, and cast an approval vote for the answer or answers that really helped you.
- If you are posting source code with your question, place it inside <pre></pre> tags. We advise you also check the "Encode "<" (and other HTML) characters when pasting" checkbox before pasting anything inside the PRE block, and make sure "Use HTML in this post" check box is checked.
- Be courteous and DON'T SHOUT. Everyone here helps because they enjoy helping others, not because it's their job.
- Please do not post links to your question into an unrelated forum such as the lounge. It will be deleted. Likewise, do not post the same question in more than one forum.
- Do not be abusive, offensive, inappropriate or harass anyone on the boards. Doing so will get you kicked off and banned. Play nice.
- If you have a school or university assignment, assume that your teacher or lecturer is also reading these forums.
- No advertising or soliciting.
- We reserve the right to move your posts to a more appropriate forum or to delete anything deemed inappropriate or illegal.
cheers,
Chris Maunder
The Code Project Co-founder
Microsoft C++ MVP
What happens if a driver developer sends a command to a sound board ( just a random pick) which the board doesn’t recognize/doesn’t know how to handle? Could that cause a crash of the sound board and require a restart?
If the data on the sound board gets corrupted could that make the entire OS unstable?
modified 4-Oct-24 16:21pm.
As always, it depends on the hardware. The response is going to be dictated by the chip the command was sent to, the code running on the chip, any error handling, or any command/response logic, ...
It could throw an invalid command message back to the driver, it could just ignore the command entirely, it could put the chip in a bad state, ...
If you're the one developing the hardware and driver, everything is up to you.
If you're NOT the one who developed the hardware, there are just too many factors you have no control over.
I seriously doubt there's going to be documentation on the hardware sufficient to tell you what will happen.
But the only way to tell is to try it!
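If you do experiment, the defensive shape of the driver side might look like the C sketch below. The register addresses and status bits are invented for illustration - they are not taken from any real sound card - but the pattern (send, then wait a bounded time for either completion or a complaint) is the point:
<pre>
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical memory-mapped registers of an imaginary sound device. */
#define CMD_REG    ((volatile uint32_t *)0xFEDC0000u)  /* write commands here   */
#define STATUS_REG ((volatile uint32_t *)0xFEDC0004u)  /* read device status    */

#define STATUS_BUSY        0x01u   /* device is processing a command   */
#define STATUS_INVALID_CMD 0x02u   /* device rejected the last command */

/* Send a command and wait (bounded) for the device to react. */
static bool send_command(uint32_t cmd)
{
    *CMD_REG = cmd;

    for (int spin = 0; spin < 100000; ++spin) {
        uint32_t status = *STATUS_REG;
        if (status & STATUS_INVALID_CMD)
            return false;               /* device says it doesn't know the command */
        if (!(status & STATUS_BUSY))
            return true;                /* command accepted and completed */
    }
    /* No reaction at all: the chip may be wedged and need a reset. */
    return false;
}
</pre>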
I find that very interesting
modified 4-Oct-24 17:41pm.
That really goes for any software, doesn't it - driver or whatever?
You may consider a driver to be a process, like any other process in the system. All processes should be prepared for arbitrary commands and parameters, and reject invalid ones in a 'soft' way - ignoring them silently or explicitly rejecting them. 'Invalid' includes 'invalid in the current state'.
I am certainly not saying that all processes (including drivers) do handle all sorts of illegal commands and parameters, just that they should, driver or otherwise, regardless of their abstraction level.
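As a toy illustration - the commands and states below are invented, not taken from any real driver - the 'soft' rejection idea looks like this in C: nothing outside the expected set, and nothing valid-but-in-the-wrong-state, can bring the handler down.
<pre>
#include <stdio.h>

enum dev_state { DEV_IDLE, DEV_STREAMING };
enum dev_cmd   { CMD_START = 1, CMD_STOP = 2, CMD_RESET = 3 };

static enum dev_state state = DEV_IDLE;

/* Returns 0 on success, a negative code for anything we don't accept. */
static int handle_command(int cmd)
{
    switch (cmd) {
    case CMD_START:
        if (state != DEV_IDLE)
            return -2;          /* valid command, but invalid in this state */
        state = DEV_STREAMING;
        return 0;
    case CMD_STOP:
        if (state != DEV_STREAMING)
            return -2;
        state = DEV_IDLE;
        return 0;
    case CMD_RESET:
        state = DEV_IDLE;       /* legal in any state */
        return 0;
    default:
        return -1;              /* unknown command: reject, never crash */
    }
}

int main(void)
{
    printf("%d\n", handle_command(CMD_START));  /* 0                            */
    printf("%d\n", handle_command(CMD_START));  /* -2: invalid in current state */
    printf("%d\n", handle_command(42));         /* -1: unknown command          */
    return 0;
}
</pre>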
Religious freedom is the freedom to say that two plus two make five.
Off topic electronics question: does 'digital circuit' mean that it is clock based, i.e. the circuit board functions at a certain clock frequency? A washing machine's digital circuit board functions at a certain clock rate just like a PC motherboard. Is that true?
Digital circuits are not by definition clocked, if by that you mean that there is a central clock setting the speed of "everything". Circuits may be asynchronous, going into a new state whenever there is a change in the inputs to the circuit. Think of a simple adder: You change the value on one of its inputs, and as soon as the voltages have stabilized, the sum of the two inputs is available at the output. This happens as quickly as the transistors are able to do all the necessary switching on and off - the adder doesn't sit down waiting for some 'Now Thou Shalt Add' clock pulse.
You can put together smaller circuits into larger ones, with the smaller circuits interchanging signals at arbitrary times. Think of character based RS-232 ("COM port"): The line is completely idle between transmissions. When the sender wants to transfer a byte, it alerts the receiver with a 'start bit' (not carrying any data), at any time, independent of any clock ticks. This is to give the receiver some time to activate its circuits. After the start bit follow 8 data bits and a 'stop bit', to give the receiver time to handle the received byte before the next one comes in, with another start bit. The bits have a width (i.e. duration) given by the line speed, but not aligned with any external clock ticks.
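To spell that framing out, here is a rough bit-banged transmit routine in C. The two helper functions are placeholders for whatever the hardware actually provides (an output pin and a delay of one bit period, e.g. about 104 microseconds at 9600 baud); the framing itself - start bit low, 8 data bits least-significant-first, stop bit high - is the usual 8N1 scheme.
<pre>
#include <stdint.h>

/* Placeholders: in a real system these would drive an output pin
 * and wait exactly one bit period. */
void set_tx_line(int level);
void wait_one_bit_time(void);

/* Transmit one byte as an 8N1 character frame. */
void uart_send_byte(uint8_t byte)
{
    set_tx_line(0);                 /* start bit: line goes low, carries no data */
    wait_one_bit_time();

    for (int i = 0; i < 8; ++i) {   /* 8 data bits, least significant bit first */
        set_tx_line((byte >> i) & 1);
        wait_one_bit_time();
    }

    set_tx_line(1);                 /* stop bit: line back to idle (high),         */
    wait_one_bit_time();            /* giving the receiver time to handle the byte */
}
</pre>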
Modules within a larger circuit, such as a complete CPU, may communicate partly or fully by similar asynchronous signals. In a modern CPU with caches, pipelining, lookahead of various kinds, ... not everything starts immediately at the tick. Some circuits may have to wait e.g. until the value is delivered to them from cache or from a register: That will happen somewhat later within the clock cycle. For a store, the address calculation must report 'Ready for data!' before the register value can be put out. Sometimes, you may encounter circuits where the main clock is subdivided into smaller time units by a 'clock multiplier' (PCs usually have a multiplier that creates the main clock by multiplying up the pulse rate of a lower frequency crystal; the process can be repeated for even smaller units), but if you look inside a CPU, you should be prepared for a lot of signal lines not going by the central clock.
The great advantage of un-clocked logic is that it can work as fast as the transistors are able to switch: No circuit makes a halt waiting for a clock tick telling it to go on - it goes on immediately.
The disadvantage is that unless you keep a very close eye on the switching speed of the transistors, you may run into synchronization problems: One circuit is waiting for a signal that 'never' arrives. It doesn't arrive on time. Or maybe it arrived far too early, when the receiver was not yet ready to handle it. So asynchronous, non-clocked logic is mostly used in very small realms of the complete circuit (but possibly in lots of realms).
For special purposes, you may build a circuit to do a complete task, all in asynchronous logic. If you are building something that is to interact with other electronics, you will almost always depend on clocking in order to synchronize interchange of signals. So all standard CPUs use clock signals for synchronizing both major internal components and with the outer world. Asynchronous operation is mostly limited to the between-ticks interchanges between the lower layer components.
I think you can assume that you are right about the washing machine circuit board: It almost certainly has a clock circuit (probably a simple RC oscillator - it doesn't need the precision and speed of a clock crystal). The major reason is for communicating with the surroundings in a timely (sic!) manner. Today, chances are very near to 100% that the machine uses an embedded processor for control. This will require a clock for its interface to the world, and most likely for keeping its internal modules in strict lockstep as well.
I would guess (without knowing specific examples) that in application areas such as process control, there is more asynchronous logic, both because the outer world (being controlled) goes at its own pace regardless of any clock, and you have to adapt to that - an external interrupt signal is by definition asynchronous. Also, in some environments, immediate reaction to special events is essential. The speed of asynchronous logic may provide a feedback signal noticeably faster than a clocked circuit that "all the time" must wait for ticks before it will go on.
A historical remark:
You explicitly said digital circuits. In the old days, we had analog computers, handling continuous values, not discrete in either level or time. You couldn't do text editing on such a computer, but if you had developed a set of differential equations for, say, controlling a chemical process, you could program this equation system by plugging together Lego-brick-like modules for integration, derivation, summation, amplifying or damping etc. in a pattern directly reflecting the elements of the differential equations. This setup was completely un-clocked; each "brick" reacted immediately to changes of its inputs. Basic calculus functions were a direct physical consequence of how the brick was composed of capacitors, resistors, maybe transistors and other analog components. One of the best known usage examples at my Alma Mater was a professor running a simulator for cod farming in a fjord: He had a number of potentiometers to adjust the amount of fodder given to the fish, the water temperature, the amount of fresh water running down to the fjord when snow melted in spring, and so on. He could read the effect on the cod population (almost) immediately on the analog meters connected to the outputs.
I never got to try to program an analog computer (I was maybe 3-5 years late, no more) but I had a relative (retired years ago) whose special expertise was in analog computers. He shook his head when digital computers pushed out the last analog ones, around 1980: It will take ages before digital computers can do the job of analog ones. They are not by far fast enough. Besides, if you need to integrate a signal, plugging in an integrator is straightforward. Writing 100 lines of code to do it digitally is not; it is error prone and far abstracted from the real world. Even though changes happen simultaneously in twelve different "bricks", with immediate interactions, you have to do them sequentially one by one on a digital computer, and the mutual interactions never come naturally; you have to devise separate communication channels to exchange them ...
He was forced to switch to digital computers for the second half of his professional life, but he never made friends with them.
Religious freedom is the freedom to say that two plus two make five.
Does a clocked digital circuit board have “cells” resembling CPU registers, something that acts as persistent memory between flashes/waves of current? I’m just trying to visualize how a board you plug into a motherboard works. In the short blackout between cycles the information has to be kept somewhere.
Register-like mechanisms are present in almost all kinds of circuit boards, especially when dealing with interface to external units. I don't think 'cells' is a common term; they are more commonly called registers, occasionally 'latches' (for registers catching input values).
Registers on peripheral interfaces are used for preserving internal values as well, not just values input or output. Quite often, they are addressed in similar ways. With memory mapped IO (which is quite common in modern machines, except for the X86/X64), one address is a straight variable, another address sets or returns control bits or data values in the physical interface circuitry, a third address is a reference to the internal status word of the driver. So which is a 'register', which is a 'variable', which one is actually latching an input or output value? The borders are blurry. Whether you call it a register, a variable, a latch or something else: It is there to preserve value, usually for an extended period of time (when seen in relationship to clock cycles).
When some interface card is assigned 8 word locations (i.e. memory addresses) in the I/O space of a memory mapped machine, don't be surprised to see them referred to as 'registers', although you see them as if they were variable locations. When you address the variables / registers in your program, the address (or lower part of it) is sent to the interface logic, which may interpret it in any way it finds useful. Maybe writing to it will save the data in a buffer on the interface. Maybe it will go to line control circuitry to set physical control lines coming out of the interface. There is no absolute standard; it is all up to the interface.
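In C, accessing such memory mapped registers typically looks like the sketch below. The base address and the register layout are invented purely for illustration; the essential parts are the volatile qualifier (so the compiler does not optimize the accesses away) and the fact that 'register', 'latch' and 'variable' all end up looking like plain memory reads and writes.
<pre>
#include <stdint.h>

/* Hypothetical interface card mapped at this bus address. */
#define CARD_BASE 0xF8000000u

typedef volatile struct {
    uint32_t data;      /* read: latched input value, write: output value */
    uint32_t control;   /* bits that set physical control lines           */
    uint32_t status;    /* internal status word of the interface          */
} card_regs_t;

#define CARD ((card_regs_t *)CARD_BASE)

static void example_access(void)
{
    CARD->control |= 0x1u;          /* raise a (made-up) 'enable' line      */
    uint32_t in = CARD->data;       /* read whatever the card has latched   */
    CARD->data = in ^ 0xFFu;        /* write a value back out               */
    (void)CARD->status;             /* reading status may itself clear bits */
}
</pre>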
Religious freedom is the freedom to say that two plus two make five.
I’m learning a lot from your posts tronderen. Thank you for taking the time to write down thorough explanations. I bet there are other newbies around too who find them useful
I have a question about PC motherboards. What is the purpose of all the chips on a motherboard? Some of them help various slots to function but I’m wondering if there is a hierarchy among them. Is there a top motherboard chip that governs the communication between the CPU and everything else found on the motherboard (slots, drives, ports etc)?
How does the Operating System perform I/O operations? Does it talk directly to the Hard Drive or is the communication mediated by the motherboard? If I want to make my own OS how do I talk with the Hard Drive from Assembly?
modified 4 days ago.
There are almost as many answers to this question as there are motherboards. Or at least as many as there are CPU chip generations.
Over the years, things have changed dramatically. In the very old days, I/O signals could come directly from, or go directly to, the CPU pins. Extra components on the board were essentially for adapting the physical signals - RS-232 signals could go up to +/- 25V, which won't make the CPU chip happy. Gradually, we got support chips that e.g. took care of the entire RS-232 protocol, with timings, bit rate, start/stop bits etc. handled by a separate chip. Similar for the 'LPT' (Line PrinTer) parallel port; it was handled by a separate chip. The CPU usually had a single interrupt line - or possibly two, one non-maskable and one maskable. Soon you would have several interrupt sources, and another MB chip was added: An interrupt controller, with several input lines and internal logic for multiplexing and prioritizing them. Another chip might be a clock circuit. DMA used to be a separate chip. For the 286 CPU, floating point math required a supporting chip (287). Other architectures had the memory management (paging, segment handling etc.) done by a separate chip: Adding the MMU chip to the MC68000 (1979) gave it virtual memory capability comparable to the 386 (1985). Not until the 68030 (1987) was the MMU logic moved onto the main CPU chip.
There were some widespread "standard" support chips for basic things like clocking, interrupts and other basic functions. These were referred to as the chipset for the CPU. We still have that, but nowadays technology allows us to put all the old support functions and then some (quite a few!) onto a single support chip, of the same size and complexity as the CPU itself. Even though it is a single chip, we still call it a 'chipset'. Also, a number of functions essential to the CPU, such as clocking, memory management, cache (which started out being separate chips) were moved onto the CPU chip rather than being included in 'the chipset chip'.
You can view the chipset as an extension of the CPU. You may call it the 'top level' MB chip, if you like. In principle, it could have been merged into the main CPU, but for a couple of reasons, it is not: First, it acts as a (de)multiplexer for numerous I/O-devices, each requiring a number of lines / pins. The CPU already has a huge number of pins (rarely under a thousand on modern CPUs, it can be up to two thousand). The CPU speaks to the chipset over an (extremely) fast connection, where it can send/receive data to/from all I/O devices over one set of lines, leaving to the chipset to distribute/collect data to/from the devices.
A second reason: For a given CPU, there may be a selection of alternate chipsets, varying in e.g. types and numbers of I/O devices supported. For a low-end PC the designer would choose a simple chipset with fewer I/O lines and smaller chip size, while a top range PC would come with a chipset to handle lots of high speed lines and other stuff. Also, a CPU model usually is one in a series, all having the same 1000+ pins, but varying in clock speed, cache size etc. They can all use the same chipset, and the CPU designer can forget about the I/O details, he doesn't have to make 'n' different adaptations of the I/O circuitry, one for each model in the series.
Newer generations of CPUs may differ so much in their communication with the chipset that they require another chipset. Usually, if two CPUs use the same socket, they have compatible chipset interfaces. Since the chipset largely defines which interfaces the MB has, it also defines which copper lines the MB must provide; a given MB is closely tied to a given chipset. So on all PC MBs (at least those I have seen), the chipset is soldered onto the MB; it doesn't sit in a socket like the CPU usually does.
Even though very much of the I/O logic sits in the chipset, electrical properties may need to be handled by external circuitry, which accounts for a significant part of the smaller components you will see on the MB. Some interfaces have such critical timing requirements that the handling must take place right at the external connection. Some protocol aspects, such as forming a physical PCI-E frame, doing bit stuffing and frame filling, and similar physical-level protocol details, are done right at the connector.
The vast majority of the mainboard components are various adaptations to external interfaces - I cannot think of any sort of non-I/O logic function typically found on a mainboard outside the chipset, assuming that you count e.g. reading a SIM card or external TPM chip as I/O. You will also see components related to power supply and fan control, but I do not think of those as 'logic'.
On Intel chips, the interconnection between the CPU and the chipset is called FSB, Front Side Bus. The generic name for the Intel chipsets is Wikipedia: Northbridge[^]. Earlier, the chipset was spread onto two chips (so it was a set!): The Northbridge passed I/O to low-speed devices on to the Southbridge[^]. As you can see from the figure in these two articles, the Southbridge was a subordinate of the Northbridge, and any adaptation logic beyond this is a subordinate to either the Northbridge (high speed) or the Southbridge (low speed). Today, the North- and Southbridge are merged into a single chip, but the Northbridge name seems to stick.
The two Wikipedia articles are worth reading for more detail, but note that some of the information is quite old, e.g. referring to Pentium 4 and iMac G5, 20 years old. Wikipedia: Chipset[^] is a more general article - but even less up-to-date than the two others.
Religious freedom is the freedom to say that two plus two make five.
I’ve had a good time reading that
Calin Negru wrote: How does the Operating System perform I/O operations? Does it talk directly to the Hard Drive or is the communication mediated by the motherboard?
You most definitely go via the motherboard! The OS talks to the chipset, which talks to some I/O-bus - for a disk, it is typically SATA, USB or PCI-e. These present an abstracted view of the disk, making all disks look the same, except for obvious differences such as capacity. At the other (i.e. the disk) end is another piece of logic, mapping the abstract disk onto a concrete, specific one (i.e. mapping a logical address to a surface number, track number, sector number), usually handling a disk cache. In modern disks, the tasks are so complex that they certainly require an embedded processor.
USB and PCI-e are both general protocols for various devices, not just disks. Sending a read request and receiving the data, or sending a write request and receiving an OK confirmation, is very much like sending a message on the internet: The software prepares a message header and message body according to specified standards, and passes them to the electronics. The response (status, possibly with data read) is received similarly to an internet message.
Writing a disk block to a USB disk (regardless of the disk type - spinning, USB stick, flash disk, ...) or writing a document to a USB printer is done in similar ways, although the header fields may vary (but standards such as USB define message layouts for a lot of device classes so that all devices of a certain class use similar fields).
All protocols are defined as a set of layers: The physical layer is always electronics - things like signal levels, bit rates etc. The bit stream is split into blocks, 'frames', with well defined markers, maybe length fields and maybe a checksum for each frame (that varies with protocol); that is the link layer. There may be a processor doing this work, but externally, it appears as if it is hardware. Then, at the network layer, the data field in the frame is filled in with an address at the top, and usually a number of other management fields. For this, some sort of programmed logic (an embedded processor, or dedicated interface logic) is doing the job - but we are still outside the CPU/chipset. The chipset feeds the information to the interface logic, but doesn't address the USB or PCI-e frame as such, or the packet created (within the link frame) by the network layer. Both USB and PCI-e define the network level, identical for all transfers, regardless of type.
Now we come to the contents of the packet, the data field following the packet header. This is where you will first see software proper - the network layer below it is frequently realized in firmware. Here, you construct the transport level message that varies in layout from one device class to another. If you want the disk controller in the other end of the USB cable to understand your commands, you must build a message according to the standard for the USB Mass Storage class. Note that the standards define the semantics of the fields of the packet, not how you construct them - that is up to you.
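For the USB Mass Storage case specifically, that transport level message is a Command Block Wrapper carrying a SCSI command. The C sketch below shows roughly what a READ(10) request looks like before it is handed to the USB stack; it is written from memory of the Bulk-Only Transport layout and assumes a little-endian host, so treat it as an illustration and check the spec before relying on any field.
<pre>
#include <stdint.h>
#include <string.h>

/* Command Block Wrapper of the USB Mass Storage Bulk-Only Transport (31 bytes). */
#pragma pack(push, 1)
typedef struct {
    uint32_t dCBWSignature;          /* 0x43425355 = "USBC" (little-endian)  */
    uint32_t dCBWTag;                /* echoed back in the status wrapper    */
    uint32_t dCBWDataTransferLength; /* bytes expected in the data stage     */
    uint8_t  bmCBWFlags;             /* 0x80 = data flows device-to-host     */
    uint8_t  bCBWLUN;
    uint8_t  bCBWCBLength;           /* length of the SCSI command below     */
    uint8_t  CBWCB[16];              /* the actual SCSI command block        */
} cbw_t;
#pragma pack(pop)

/* Build a CBW asking the device to read 'blocks' sectors starting at 'lba'. */
static cbw_t build_read10(uint32_t tag, uint32_t lba, uint16_t blocks, uint32_t block_size)
{
    cbw_t cbw;
    memset(&cbw, 0, sizeof cbw);
    cbw.dCBWSignature          = 0x43425355u;
    cbw.dCBWTag                = tag;
    cbw.dCBWDataTransferLength = blocks * block_size;
    cbw.bmCBWFlags             = 0x80;       /* we expect data back       */
    cbw.bCBWCBLength           = 10;         /* READ(10) is 10 bytes long */

    cbw.CBWCB[0] = 0x28;                     /* SCSI READ(10) opcode      */
    cbw.CBWCB[2] = (uint8_t)(lba >> 24);     /* logical block address,    */
    cbw.CBWCB[3] = (uint8_t)(lba >> 16);     /* big-endian                */
    cbw.CBWCB[4] = (uint8_t)(lba >> 8);
    cbw.CBWCB[5] = (uint8_t)(lba);
    cbw.CBWCB[7] = (uint8_t)(blocks >> 8);   /* transfer length in blocks */
    cbw.CBWCB[8] = (uint8_t)(blocks);
    return cbw;
}
</pre>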
Also, neither USB nor PCI-e prescribes (by itself) how you transfer your request from your software to the interface, but there are other standards (note the plural 's'!) for that; it usually involves writing and reading control registers in the interface. If your machine provides memory mapped I/O, it appears to you as writing or reading memory locations. Otherwise, you will use special I/O instructions, where the interface register is identified in the instruction. For large data blocks (e.g. a disk page) you will not send the data byte by byte in a long series of instructions; you will operate it like a DMA transfer: You provide the starting address in RAM and the length, telling the interface to go there to fetch (or deliver) the data.
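A sketch of that 'starting address plus length' style of programming, again with an invented register layout (real hardware defines its own, and the buffer address must of course be one the device can reach):
<pre>
#include <stdint.h>

/* Invented register block for a DMA-capable interface. */
typedef volatile struct {
    uint32_t dma_addr;   /* physical RAM address of the buffer      */
    uint32_t dma_len;    /* number of bytes to transfer             */
    uint32_t command;    /* writing here kicks the transfer off     */
    uint32_t status;     /* bit 0: busy, bit 1: error (made up)     */
} dma_if_t;

#define DMA_IF         ((dma_if_t *)0xF8001000u)
#define CMD_READ_BLOCK 0x1u

/* Ask the interface to deliver 'len' bytes into the buffer on its own. */
static void start_block_read(uint32_t phys_buf, uint32_t len)
{
    DMA_IF->dma_addr = phys_buf;       /* where in RAM the data should land       */
    DMA_IF->dma_len  = len;            /* how much to move                        */
    DMA_IF->command  = CMD_READ_BLOCK; /* go - completion arrives as an interrupt */
}
</pre>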
You are still at the level where you transfer raw data blocks in memory to or from raw disk addresses - you have nothing like a file system. If you want to read e.g. a USB memory stick, you must understand how disk blocks are being used to create an exFAT or FAT32 file system (used on the great majority of USB sticks - although not all of them). If you really are going to build your own OS, you will have to write and interpret those exFAT structures in your own code. You may pick up some driver library written by others, but those drivers typically expect to call various low level OS functions for basic operations; your OS is forced to fulfill these expectations. The drivers you pick up usually have a call interface according to an OS standard, so you have to adhere to this standard as well.
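To give a flavour of what writing and interpreting those structures yourself means, for FAT32: the first sector of the volume carries a BIOS Parameter Block whose fields sit at fixed byte offsets. A minimal C sketch pulling out the basics could look like this (the offsets follow the published FAT32 layout, but verify them against the specification before building on them):
<pre>
#include <stdint.h>

/* Read little-endian values out of a raw 512-byte boot sector. */
static uint16_t rd16(const uint8_t *p) { return (uint16_t)(p[0] | (p[1] << 8)); }
static uint32_t rd32(const uint8_t *p) { return p[0] | (p[1] << 8) | ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24); }

typedef struct {
    uint16_t bytes_per_sector;
    uint8_t  sectors_per_cluster;
    uint16_t reserved_sectors;
    uint8_t  num_fats;
    uint32_t sectors_per_fat;
    uint32_t root_dir_cluster;
    uint32_t first_data_sector;
} fat32_geometry;

static fat32_geometry parse_fat32_bpb(const uint8_t sector[512])
{
    fat32_geometry g;
    g.bytes_per_sector    = rd16(sector + 11);
    g.sectors_per_cluster = sector[13];
    g.reserved_sectors    = rd16(sector + 14);
    g.num_fats            = sector[16];
    g.sectors_per_fat     = rd32(sector + 36);   /* FATSz32           */
    g.root_dir_cluster    = rd32(sector + 44);   /* usually cluster 2 */
    g.first_data_sector   = g.reserved_sectors + g.num_fats * g.sectors_per_fat;
    return g;
}
</pre>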
Handling a file system is more than interpreting the directory structures. It is buffering, handling interrupts, enforcing protection and numerous other tasks. Some of it (e.g. user based protection) is defined by the file system standard, so you are bound to create mechanisms for handling users, with authentication and authorization ... Pretty soon you realize that you are in the process of making your own implementation of an already existing OS
A PC (or other type of computer, except for an embedded processor with all code in flash/ROM) must be able to boot up the machine from a mass storage device. This doesn't imply that it knows all the different file system structures: There are only a small handful of different ways of organizing the first blocks on a disk - BIOS defines one format, UEFI another, regardless of OS. The BIOS/UEFI loads into RAM the boot sector of the disk, and transfers control to this code, without worrying about either the file system or the OS. The boot sector code knows the file system it was fetched from, and will know where to find more file system/OS specific code on the disk that will load the entire OS. None of this is done by the BIOS/UEFI itself, so you cannot expect to "borrow"(/steal) file system access code from the BIOS.
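The legacy (BIOS/MBR) variant of that 'first blocks' format is simple enough to check by hand: the boot sector is 512 bytes, ends in the signature bytes 0x55 0xAA, and carries four 16-byte partition table entries starting at offset 446. A tiny sanity check in C:
<pre>
#include <stdint.h>

/* A legacy (MBR/BIOS) boot sector must end in 0x55, 0xAA,
 * otherwise the BIOS will not treat the disk as bootable. */
static int looks_bootable(const uint8_t sector[512])
{
    return sector[510] == 0x55 && sector[511] == 0xAA;
}

/* Byte 4 of each 16-byte partition entry (starting at offset 446)
 * holds the partition type. */
static uint8_t partition_type(const uint8_t sector[512], int n /* 0..3 */)
{
    return sector[446 + 16 * n + 4];
}
</pre>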
The BIOS/UEFI doesn't have to know the file system catalogs etc. Today, they often do, to a certain degree, usually for providing non-essential functions in their own user interface (e.g. loading background images). If you come with a disk with a different file system, these functions will not be available, even though the BIOS/UEFI may be able to boot an OS from the disk. If the disk has user files only (no OS), maybe your installed OS has functions for interpreting a foreign file system, but that requires the drivers from this OS, which are not available if you make your own OS.
Religious freedom is the freedom to say that two plus two make five.
What does driver development look like? What language do they use? Do they use machine code? Physically the exchange between the motherboard and an extension board is raw bits. Do they convert that to human readable numbers when they write drivers?
Is a machine instruction a 32 bit word/variable?
And yet another question, about processor machine instructions. The processor's ALU deals with true numbers, not machine instructions. The type of operation (addition, subtraction etc.) might be a machine instruction but everything else is just numbers. There are other areas of the processor that are mostly machine instruction oriented. Is that how it works?
modified 2 days ago.
In principle, a driver is very much like another method / function / routine or whatever you call it. It may be written in any language. For all practical purposes, that is any compiled language - for performance/size reasons, interpreted languages are obviously not suited.
There is one requirement for bottom level drivers: The language must, one way or the other, allow you to access interface registers, hardware status indicators etc. If you program in assembler (machine code), you have all facilities right at hand. In the old days, all drivers were written in assembler, without the ease of programming provided by medium/high level languages for data structures, flow control etc.
So from the 1970s, medium level languages came into use, providing data and flow structures, plus mechanisms for accessing hardware - e.g. by allowing 'inline assembly': Most commonly, a special marker at the start of a source code line told the compiler that this line is not a high level statement but an assembly instruction. Usually, variable names in the high level code are available as defined symbols for the assembler instructions, but you must know how to address them (e.g. on the stack, as a static location etc.) in machine code.
The transition to medium/high level languages started in the late 1970s / early 1980s. Yet, for many architectures / OSes, with all the old drivers written in assembler, it was often difficult to introduce medium/high level languages for new drivers. Maybe there wasn't even a suitable language available for the given architecture/OS, of which there were plenty in those days. So for established environments, assembler prevailed for many years. I guess that some drivers are written in assembler even today.
If the language doesn't provide inline assembler or equivalent, you may write a tiny function in assembler to be called from a high level language. Maybe the function body is a single instruction, but the 'red tape' for handling the calling convention makes up a dozen instructions. So this is not a very efficient solution, but maybe the only one available. Some compilers provide 'intrinsics': Those are function-looking statements in the high level language, but the compiler knows them and does not generate a function call, but a single machine instruction (or possibly a small handful) right in the instruction flow generated from the surrounding code. E.g. in the MS C++ compiler for ARM, you can generate the vector/array instructions of the CPU by 'calling' an intrinsic with the name of the instruction (it does not generate a call sequence!). If the compiler provides a large set of intrinsics, you may be able to do a lot of driver tasks without having to go to assembler, but probably not all of them. A handful of inline assembler instructions must be expected, for directly accessing hardware registers.
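As a concrete example of wrapping a single privileged instruction: on x86, port I/O needs the in/out instructions, for which C has no operator. With GCC/Clang style inline assembly, a driver typically wraps them like this (a common idiom, shown here as a sketch rather than production code):
<pre>
#include <stdint.h>

/* Write one byte to an x86 I/O port (the 'outb' instruction). */
static inline void outb(uint16_t port, uint8_t value)
{
    __asm__ volatile ("outb %0, %1" : : "a"(value), "Nd"(port));
}

/* Read one byte from an x86 I/O port (the 'inb' instruction). */
static inline uint8_t inb(uint16_t port)
{
    uint8_t value;
    __asm__ volatile ("inb %1, %0" : "=a"(value) : "Nd"(port));
    return value;
}
</pre>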
So I guess that for all new projects after around 1980, the majority of drivers were written in a medium/high level language. The best known example is C (let us postpone the discussion whether original C is a 'high level language' - some people call it 'machine independent assembler'), but there are several other examples, usually proprietary.
For output, you could in principle (and in DOS or other primitive OSes) write the driver as an ordinary user function, directly called from your application. In modern OSes/architectures, addressing the registers on the physical interface requires that you are executing in privileged mode, which your app is not. Interrupt handlers execute in privileged mode. So your application makes a programmed interrupt (a.k.a. supervisory call, monitor call, ...) and the driver is installed as an interrupt handler. A handler is really nothing special, it is just a method that is activated when a given interrupt occurs.
For input - and that includes return status after an output operation - the input part of the driver must also be installed as an interrupt handler, activated when the hardware input line generates an interrupt. The input driver code must read the status register on the interface to identify the source / interrupt reason, and when appropriate, fetch the input values from registers and store them in ordinary RAM. Aside from the interface register access, the interrupt handler is very much like any other method. The way it is activated may differ from the way an application would call a method. Many medium/high level language compilers let you prefix a method declaration with e.g. 'interrupt', and the compiler will generate the appropriate method entry code. Usually, all interrupt handlers have identical argument lists: The interrupt code, interrupt source, ... This is architecture dependent.
(You "can", and it wasn't uncommon in the DOS days, write an input driver without requiring interrupts: You could write a busy loop to poll the status register of a device continuously, and when a bit is set indicating 'input data available', you read it from the appropriate register. This keeps the CPU constantly busy, which is OK if it has nothing else to do, e.g. in an embedded system. For a machine with an ordinary user interface, it is less suitable.)
Drivers can be developed in any standard development environment. Dedicated driver development tools may of course provide a library of (quite primitive) functions, so that you don't have to program them yourself. Their main contribution is not for the writing of the driver code, but for preparing the installation of a driver. OSes (above the primitive level) are very restrictive about installing privileged code, anything that runs as an interrupt handler. So to make sure that no malware drivers are installed, the OS may e.g. require that the code be signed by some trustworthy authority that guarantees that the code will not abuse its privileges as a privileged interrupt handler.
Note that 'driver' is far from a well defined term! Much of the above applies primarily to the very low level drivers, those directly addressing the hardware. In principle, the driver could hand over every byte or data block individually to the application. Rather, it hands them over to a higher level driver that e.g. interprets the data block as a file system fragment index, and makes a new request to the disk driver to fetch one specific block of that fragment. When it arrives, it is handed over to a driver on an even higher level. This driver interprets the block as directory entries, and so on, in multiple levels.
A driver architecture defines the order of the levels, or stated in another way: The order in which you do things. First, you collect the pieces that are to go into a disk block ("buffering"). Then you identify the right open file descriptor. Then you determine the block's location within the file. Then the location of the block on the disk. And so on. Some of it is obvious: You cannot determine the location on disk before you have determined the position in the file. For other functions, you may have a choice. E.g. after a read, how high up will you hand the block before checking the error detection/correction bits?
I saw a figure of a driver architecture of no less than 32(!) levels. In most cases, the majority of the levels were empty. But if you want to implement some functionality (say, disk level encryption), it has its place in one given level in the architecture: You can expect that for outgoing data, all the operations defined at higher levels have been performed on the data, but none of the operations at lower levels, and the other way around for incoming data. This holds regardless of which driver layers are defined (and how they are implemented) above and below the new functionality. Note that the order of operations is a question of design/choice; lots of it is not defined by any law of nature. Different OSes may have significantly different ways of ordering their driver stacks!
When a method in your application receives the file data block and interprets it as a set of data records, it really is a direct extension of the stack of driver levels. Somewhere in that stack you'll find the interrupt mechanism for going into privileged mode, and you could call that the borderline between 'driver levels' and non-driver code. For a given task, it isn't always obvious whether it should be taken care of by drivers or by higher levels. E.g. collecting a sequence of smaller output elements into a buffer, so it can be written to disk in a single operation, is done both at unprivileged level (e.g. in the C library) and in privileged file system code.
Higher level drivers, not accessing hardware directly, are today almost exclusively written in (medium/) high level languages. There is no need for assembler coding.
Religious freedom is the freedom to say that two plus two make five.
Calin Negru wrote: Is a machine instruction a 32 bit word/variable?
This is something I have been fighting since my student days!
What resides in a computer isn't "really" numbers. Or characters. Or instructions. Saying that an alphabetic character "really is stored as a number inside the machine" is plain BS!
RAM, registers and whatever else hold bit patterns, period. Not even zeroes and ones, in any numeric sense. It is charged/uncharged. High/low voltage. High/low current. On/off. Not numbers.
When a stream of bits comes out of the machine, we may have a convention for presenting e.g. a given sequence of bits as the character 'A'. That is a matter of presentation. Alternately, we may present it as the decimal number 65. This is no more a "true" presentation than 'A'. Or a dark grey dot in a monochrome raster image. If we have agreed upon the semantics of a given byte as an 'A', claiming anything else is simply wrong. The only valid alternative is to treat the byte as an uninterpreted bit string. And that is not as a sequence of numeric 0 and 1, which is an (incorrect) interpretation.
A CPU may interpret a bit sequence as an instruction. Presumably, this is also the semantics intended by the compiler generating the bit sequence. The semantics is that of, say, the ALU adding two registers - the operation itself, not a description of it. You may (rightfully) say: "But I cannot do that operation when I read the code". So for readability reasons, we make an (incorrect) presentation, grouping bits by 4 and showing as hexadecimal digits. We may go further, interpreting a number of bits as an index into a string table where we find the name of the operation. This doesn't change the bit sequence into a printable string; it remains a bit pattern, intended for the CPU's interpretation as a set of operations.
So it all is bit patterns. If we feed the bit patterns to a printer, we assume that the printer will interpret them as characters; hopefully that is correct. If we feed bit patterns to the CPU, we assume that it will interpret them as instructions.
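You can see the 'same bits, different presentations' point directly in C: the byte below is just one pattern; printf merely chooses how to present it (as a character, a decimal number, hexadecimal digits, or the individual bits).
<pre>
#include <stdio.h>

int main(void)
{
    unsigned char b = 0x41;   /* one bit pattern: 0100 0001 */

    printf("as a character : %c\n", b);    /* 'A' under the ASCII convention */
    printf("as a decimal   : %u\n", b);    /* 65                             */
    printf("as hexadecimal : %02X\n", b);  /* 41                             */

    printf("as raw bits    : ");
    for (int i = 7; i >= 0; --i)
        putchar((b >> i) & 1 ? '1' : '0');
    putchar('\n');
    return 0;
}
</pre>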
Usually, we keep those bit patterns that we intend to be interpreted as instructions by a CPU separate from those bit patterns we intend to be interpreted as characters, integer or real numbers, sound or images. That is mostly a matter of orderliness. And we cannot always keep a watertight bulkhead between those bit patterns intended for text or number interpretation, and those intended for instruction interpretation. Say, a compiler that produces the instruction bit patterns: To the compiler, the bit patterns are data to be written to disk, just like another program writes text or numbers. The bit patterns are not actually interpreted according to the intention until they are loaded into a CPU. But all the time, claiming that they are something else, is wrong.
I am happy to have grown up with machines so simple that we could see from the instruction word: That bit activates the adder, that bit opens the gate to the destination register, that bit opens the gate from memory to the program counter to make a jump to that address, ... This is to push it a little, but our lab when we were programming a 2901 bit slice processor (only the seniors know what '2901' and 'bit slice' were) was very close to this.
Is a machine instruction a 32 bit word/variable?
An instruction is an interpretation of a bit pattern. The pattern may be 32 bits, it may be 8, 16, or a variable number of bits, and the interpretation may reference other bit patterns as 'operands'. As you suggest: The bit pattern interpreted as an instruction is stored in the same way as other bit patterns that may have interpretations e.g. as variables.
The type of operation (addition, subtraction etc.) might be a machine instruction but everything else is just numbers.
The CPU will interpret a bit pattern as an instruction, and progress to the sequentially following bit pattern and interpret it in a similar way - unless the semantics of the first instruction implied that the CPU should continue interpretation from another location in memory.
Hopefully, all the bit patterns the CPU tries to interpret as instructions are also intended as instructions. The same goes for all other RAM contents: Hopefully, everything intended as text is interpreted as text, everything intended to be an image is interpreted as an image. And nothing else.
Religious freedom is the freedom to say that two plus two make five.
Thank you guys. I’m going to stop bugging you with my questions, at least for now.
Calin Negru wrote: I’m going to stop bugging you with my questions, at least for now.
Don't worry! It is nice having someone asking questions, so that the responder is forced to straighten things out in his head in a way that makes them understandable.
As long as you can handle somewhat lengthy answers, it is OK with me!
When you get around to ask questions about networking, there is a risk that I might provide even longer and a lot more emotional answers.
I am spending time nowadays to straighten out why the Internet Protocol has p**ed me off for 30+ years! When I do that kind of thing, I often do it in the form of a lecturer or presenter who tries to explain ideas or principles, and must answer questions and objections from the audience. So I must get both the ideas and principles right, and the objections and 'smart' questions. That is really stimulating - trying to understand the good arguments for why IP, say, was created the way it was. (It has been said that Albert Einstein, when he as a university professor got into some discussion with graduate students, and of course won it, sometimes told the defeated student: OK, now you take my position to defend, and I will take yours! ... and again, Einstein won the discussion. If it isn't true, it sure is a good lie!)
Religious freedom is the freedom to say that two plus two make five.
>This is something I have been fighting
I know it’s an important problem. If you don’t understand that, it’s like having a car with doors that don’t close properly.
I believe way back when there was a graphics card for either the Apple II or IBM-PC, before VGA, that if used incorrectly would destroy the CRT it was hooked to.
There were video cards a long time ago - that is 40 to 50 years - where you did not select the scan rate from a table of predefined horizontal and vertical frequencies, but you set the frequencies by absolute values. If you set one of them to zero (usually by forgetting to set it, or because setting it failed for some reason and you didn't handle it immediately), the electron beam wouldn't be moving at all but remain steady in one position. If you set one but not the other, the electron beam would continuously redraw a single scan line or column. The phosphor on the screen surface is not prepared for being continually excited at full strength, and it would "burn out", lose its capacity to shine. So you would have a black spot, or a black horizontal or vertical line across the screen. This damage was permanent and could not be repaired.
This is so long ago that I never worked with, or even saw, a video card that allowed me to do this damage to my screen. I do not know what that standard (if it was a standard) was called. My first video cards were 640x480 VGA, and the marketing heavily stressed that you could not destroy your screen whatever you sent to it. So the memory of these 'dangerous' video cards was still vivid in the 1980s (but I do believe that VGA's predecessor, EGA, was also 'safe').
Related to this "burn out" danger was the "burn in": Everyone who had a PC in the 1990s remembers the great selection of screen savers back then: After a few minutes of idleness, a screen saver would kick in, taking control of the screen, usually with some dynamically changing, generated graphics. Some of them were great - I particularly remember the plumbing, where every now and then a teapot appeared in the joints: We could sit for minutes waiting to see if any teapot would pop up. These screen savers did save your screen: If you left your PC and screen turned on when leaving your office (or for your home PC, when going to bed) with the background desktop shining, the bright picture elements in icons, status line or whatever, would slowly "consume" the phosphor in those spots. After some weeks or months, when you slid a white background window over those areas, you might see a shadow of your fixed location icons and other elements in what should have been a pure white surface. The image of the desktop was "burnt into" the screen.
Note that this has nothing to do with your display card, driver or other software: The signal to the screen is that of a pure white surface; the screen is incapable of displaying it as such. But replace your screen with another one, and the "ghost images" will not be there.
So screen savers were devised to change the image all the time, so that there wouldn't be a fixed image to burn into the screen. The phosphor wear is evenly distributed over the entire screen surface. Of course: A screen saver displaying an all black surface would have caused no wear anywhere, but for years, we had a screen saver craze where no one would admit to not having a fancy screen saver to prevent burn-ins! Today, the driver/video logic can even turn our screen off completely, but that is a fairly new thing that didn't exist or at least wasn't common in the 1990s.
Burn-in was a problem restricted to CRT displays; LCD screens are immune to burn-in - they do not use phosphor to generate light. (Well ... LEDs do, but in an LCD screen it is not pixel-by-pixel as in the old CRTs.) So even though some people still have screen savers running, they are not required for saving the screen.
Religious freedom is the freedom to say that two plus two make five.
Does the BIOS function like an operating system? It captures keyboard input, has the ability to display characters on screen, and on my laptop it even displays animations. All this is done without using drivers. What does the battery on the motherboard do? Does it help keep the BIOS always loaded in memory or is the BIOS booted from permanent memory when you turn on the PC?
The BIOS uses the processor to function, hence it must be written in assembly. Is that how it works?
modified 28-Sep-24 17:15pm.