|
Apologies for the shouting but this is important.
When answering a question please:
- Read the question carefully
- Understand that English isn't everyone's first language so be lenient of bad spelling and grammar
- If a question is poorly phrased then either ask for clarification, ignore it, or mark it down. Insults are not welcome
- If the question is inappropriate then click the 'vote to remove message' button
Insults, slap-downs and sarcasm aren't welcome. Let's work to help developers, not make them feel stupid.
cheers,
Chris Maunder
The Code Project Co-founder
Microsoft C++ MVP
|
|
|
|
|
For those new to message boards, please try to follow a few simple rules when posting your question.
- Choose the correct forum for your message. Posting a VB.NET question in the C++ forum will end in tears.
- Be specific! Don't ask "can someone send me the code to create an application that does 'X'". Pinpoint exactly what it is you need help with.
- Keep the subject line brief, but descriptive. e.g. "File Serialization problem"
- Keep the question as brief as possible. If you have to include code, include the smallest snippet of code you can.
- Be careful when including code that you haven't made a typo. Typing mistakes can become the focal point instead of the actual question you asked.
- Do not remove or empty a message if others have replied. Keep the thread intact and available for others to search and read. If your problem was answered, edit your message and add "[Solved]" to the subject line of the original post, and cast an approval vote for the answer or answers that really helped you.
- If you are posting source code with your question, place it inside <pre></pre> tags. We advise you also check the "Encode "<" (and other HTML) characters when pasting" checkbox before pasting anything inside the PRE block, and make sure "Use HTML in this post" check box is checked.
- Be courteous and DON'T SHOUT. Everyone here helps because they enjoy helping others, not because it's their job.
- Please do not post links to your question into an unrelated forum such as the lounge. It will be deleted. Likewise, do not post the same question in more than one forum.
- Do not be abusive, offensive, inappropriate or harass anyone on the boards. Doing so will get you kicked off and banned. Play nice.
- If you have a school or university assignment, assume that your teacher or lecturer is also reading these forums.
- No advertising or soliciting.
- We reserve the right to move your posts to a more appropriate forum or to delete anything deemed inappropriate or illegal.
cheers,
Chris Maunder
The Code Project Co-founder
Microsoft C++ MVP
|
|
|
|
|
What happens if a driver developer sends a command to a sound board (just a random pick) which the board doesn't recognize or doesn't know how to handle? Could that cause a crash of the sound board and require a restart?
If the data on the sound board gets corrupted could that make the entire OS unstable?
|
|
|
|
|
As always, it depends on the hardware. The response is dictated by the chip the command was sent to, the code running on that chip, any error handling, any command/response logic, ...
It could throw an invalid-command message back to the driver, it could ignore the command entirely, it could put the chip in a bad state, ...
If you're the one developing the hardware and driver, everything is up to you.
If you're NOT the one who developed the hardware, there are just too many factors you have no control over.
I seriously doubt there's going to be documentation on the hardware sufficient to tell you what will happen.
But the only way to tell is to try it!
|
|
|
|
|
I find that very interesting
|
|
|
|
|
That really goes for any software, doesn't it - driver or whatever?
You may consider a driver to be a process, like any other process in the system. All processes should be prepared for arbitrary commands and parameters, and reject invalid ones in a 'soft' way - ignoring them silently or explicitly rejecting them. 'Invalid' includes 'invalid in the current state'.
I am certainly not saying that all processes (including drivers) do handle all sorts of illegal commands and parameters, just that they should, driver or otherwise, regardless of their abstraction level.
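The 'soft' rejection described above can be sketched in a few lines of C++. This is a toy model, not real driver code: the opcodes, states and return values are invented for illustration, not taken from any real device's datasheet. The point is simply that the handler validates both the command and the current state before acting, and returns an error instead of wandering into a bad state.

```cpp
#include <cstdint>

// Hypothetical device command set -- invented for illustration only.
enum class Cmd : uint8_t { Reset = 0x00, Play = 0x01, Stop = 0x02 };
enum class State { Idle, Playing };
enum class Result { Ok, UnknownCommand, InvalidInState };

struct Device {
    State state = State::Idle;

    // Reject anything unknown, or anything not valid in the current state,
    // instead of letting it put the device in a bad state.
    Result handle(uint8_t raw) {
        switch (static_cast<Cmd>(raw)) {
        case Cmd::Reset:
            state = State::Idle;
            return Result::Ok;
        case Cmd::Play:
            if (state != State::Idle) return Result::InvalidInState;
            state = State::Playing;
            return Result::Ok;
        case Cmd::Stop:
            if (state != State::Playing) return Result::InvalidInState;
            state = State::Idle;
            return Result::Ok;
        default:
            // 'Soft' rejection: report the problem, change nothing.
            return Result::UnknownCommand;
        }
    }
};
```

Whether a real chip does anything like this is, as noted above, entirely up to whoever designed it.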
Religious freedom is the freedom to say that two plus two make five.
|
|
|
|
|
Does bios function like an operating system? It captures keyboard input, has the ability to display characters on screen, on my laptop it even displays animations. All this is done without using drivers. What does the battery on the motherboard do? Does it help keep the bios always loaded in memory or is the bios booted from permanent memory when you turn on the PC?
Bios uses the processor to function hence it must be written in assembly, is that how it works?
|
|
|
|
|
You can consider it a very simple OS, lacking a lot of the functions you expect today. Some primitive drivers are part of this "OS". These are mainly for getting the 'real' OS loaded into memory and give it control - often referred to as 'bootstrapping'. In principle, you could have the BIOS load your program rather than the real OS, but most likely, your program would request services that the BIOS doesn't provide.
In the old days (DOS), applications relied on the drivers that are part of the BIOS to handle the few devices that the BIOS has drivers for, such as the keyboard and character-based output to a screen or printer. Executing driver code out of the BIOS ROM was slow, so PCs started to copy the BIOS code to RAM at startup to run it faster. Often, the BIOS code was limited and couldn't utilize all the functions of the peripheral, so OSes started providing their own code to replace the BIOS drivers.
Today, the OS has its own drivers for 'everything', so those provided by the BIOS are used only for the bootstrapping process. Even though execution out of the BIOS ROM is slow, those drivers are used for such a brief time that it doesn't really matter much. I doubt that any modern motherboard would care to copy the BIOS drivers from ROM to RAM for speedup, the way they did in the old days. Note that in the old days, those drivers were used all the time; the OS didn't have a better replacement driver. So then it made much more sense to copy to RAM than it does today.
When you boot up, the OS isn't there yet, so you need something to read the disk, floppy, USB stick or whatever medium you keep your OS on. If your OS is on a medium for which your BIOS doesn't have a driver (say, a tape cassette), you may be lost - unless your BIOS has a driver for, say, a floppy drive, and you can load the tape station driver from the floppy, and use that driver (loaded to RAM) to load the real OS from the tape. (This is not very common, though.) We had USB for years before we got BIOSes with drivers for the USB interface. During those years, you could not boot your OS from a USB stick the way you can today. Even before that, we had the same issue with CD/DVD drives: the BIOS didn't have CD drivers, so the CD/DVD drive was useless until the OS had been loaded with its CD drivers.
The mainboard battery: Flash is a more recent invention than the PC. In the old days, the data area used by the BIOS, holding e.g. the order in which to try the various boot devices, was held in what is called CMOS, an extremely low-power, but not very fast memory technology. Functionally, it was used like flash is used today, but even if it drew almost no current, it was dependent on a certain voltage to keep the state intact. (The C in CMOS is for 'Complementary', indicating two transistors blocking each other, neither of them carrying any current to talk of. But if one of them lets go of its blocking, the house of cards falls down.) I would think that recent motherboards have replaced CMOS with flash, so they will not lose information when the battery is replaced.
The battery has a second function: The motherboard has a digital clock, running even when the power is turned off, the mains cable unplugged, and for a portable, the main battery is empty. This cannot be replaced by any battery-less function. If you have to replace the mainboard battery, expect the clock to be reset to zero. Even if the BIOS makes use of a flash for storing setup parameters, the battery is needed for the clock.
Sure, the BIOS uses the CPU. Or, I'd rather say it the other way around: the CPU uses the BIOS as its first program to run at startup. All CPUs, from the dawn of computers, fetch their first instruction from a fixed address (00000...000 is a viable candidate, but some CPUs differ). That is where you put the BIOS. The BIOS is the first set of instructions executed by the CPU. You could say that it is much like any other program. In principle, it could be written in any language, but its tasks are so down-to-physical-hardware that it very often is written in assembly - at least the initial part of it, setting up the registers, interrupt system, memory management. When that is done, it may call routines written e.g. in C for things like handling the user dialog to set up the boot sequence, report the speed of the fans and all the other stuff that modern BIOSes do today. (Mainboards of today call their initialization code UEFI rather than BIOS, but their primary functions are the same.)
A computer doesn't have to have a BIOS. One of the first machines I programmed did not. When powered on, the PC register was set to 0 and the PC halted. The front panel had 16 switches; the instructions were 16 bit wide. So I flipped the switches to the value of the first instruction and pressed 'Deposit'. This stored the switch positions at address 0 and advanced the PC register to 1. I flipped switches for the next instruction; Deposit stored it at address 1 and advanced to address 2. The mini-driver for the paper tape reader was 15-20 instructions long. Consider that my "BIOS"! After flipping and depositing it, I placed a paper tape in the reader, containing the disk driver. Then I pressed the 'Reset' button, the PC register was reset to 0 and the CPU taken out of halt. The CPU ran the paper tape driver, which loaded the paper tape and at the end of the tape ran right into the code loaded, to run the disk driver to load the OS bootup code that loaded the rest of the OS.
Also, the computer doesn't have to have a built-in clock running when the power is off, so it needs no battery for that purpose. Until the advent of PCs, most computers did not have a battery-backed clock; you had to set the time after power-on. E.g. after a fatal crash, the operator would have to restart and then set the time explicitly from his own watch.
There is a story about that from the University of Copenhagen - it must have been in the early 1970s: After a crash, the operator set the time and date, but didn't notice that he had typed the year wrong, ten years into the future. This wasn't noticed until after they had run the maintenance program deleting all files that hadn't been referenced for three months. (I guess that is when they noticed it!)
Religious freedom is the freedom to say that two plus two make five.
|
|
|
|
|
Thank you for taking the time to reply, tronderen - that's an interesting post.
|
|
|
|
|
Quote: In the old days (DOS), applications relied on the drivers that are part of the BIOS
How did that function? I don't understand much about hardware or driver programming and I'm looking to broaden my horizons. Without a driver the CPU doesn't know the 'love language' of the equipment that sits in a slot. But only the equipment producer knows how to address the piece of hardware it has produced. Is there a universal language that works for all video cards, sound boards etc.?
Back in the old days (and even today, I think - I'm not sure, I've only had laptops in recent years) a slot like PCI accepted hardware from different categories. How did that work?
|
|
|
|
|
Back in the DOS days, video cards sat at a certain address. The BIOS didn't need any special drivers. It just wrote directly to the addresses the video RAM was at.
Back then, the bus and cards couldn't negotiate addresses, ports, DMA channels, and IRQs automatically. You had to manage the separation of the hardware manually, then tell the drivers where the hardware sat in memory and/or which ports and IRQs it was configured to listen on.
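The "BIOS just wrote directly to the addresses the video RAM was at" idea can be sketched in C++. In real-mode DOS, colour text video RAM sat at segment 0xB800; each screen cell is two bytes, a character followed by an attribute byte, and a program simply wrote into that memory with no driver involved. A modern OS won't let user code touch physical address 0xB8000, so this sketch writes into an ordinary array standing in for the 80x25 buffer - the layout and arithmetic are the real thing, the buffer is a stand-in.

```cpp
#include <cstdint>
#include <cstddef>

constexpr int COLS = 80, ROWS = 25; // classic text mode dimensions

// Write one character cell: 2 bytes per cell, character then attribute
// (e.g. 0x07 = light grey on black). Under real-mode DOS, 'videoRam'
// would simply point at physical address 0xB8000.
void put_char_at(uint8_t* videoRam, int row, int col, char ch, uint8_t attr) {
    std::size_t offset = 2u * (std::size_t)(row * COLS + col);
    videoRam[offset]     = static_cast<uint8_t>(ch); // character byte
    videoRam[offset + 1] = attr;                     // attribute byte
}

void put_string_at(uint8_t* videoRam, int row, int col, const char* s, uint8_t attr) {
    for (; *s; ++s, ++col)
        put_char_at(videoRam, row, col, *s, attr);
}
```

No "universal language" was needed precisely because every card emulating the IBM text mode had to respond at that same fixed address with that same cell layout.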
|
|
|
|
|
Calin Negru wrote: How did that function? The BIOS is really nothing but a function library. In the DOS days, you could in principle call, say, the driver for outputting a character on the serial line by calling the function directly, by its BIOS address. Well, not quite - the return mechanism wouldn't match your call, but we are close. Rather than a direct function call, you used the interrupt system to call the driver function.
You may think of physical interrupts, coming in on the INT or NMI (Non-Maskable Interrupt) line, as a hardware mechanism for calling a driver function when something, like an input, arrives from a device. Hardware puts the input value into a temporary register in the interface electronics (not a CPU register), and the driver function moves the value from that register into memory. Each interrupt source (device), or group of devices, provides an ID to the interrupt system so that a different function is called for each device (group), each knowing that specific device type and how to transfer the value from the interface electronics to the CPU. The interrupt system has a function address table with as many entries as there are possible interrupt IDs, so the ID is used to index the table. This table is commonly called an 'interrupt vector'.
All computers have at least one extra location in the interrupt vector that is not selected by a hardware device ID; your software can use a specific instruction to make a function call to the BIOS, OS, Supervisor, Monitor, ... whatever you call it. Intel calls it an INT(errupt) instruction; on other machines it may be called an SVC (SuperVisory Call), MON(itor) call, or similar. On some machines, the instruction may indicate the interrupt number (i.e. the index to be used in the vector), so that different service functions have different interrupt numbers. Others have a single 'software interrupt' number and vector entry, with a single handler that reads the desired service number from a register. Many machines started out giving each service a separate ID, but the number of services outgrew the interrupt vector, so they had to switch to the second method. DOS is a mix: a number of services have their own interrupt ID, but the majority of DOS service functions use INT 21h, with a service selector in the AH register. (Other multipurpose software interrupts are INT 10h for video functions, INT 13h for low-level disk functions, INT 16h for keyboard functions and INT 17h for printer functions.)
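The vector-plus-selector dispatch described above can be modelled in a few lines of C++: the interrupt vector is just a table of function pointers indexed by interrupt number, and a multiplexed entry (in the style of INT 21h) reads a service selector (in the style of the AH register) to pick the actual routine. The interrupt numbers echo the DOS layout, but the handlers and their return values are invented for illustration.

```cpp
#include <cstdint>
#include <array>

using Handler = int (*)(uint8_t service, uint8_t arg);

// A multiplexed entry: one interrupt number, many services,
// selected by a register value -- the INT 21h / AH pattern.
int multiplexed_services(uint8_t service, uint8_t arg) {
    switch (service) {
    case 0x02: return arg;  // e.g. "output character": echo the argument back
    default:   return -1;   // unknown service: reject softly
    }
}

int video_services(uint8_t /*service*/, uint8_t /*arg*/) { return 0; }

// 256-entry vector, one slot per possible interrupt ID;
// unpopulated slots stay null.
std::array<Handler, 256> interruptVector = [] {
    std::array<Handler, 256> v{};
    v[0x10] = video_services;        // video functions
    v[0x21] = multiplexed_services;  // general services, picked by 'service'
    return v;
}();

// A software "INT n": index the vector, call through the pointer.
int software_int(uint8_t n, uint8_t service, uint8_t arg) {
    return interruptVector[n] ? interruptVector[n](service, arg) : -1;
}
```

What this model leaves out is exactly the "something extra" discussed next: a real software interrupt also switches privilege mode and memory mapping, which no ordinary function call can do.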
The primary function of a software interrupt call is that of an ordinary function call. But there is something extra: privileged instructions are enabled, memory management registers are updated to bring OS code into the address space, etc. This you cannot do with an ordinary function call. So an interrupt function does not end with a plain return, but with a specific 'return from interrupt' instruction that restores non-privileged mode, MMS registers etc. to 'user mode'.
DOS didn't have anything as fancy as privileged instructions and an MMS. So the main purpose of software interrupts was to make the application independent of, say, the location of the serial line handler. Regardless of BIOS version or vendor, to call the function for outputting a character on the console, you executed an INT 21h instruction with 2 in the AH register and the character code in the DL register. You may consider this specification similar to a high-level language interface specification: it provided detailed parameter and functional information, and the vendor provided the implementation of this interface.
Back in those days, interrupts were fast! I worked on machines designed in the mid 1970s: the first instruction of the handler routine was executing 900 ns (0.9 microseconds) after the signal occurred on the interrupt line. (For the 1970s, that was quite impressive!) Later, memory protection systems became magnitudes more complex, and getting all the flags and registers set up for an OS service has become a lot more expensive. Processors have long pipelines, and you have to (or at least should) empty them before going on to a service call. Software interrupts of today take a lot more time in terms of simple instruction execution times, compared to 50 years ago. When the 386 arrived with a really fancy call mechanism, all sorts of protections enforced, MS refused to use it in Windows - it was too slow. (They rather requested a speedup of Illegal Instruction interrupt handling, the fastest way they had discovered to enter privileged mode.) That is why Win32 programs never had access to a 4 GiB address space: with the 386 call mechanism, user code and OS could have separate 4 GiB spaces, but MS decided that '2 GiB should be enough for everybody', so that they could use a faster interrupt mechanism that made no MMS updates.
Calin Negru wrote: But only the equipment producer knows how to address the piece of hardware it has produced. Is there a universal language that works for all video cards, sound boards etc.? There are lots of hardware standards for each class of hardware. The video card makers, or USB makers, or disk makers, sit down and agree on a common way to interface with the PC: they will all use this and that set of physical lines, signal levels, control codes etc. Then the driver on the PC side may be able to handle all sorts of VGA terminals, say, or every video card vendor's interface on the PC bus, because they all use the same interface.
Over the years, such industry standards have grown from specifying the plug and voltages, and little else, to increasingly higher levels. USB and Bluetooth are primary examples where this is prominent: Very general 'abstract devices', such as a mass storage device, are defined by the interface, and the manufacturer on the device side must make his device appear as that abstract device, no matter its physical properties.
Furthermore: in the old days, we often had, for a few years, a multitude of alternatives with highly device-specific drivers before the vendors got together to clean up the mess. Nowadays, new technology (such as USB3, Bluetooth 5.0) tends to come with standards for use from the very beginning. Today's standards tend to be far more forward-looking than the old ones: e.g. they have open-ended sets of function codes, and exchange lots of configuration values for bitrates, resolutions, voltages, ... so that the standard can live long and prosper. If the other party cannot handle the most recent extensions, such as a higher resolution, it reports this, and that extension isn't used on the connection.
Almost all general peripherals of today present themselves as one of those abstract devices defined for the physical interface. You still need a driver for each of those, but there aren't that many different ones. For special purpose equipment you still may have to provide a special driver, because it provides functions not covered by any of the standard abstract devices. If it uses a standard physical device, say USB, it hopefully uses that in a standard way so that you can use a standard USB driver and only have to write the upper levels of the driver yourself.
Religious freedom is the freedom to say that two plus two make five.
|
|
|
|
|
Another excellent response. Do you think it is worth consolidating all this into an article?
|
|
|
|
|
Hello,
I use Arduino Uno to read the voltage change across a Thermistor terminals.
To read the temperature, I would use the Steinhart–Hart equation:
1/T = A + B·ln(R) + C·(ln R)^3
to convert voltage to temp. I can write this equation using C++ via the Arduino IDE, then I'll get the temperature.
My question is: how to do it without using the Arduino, I mean using only electronic components, what is the circuit design that can give me a Ln or cubic power?
Thank you
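For reference, the software side of the question above can be sketched in plain C++ (the same code compiles in the Arduino IDE). The Steinhart–Hart coefficients below are commonly quoted example values for a 10k NTC thermistor - treat them as placeholders and fit A, B, C to your own part's datasheet. The divider formula assumes the thermistor is on the low side of a fixed resistor.

```cpp
#include <cmath>

// Steinhart–Hart: 1/T = A + B*ln(R) + C*(ln R)^3, with T in kelvin.
// Placeholder coefficients for a generic 10k NTC -- fit your own.
const double A = 1.009249522e-3;
const double B = 2.378405444e-4;
const double C = 2.019202697e-7;

// Resistance (ohms) -> temperature (degrees Celsius).
double thermistor_celsius(double r_ohms) {
    double lnR  = std::log(r_ohms);
    double invT = A + B * lnR + C * lnR * lnR * lnR; // 1/T in 1/kelvin
    return 1.0 / invT - 273.15;                      // kelvin -> Celsius
}

// Voltage divider: Vout = Vcc * R_therm / (R_fixed + R_therm),
// rearranged to recover the thermistor resistance from the measured voltage.
double divider_resistance(double vout, double vcc, double r_fixed) {
    return r_fixed * vout / (vcc - vout);
}
```

On an Arduino you would feed `divider_resistance()` with the voltage derived from `analogRead()`; the analog-only version of this (log amplifiers, linearizing networks) is addressed in the reply below this is not - see the answer that follows.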
|
|
|
|
|
The short answer is to start with Log amplifier - Wikipedia[^]. You can assemble a bunch of them to do the trick, but that is really doing things the hard way. For limited temperature spans, there are simpler approximations for linearizing to a reasonable accuracy. Feed any search engine with "linearize thermistor".
Software rusts. Simon Stephenson, ca 1994. So does this signature. me, 2012
|
|
|
|
|
I’m trying to get a better understanding of how RAM memory works. I brought up this question before. This time I’m trying to find out a little bit more.
There is no optical fiber on the motherboard, hence in the scenario where you want to place 32 bits in memory, if you want to send them all at one time, you need 32 copper lines connecting the CPU socket to the memory slots. What happens if you want to send more information? Let's say you want to send four 32-bit integers. Before sending the actual data, does the CPU (the operating system) use the same 32 lines to instruct the memory slots where the integers about to be sent should be placed?
How does the memory know the address range in which it should place the four integers?
|
|
|
|
|
CPU sockets of today have a tremendous number of 'pins' (they aren't really pins nowadays, but the name sticks), typically 1200-1500. Usually, far more than 32 of these carry data to/from RAM; more typical is 128 or 256, the length of a cache line. If you want to modify anything smaller (such as a single byte or a 32-bit word), the CPU must fetch the entire cache line from RAM, modify the part of it that you want to change, and write the entire cache line back to memory.
The CPU uses another set of pins to indicate the RAM address to be read from or written to. Since the arrival of 32-bit CPUs, the CPU has rarely been built to handle as much RAM as the logical address space; the 386 did not have 32 address lines, so you could not build a 386 PC with 4 GB of memory. Nor do today's 64-bit CPUs have 64 address lines. The memory management system will map ("condense", if you like) the used memory pages spread over the entire 64-bit address space - even multiple 64-bit spaces, one for each process - down to the number of address pins required to cover the amount of physical RAM that you have got.
Note that when transfers between RAM and the CPU cache (inside the CPU chip) go in units of an entire cache line of, say, 128 bits or 16 bytes, there is no need to tell the RAM which of the 16 bytes are actually used - they are transferred anyway. So there is no need to spend address lines on the lowermost 4 bits of the byte address. The number of external pins is a significant cost factor in the production of a chip, so saving 4 bits gives an economic advantage.
In the old days, pins were even costlier, and you could see CPUs that first sent the memory address out on a set of pins during the first clock cycle. The memory circuits latched this address, for use in the next clock cycle, when the CPU transferred the data value to be written on the same pins as those used for the address. Or reading: In the first cycle, the CPU presents the address it wants to read; in the next cycle, the RAM returns the data on the combined address/data lines. There were even designs where the address was too long to be transferred in a single piece: In cycle 1, the high address were transferred, in cycle 2 the low address, and in cycle 3, data was transferred. (And in those days, you fetched/wrote a single byte at a time, and cache was rarely seen.)
This obviously put a cap on the machine speed, when you could retrieve/save another data byte no faster than one every two or three clock cycles. To win the speed race, general processors today have separate, wide address and data buses. I guess that you still can see multiplexed address/data buses in embedded processors (ask Honey about that!).
Your scenario with four 32-bit words to be saved: if they are placed at consecutive logical addresses, as if they were a 16-byte array, they might happen to fit into the same cache line. When the cache logic determines that it is necessary to write it back to RAM, one address is set up on the address lines, and a single transfer is made on the data lines. If the 16 bytes are not aligned with the cache-line borders, but span two cache lines, each of the two parts is written to memory at a different time, in two distinct operations. If the four words are located at distinct, non-contiguous virtual addresses, they are written back to RAM in 4 distinct operations: 4 addresses on the address bus, each with a different cache-line content on the data bus. Note that the entire cache line is written in each of the write operations, and could include updates to other values in the same line that hadn't yet made it to RAM.
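The read-modify-write cycle described above can be modelled in a small C++ sketch: RAM is only ever moved in whole cache lines, so to change one 32-bit word the whole line is fetched, patched, and written back - and a word straddling a line boundary costs two such round trips. The 16-byte line size matches the example in this post (real x86 lines are typically 64 bytes); the transfer counter is there only to make the cost visible.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

constexpr std::size_t LINE = 16; // cache-line size from the example above

struct Ram {
    std::vector<uint8_t> bytes;
    std::size_t lineTransfers = 0; // counts whole-line reads + writes

    explicit Ram(std::size_t size) : bytes(size, 0) {}

    void readLine(std::size_t lineAddr, uint8_t* out) {
        std::memcpy(out, &bytes[lineAddr], LINE);
        ++lineTransfers;
    }
    void writeLine(std::size_t lineAddr, const uint8_t* in) {
        std::memcpy(&bytes[lineAddr], in, LINE);
        ++lineTransfers;
    }
};

// Store a 32-bit word (little-endian): touches one line, or two
// if the word straddles a cache-line boundary.
void store32(Ram& ram, std::size_t addr, uint32_t value) {
    std::size_t first = addr / LINE, last = (addr + 3) / LINE;
    for (std::size_t ln = first; ln <= last; ++ln) {
        uint8_t line[LINE];
        ram.readLine(ln * LINE, line);        // fetch the whole line
        for (std::size_t i = 0; i < 4; ++i) { // patch only the bytes in this line
            std::size_t a = addr + i;
            if (a / LINE == ln)
                line[a % LINE] = static_cast<uint8_t>(value >> (8 * i));
        }
        ram.writeLine(ln * LINE, line);       // write the whole line back
    }
}
```

An aligned store costs 2 line transfers (one read, one write); a straddling store costs 4 - which is exactly why four words at non-contiguous addresses are so much more expensive than four words packed into one line.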
Religious freedom is the freedom to say that two plus two make five.
|
|
|
|
|
I get the picture, thank you
|
|
|
|
|
I have been graciously given PCB with LED's to monitor SOME serial data. It has LED for DRX and DTX.
My current serial data code only sends, and I have no connection to any "remote serial device", but I can see both DRX and DTX flashing. Good.
BUT
why is DRX flashing?
Is it because my "serial data communication" is set for "local loop back"?
How do I verify my "modem" settings, AKA "AT" commands?
Thanks
|
|
|
|
|
jana_hus wrote: How do I verify my "modem" settings" AKA "AT" commands ?
Specific question gets a list of sites that can help you.
modem at commands - Google Search[^].
|
|
|
|
|
FROM https://e-junction.co.jp/share/Cat-1_AT_Commands_Manual_rev1.1.pdf:
Quote: 2.29. Controls the setting of eDRX parameters +CEDRXS
Syntax:
- Set command: +CEDRXS=[<mode>[,<act-type>[,<requested_edrx_value>]]] - possible response: +CME ERROR: <err>
- Read command: +CEDRXS? - possible response: +CEDRXS: <act-type>,<requested_edrx_value>[<cr><lf>+CEDRXS: <act-type>,<requested_edrx_value>[...]]
- Test command: +CEDRXS=? - possible response: +CEDRXS: (list of supported <mode>s),(list of supported <act-type>s),(list of supported <requested_edrx_value>s)
Description: The set command controls the setting of the UE's eDRX parameters. The command controls whether
|
|
|
|
|
thanks for the reply.
I probably did not formulate my question correctly. I was not asking about AT commands.
I was trying to verify if some serial port parameters are being used to put the USB port itself into "loopback mode". Actually I am not sure if Linux serial port can use AT commands at all.
I guess I need to look if Linux has "default modem" anything.
|
|
|
|
|
Hi Jana,
Can you try these
DRX Flashing (Local Loopback)
DRX flashing could be due to local loopback. Check and disable it with:
stty -F /dev/ttyUSB0 -echo
Use minicom to send AT commands:
sudo apt-get install minicom
Open your serial port:
sudo minicom -D /dev/ttyUSB0
Type AT to check response (OK if working).
I hope this will work for you and resolve your issue.
|
|
|
|
|
So, over the last 20+ years, I think I'm on my 4th laptop. I buy them for development, and I specifically look for expandability and reliability. I rarely toss them (I'm working with my therapist on this). I used to buy Dell, but they got squirrely with their consumer brands (xps1530 was excellent except for the motherboard graphics chip set). The last Dell I bought was a precision M4700 and it's a beast. I've moved on to Eluktronics - sort of a custom maker, but damn good hardware. Just watch out when they solder ram to the motherboard, but that's on me. Anyway, back to the 4700...
So, not the thinnest or lightest, but excellent display, great keyboard; add SATA SSDs and it will just go. But I had this quirky issue where it would blue screen at weird random times. This had been happening for over 5 years. Cleaning it up, it needed a new battery - $40 later, done. No more complaints about charging, but boom, blue screen. On reboot, it kept fussing about losing its BIOS settings. Now, when any computer says this, the battery backup on the motherboard is dead. I had never thought of this. These are $2 from your grocery store, $5 if dealing with Dell's stupid stuff.
The laptop has been sitting there running for 2 weeks, all issues resolved.
When in doubt, go for the simple solution.
Charlie Gilley
“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
Has never been more appropriate.
|
|
|
|
|
Okay, I would like ideas from people that are smarter than I am about my next great USB plan.
THE PROBLEM...
I find that I keep having these mysterious behaviors, which cost me over an hour to locate, and it always turns out that the USB hub has just plain and simply worn out physically.
MY NEXT SCHEME...
- Build my own computer
- Install a specific USB Expansion Controller Card Adapter (The more ports, the better)
- When I buy that card adapter, I will buy two or three (identical) replacements for the future when its jacks wear out
- Use the same cheap hubs that wear out after a year or two, and plug them into a specific jack on the controller card.
Put this all together, and I'm wondering if this will provide a workable solution until the time that USB goes away and is replaced by the next Disco Baboon Technology of the future.
|
|
|
|
|