|
Calin Negru wrote: a high power current acts as a switch that can turn on and off the circulation of a lower power current
Rather the other way around: a low power current can switch a higher power current on and off. Or, in analog transistors: turn the high power current up or down proportionally to the controlling low power current. So the purpose of the transistor was to amplify the signal.
In digital circuits, you really do not need this amplification. The output from the first transistor need only be strong enough to turn the second transistor on (i.e. open it to let a signal through) or off.
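If you want to see that switching view as something concrete, here is a toy sketch in C (idealized on/off switches, nothing like real electronics) of how two such transistors in series behave as a NAND gate - and from NAND gates you can build all the other Boolean logic:

    #include <stdio.h>

    /* Idealized digital transistor pair: current reaches ground only
     * when both 'gates' are on, pulling the output low - which is
     * exactly a NAND gate. */
    static int nand(int a, int b)
    {
        return !(a && b);
    }

    int main(void)
    {
        for (int a = 0; a <= 1; a++)
            for (int b = 0; b <= 1; b++)
                printf("NAND(%d,%d) = %d\n", a, b, nand(a, b));
        return 0;
    }

No amplification anywhere: each output is just strong enough to drive the next gate's inputs.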
Under special circumstances, where the output of the first transistor is distributed to a whole row of second transistors, e.g. located on the row of plugin cards on a mainboard bus, the output signal must be strong enough to feed every one of them. In the days of S100, ISA and MCA buses, you could plug 4-5-6-7 cards into a bus, side by side - the bus was like an AC power strip, delivering signals to a lot of recipients. You don't see much of that any more, partly because lots of what once required a large extension card is now provided on the CPU (or supporting 'chip set'), and partly because new bus standards have reduced the maximum 'fan out', to reduce the requirements for the bus electronics. Actually, lots of what we today refer to as 'bus' interconnects are really one-to-one signal lines.
(For the pedantic ones: It still isn't wrong to call it a 'bus': (Omni)bus means no more than 'For everyone'. In the days when the COM and LPT ports were used for 'everything', they were '(omni)busses', linguistically speaking.)
But bus fanout and that sort of thing are special cases. Within a CPU, the current delivered from the output of a transistor is always enough to drive the input of the following transistor(s), even if there might be two or three of them receiving the same signal.
|
|
|
|
|
trønderen wrote: Or, in analog transistors:
I'm one of the pedantic ones.
A transistor is a transistor. There's no analog transistor and no digital transistor, it's a transistor.
The difficult we do right away...
...the impossible takes slightly longer.
|
|
|
|
|
OK, you are certainly right about that.
Also: Turning an ordinary light switch 'off' doesn't create an absolute insulation between the poles of the switch. The air gap just increases the resistance. A sufficiently high voltage may be able to cross that air gap.
Certainly: That kind of voltage would also be able to do wonders to your PC and other semiconductor equipment.
|
|
|
|
|
trønderen wrote: That kind of voltage would also be able to do wonders to your PC
Not to mention flammable materials as well.
|
|
|
|
|
Thank you for your answer
> Rather the other way around
In that case, doesn't the output power of the first transistor need to be reduced? To create Boolean logic you need a low power signal to act as a switch on the second transistor.
modified 14-Dec-23 7:38am.
|
|
|
|
|
Digital transistors are not built for amplification. Essentially, the signal being controlled is at the same level as the controlling one. The control consists of either letting the controlled signal through or stopping it. (Sort of like the main valve that turns the water supply to your house on/off: It is either open or closed, not intended to be in any intermediate position.)
Like a water flow: If you open a valve completely, you won't have an infinite water flow, only as much as the source will supply. Same with transistors: A fully open transistor lets through whatever wants to get through, but in a digital circuit, that is not much more than the controlling signal.
Both the controlling and the controlled signal are low power. In a modern CPU, such as an x64 CPU, that is really low power! I willingly admit that I do not know how low, but would be curious to know! Even if I could have gotten access to the transistor (which is completely impossible inside the CPU), my multimeter would not be able to measure it. Not by several orders of magnitude.
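I can at least sketch the arithmetic with assumed ballpark figures - the capacitance and voltage below are my guesses for illustration, not measured values:

    #include <stdio.h>

    int main(void)
    {
        /* Assumed ballpark values for a modern logic process --
         * illustrative only, not datasheet numbers. */
        double gate_capacitance = 1e-15; /* ~1 femtofarad */
        double supply_voltage   = 0.8;   /* volts */

        /* Energy to charge the gate: E = 1/2 * C * V^2 */
        double e = 0.5 * gate_capacitance
                       * supply_voltage * supply_voltage;

        printf("~%.3g joules (%.0f attojoules) per switch\n",
               e, e * 1e18);
        return 0;
    }

With those guesses it comes out in the hundreds of attojoules per switching event - far below anything a multimeter could see.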
|
|
|
|
|
The simplest transistor to understand is the MOSFET used in most computer logic circuits.
The name describes the construction and how it works 😋
Metal-Oxide-Semiconductor is a thin aluminium layer on a thinner glass layer on a doped silicon surface. The doping impurities change the silicon's behaviour.
The FET part of the name is Field Effect Transistor. When a voltage is applied to the thin metal layer (gate) the charge acts across the insulating glass layer to pull carriers to the surface of the silicon, making a greatly more conductive channel for current at the surface.
A very small amount of power to charge the gate can control much larger currents in the underlying semiconductor.
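If a numeric illustration helps, the classic textbook 'square law' model of a MOSFET in saturation shows how the gate voltage controls the channel current. It is a simplification (real short-channel devices deviate from it), and the parameter values below are made up for the example:

    #include <stdio.h>

    /* First-order square-law model of an n-channel MOSFET in
     * saturation: Id = 1/2 * k * (W/L) * (Vgs - Vth)^2,
     * valid only above the threshold voltage Vth. */
    static double drain_current(double k, double w_over_l,
                                double vgs, double vth)
    {
        if (vgs <= vth)
            return 0.0;             /* no channel: transistor is off */
        double vov = vgs - vth;     /* overdrive voltage */
        return 0.5 * k * w_over_l * vov * vov;
    }

    int main(void)
    {
        double k = 200e-6;    /* process transconductance, A/V^2 */
        double w_over_l = 10; /* channel width/length ratio */
        double vth = 0.5;     /* threshold voltage, V */

        for (int i = 0; i <= 6; i++) {
            double vgs = 0.2 * i;   /* gate-source voltage, V */
            printf("Vgs = %.1f V -> Id = %.3g A\n",
                   vgs, drain_current(k, w_over_l, vgs, vth));
        }
        return 0;
    }

Below the threshold, nothing; above it, the current grows quickly with the gate voltage - that is the field effect in numbers.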
|
|
|
|
|
You can also make a transistor from a crystal and two needles.
I guess that even software people have seen the classical transistor symbol used in circuit diagrams: A circle enclosing two slanted lines (one with an arrowhead), one from each side onto a bar, and a third line down from the bar; that is the control signal, steering how much current it lets through from one needle to the other. The symbol is a stylized drawing of the crystal and the two needles. (For FETs the symbol is slightly modified; the lines are not slanted.)
I have been with semiconductors for so long that I have been thinking of crystal transistors as something they used two generations ago. So I was surprised to discover that you can still buy them, in a lot of varieties; they are often preferred in certain high frequency radio applications. These are individual transistor components, usually in a sealed glass capsule. You wouldn't build a computer from them.
|
|
|
|
|
|
I've been grappling with a frustrating issue lately and thought I'd seek some advice from this knowledgeable community. The problem at hand is that I'm unable to write to my NTFS-formatted external hard drive when it's connected to my Mac.
Here's a bit of background: I've been using this NTFS-formatted external hard drive primarily on a Windows PC. However, now that I'm a recent Windows-to-macOS switcher and working on my Mac, I find myself unable to add or edit files on this drive. I need to collaborate on some documents, and it becomes a bit of a headache.
After some research, I'm aware that macOS doesn't offer native write support for NTFS drives. Formatting to exFAT is an option, but it means moving all my data somewhere else temporarily, which led me to consider third-party solutions. One NTFS for Mac solution that's come up is iBoysoft NTFS for Mac. It claims to enable full read-write access to NTFS drives on Mac, but I wanted to gather some insights and experiences from the community here.
Has anyone been through this before? If so, how did you resolve it? If you've tried iBoysoft NTFS for Mac or have alternative suggestions, please do share your thoughts and experiences. I'm eager to learn and resolve this problem.
Thanks a bunch!
|
|
|
|
|
No idea.
But I will note that I had to read your entire post to figure out your real question. And read it twice just to be sure. Certainly not what I expected from the subject line.
What you actually are asking is ...
'Have you tried iBoysoft NTFS? If so did you like it? Would you suggest any alternative?'
|
|
|
|
|
Googling ntfs for mac produces lots of hits, including this one: 6 Best NTFS for Mac Software in 2023 [Free and Paid]
All I can recommend is that you take a look at some of the reviews and choose what you think will be best for your use case.
"A little song, a little dance, a little seltzer down your pants"
Chuckles the clown
|
|
|
|
|
A memory PCB that comes mounted in a motherboard memory slot has only about 100 pins. 4 kilobytes of memory is a thousand ints on a 32-bit machine. How do you read or write a thousand ints through a bottleneck of 100 pins? Thank you
modified 21-Oct-23 10:20am.
|
|
|
|
|
One by one. Or, I believe, two by two for the current DDR generations (64-bit data bus).
Actually, current DDR memory generations have up to 288 pins. The CPU presents an address on one set of pins (on some memory types in two pieces, first the high order bits, then the low order bits), fires a 'read' signal, and the selected data bits come flowing out on the data pins. Or the CPU presents both the address and the data and fires a 'write' signal that stores the data the CPU presented into the addressed location.
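If it is easier to see as code, here is a toy model of that two-step addressing (the function names and sizes are invented; real DDR signalling and timing are far more involved):

    #include <stdio.h>
    #include <stdint.h>

    /* Toy model of a multiplexed DRAM address: the row bits and the
     * column bits are presented one after the other on the *same*
     * address pins, so 8 pins can select among 16 x 16 = 256 cells. */
    enum { ROWS = 16, COLS = 16 };
    static uint32_t cells[ROWS][COLS];
    static uint32_t latched_row;

    static void present_row(uint32_t row)   /* like a RAS strobe */
    {
        latched_row = row;
    }

    static uint32_t read_cell(uint32_t col) /* like CAS + read */
    {
        return cells[latched_row][col];
    }

    int main(void)
    {
        cells[3][7] = 0xDEADBEEF;

        uint32_t addr = (3u << 4) | 7u;   /* full 8-bit address */
        present_row(addr >> 4);           /* high bits first */
        printf("data = 0x%X\n",
               (unsigned)read_cell(addr & 0xFu)); /* then low bits */
        return 0;
    }

A thousand ints are read the same way: a thousand (or, with a wider data bus, a few hundred) such cycles, one after the other.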
|
|
|
|
|
Why is that so hard to understand? Think of it in terms of reading/writing one word at a time. In your case, 32 bits. That easily fits in a hundred pins, though I don't remember any memory modules with that pin count.
There were 72-pin SIMMs that handled 32-bit transfers, and that jumped to 168-pin DIMMs with a 64-bit transfer.
Besides power, ground, and control pins, you only need 1 pin per data bit and 1 pin per address line, so it's really not hard to understand.
Anandtech RAM Guide[^]
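A quick back-of-envelope sketch of that pin arithmetic (control and power pin counts vary between module types, so treat the totals as illustrative):

    #include <stdio.h>

    int main(void)
    {
        /* A simple, non-multiplexed interface: one pin per data bit,
         * one pin per address line. Illustrative numbers only. */
        unsigned long locations = 1UL << 20; /* 1 Mi addressable words */
        int data_pins = 32;

        int addr_pins = 0;
        while ((1UL << addr_pins) < locations)
            addr_pins++;                     /* ceil(log2(locations)) */

        printf("%d address + %d data = %d signal pins\n",
               addr_pins, data_pins, addr_pins + data_pins);
        return 0;
    }

Even a megaword of 32-bit memory needs only 52 signal pins before power, ground and control are added.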
|
|
|
|
|
Thank you, I understand. On visual inspection the PCB itself appears pretty simple; my guess is that the simplicity is deceiving.
|
|
|
|
|
A CPU has a lot of pins. What are they meant for? Do they have a general purpose usage, or does the processor have specialized groups of pins where each group talks to a certain type of resource on the motherboard (video adapter, sound board, etc.)?
|
|
|
|
|
Go to the chip manufacturer's web site and get the datasheet.
|
|
|
|
|
|
A lot of those pins are power and ground. In simple terms, pins go to memory and buses, but not to individual boards, like video and sound. Those peripherals are connected to an expansion bus, like PCI-E, and the chip talks to devices through the bus.
|
|
|
|
|
>but not to individual boards
I know it's not a 'dive in at the CPU socket, pop up in the sound or video card slot' type of circuit; there is some mediation between the two.
Thanks for your answer.
modified 7-Oct-23 12:29pm.
|
|
|
|
|
The driver problem. When a program is run, its machine code doesn't have the data required to run the sound card, the motherboard, or another piece of hardware, because all computers contain hardware that is different; it comes from different vendors.
Companies provide drivers for their equipment. Is a driver the place from which an application's machine code gets, at run time, the resource addresses required to get the sound board, motherboard etc. working? Is it some kind of compile at runtime?
modified 7-Oct-23 14:17pm.
|
|
|
|
|
|
|
Calin Negru wrote: Do they have a general purpose usage or does the processor have specialized groups of pins where each group talks to a certain type of resource
It varies a lot from chip to chip, or - for PC type chips - from socket to socket.
Chips meant for embedded use, IoT-style, may have general I/O pins that can be configured by software to behave according to this or that signal standard, among maybe 2-4 different ones (but not too different from each other). This rarely if ever means that the CPU switches among different I/O standards dynamically; at boot time, it configures the chip to the protocol it uses for communicating with other chips, and then it stays that way. The reason for having this configurability is that the CPU manufacturer can offer a single chip model to users of several different I/O standards. Making two, three, four different chips, one per I/O standard, is far more expensive. But note: I am not talking about x86 or x64 chips now - more like 8051 or ARM M0 chips.
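For illustration, boot-time pin configuration on such a chip typically looks something like the sketch below. The register address, name and bit layout are all invented; a real chip's datasheet defines its own:

    #include <stdint.h>

    /* Hypothetical memory-mapped pin-mux register -- address and bit
     * layout invented for illustration. Written once at boot, then
     * left alone. */
    #define PINMUX_CTRL (*(volatile uint32_t *)0x40001000u)

    #define PIN7_FUNC_MASK  (3u << 14) /* 2-bit function field, pin 7 */
    #define PIN7_AS_UART_TX (2u << 14) /* function 2 = UART transmit */

    void board_init(void)
    {
        uint32_t v = PINMUX_CTRL;
        v &= ~PIN7_FUNC_MASK;   /* clear pin 7's function selection */
        v |= PIN7_AS_UART_TX;   /* route pin 7 to the UART */
        PINMUX_CTRL = v;
    }

After board_init() runs during startup, the pin behaves as a UART line for as long as the chip is powered - exactly the 'configure once at boot' pattern described above.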
Calin Negru wrote: When a program is run, its machine code doesn't have the data required to run the sound card, the motherboard, or another piece of hardware, because all computers contain hardware that is different; it comes from different vendors.
As Dave Kreskowiak pointed out: I/O on 'PC class' machines (general, x86/x64 or similar) is generally done using some standardized hardware at the physical level, e.g. USB, PCIe, ... (or earlier: FireWire, COM-port, VGA, ...). The external device is responsible for adapting to one such standard at the physical signal level.
The PC uses low-level, usually OS provided, driver software for transferring bytes, or frames of multiple bytes, from the CPU out on the interface. This is independent of whatever device is at the other end. The device must be able to receive bytes or frames, but this is given by the standard (e.g. by USB) and is not device dependent. Of course this goes both ways, for both input and output.
The driver for the physical interface offers an API that is independent of the actual electronics, and usually standardized (although there may be a couple of alternatives) for that class of physical interface. Say, the low-level USB driver offers a standard API for sending a USB frame to a given USB address.
The byte stream, or contents of the frames, contains commands and data that may differ from device to device. This is where you may need a device specific driver. Say, your scanner software uses a standard format for commands and data called TWAIN. Your scanner vendor provides a higher-level driver that offers a TWAIN API, with scanner-related commands, and creates a message struct, which it asks the USB low-level driver to pass to the scanner. If you replace your scanner, the application software may still use TWAIN calls, but that scanner vendor provides a different driver, creating different scanner commands, as required by his scanner model. But both the old and the new driver will send their (different) scanner command structs to the same low-level USB driver API. Note that this higher-level driver doesn't relate to the electronics; it leaves that to the low-level USB driver. (Or in the old days: SCSI or FireWire drivers.)
In practice, there will be more levels of drivers. A scanner specific TWAIN driver doesn't go all the way down to the USB low level driver in one sweep. E.g. a command struct, or even more so the returned data, may be far too large for a physical USB frame. So the TWAIN driver creates the struct of arbitrary length and hands it over to a medium level driver, which cuts the struct into suitably small pieces and labels the pieces appropriately, e.g. with a sequence number, for sending one by one to the USB low level driver. For incoming data, this driver level will receive a lot of small frames from the low level USB driver, collect them in the order given by their sequence numbers, and when all pieces have arrived and been glued together into one large struct, it sends it to the higher TWAIN level (or whatever resides above it).
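As a toy illustration of that middle layer (frame size and struct layout invented; real USB packet sizes depend on the bus generation and endpoint type):

    #include <stdio.h>
    #include <string.h>

    /* Cut a large buffer into fixed-size frames, each tagged with a
     * sequence number, the way a mid-level driver feeds a low-level
     * one. The receiving side reassembles by sequence number. */
    enum { PAYLOAD = 8 };

    struct frame {
        unsigned seq;
        unsigned len;
        unsigned char data[PAYLOAD];
    };

    static void send_frame(const struct frame *f)
    {
        /* stand-in for the real low-level driver call */
        printf("frame %u: %u bytes\n", f->seq, f->len);
    }

    static void send_message(const unsigned char *buf, size_t n)
    {
        struct frame f;
        for (unsigned seq = 0; n > 0; seq++) {
            f.seq = seq;
            f.len = n < PAYLOAD ? (unsigned)n : PAYLOAD;
            memcpy(f.data, buf, f.len);
            send_frame(&f);
            buf += f.len;
            n -= f.len;
        }
    }

    int main(void)
    {
        unsigned char msg[20] = "scanner command...";
        send_message(msg, sizeof msg);  /* 20 bytes -> 8 + 8 + 4 */
        return 0;
    }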
You may see quite a few driver levels, one above the other. Somewhere I read that the Windows driver model defines 32 levels(!). That doesn't mean that you have 32 driver software packages, only that if your protocol needs to do so-and-so, you should do it after this but before that. The so-and-so's are ordered into 32 groups. One driver package may take care of six consecutive groups, another of the next 8. Often, there will be nothing to do in a lot of the groups.
The lowest driver level(s) is common to all devices. Maybe the next few as well; e.g. splitting a large message into USB frames can be done the same way for all kinds of devices. The higher you climb the stack of driver levels, the more likely it is that from that point and up, you have device specific drivers - until you come close to the application, e.g. the TWAIN API that is common to all (or a lot of) scanners. Anything above that is not device dependent.
For some device classes, there are standards going all the way up from physical drivers to application level. E.g. for mass storage devices, there are USB standards covering magnetic disks, memory sticks, external 'passport' flash disks etc. The device will, at its end of the cable, implement all the relevant driver levels, and do the adaptation to its own technology on top of that. The application program on the PC uses the API to the uppermost driver level, and everything below it, with no concern about the technology at the other end. This of course requires the device to have some sort of CPU to decipher incoming messages and to build a reply message, but today even safety pins come with built-in CPUs (maybe not this year's model, but I am sure that 2024 safety pins will have CPUs).
Calin Negru wrote: is it some kind of compile at runtime?
A device vendor providing a driver will usually give you a binary, ready-to-run driver. So he must provide different drivers for CPUs with different instruction sets (e.g. x86, x64, ARM 32 or 64 bits). Usually, a driver is not completely self contained but uses OS services, at least for installation and activation, so there must be different driver versions depending on the OS. They will all be different binary versions, and the installer picks one of them. I.e. no runtime compilation.
Finally: You will rarely if ever see a driver that takes you all the way from the application program down to the physical interface. It will make the assumption that 'someone' provides lower level drivers according to this or that standard. And it will assume that 'If I provide an API according to this or that standard to drivers at higher levels than myself, they will be able to use it'. I have seen quite a few device drivers providing the truly device specific middle/upper layers, but the installer will check if the lower and higher level drivers exist in the system. If they do, they are used. If not, the driver package has got its own implementations, and they are installed along with the main device driver.
Dave Kreskowiak wrote: Drivers are part of the O/S, not the applications.
At least for the most part, you are certainly right. It can be argued that, say, when I print an MS-Word document to a Postscript printer, the PS generating function is functionally identical to a top level driver for a PS device. A PDF print routine is a top level driver for a PDF device. Generating the PS or PDF protocol elements is usually done at the application level (it may be done by a library routine, yet at application level).
|
|
|
|
|