jschell wrote: The form suggested by the other response seems much more maintainable.
After rereading the thread (it has been a minute), the only proposal I am not already doing, or that wouldn't make sense to do because it moves compile-time actions to run time, was to keep the initial preprocessor macros. But those original preprocessor macros were never presented, so I'm confused as to how you could call them more maintainable having never seen them.
Furthermore, it is not feasible to limit said preprocessor macros to a single file; doing so would increase maintenance, since several processors, each with its own instructions, are involved, each in its own header. Combining them would create a monolithic header that included every single build target. No.
Finally, unit testing this is not feasible. I don't have the money to set up the hardware array necessary to unit test this across all build targets, nor the toolchain infrastructure necessary to make that doable. I would probably have to move away from PlatformIO to do it anyway, and the effort would snowball once I'm stuck with CMake builds using LLVM with gcc in funky ways. And even if I had the money, the time, and the software infrastructure to make that work, I don't have the physical space to set it all up.
Real programmers use butterflies
|
honey the codewitch wrote: so I'm confused as to how you could say it was more maintainable having never seen it.
A specific code sample was presented in a comment. Then you said:
"The reason that I wouldn't do what you're doing is I am a stickler about namespace polution."
Your statement strongly suggested that you were responding to that code sample.
I did, of course, see the code sample in the posted response, and I then responded to your comment about it.
honey the codewitch wrote: Furthermore, it is not feasible to limit said preprocessor macros to a single file as doing so would increase maintenance since several processors with their own instructions are presented, each in its own header. To combine them would create a monolithic header that included every single build target. No.
No idea what you are talking about.
If you have one and exactly one C or C++ file (a .c or .cpp file), then you can define macros at the top of that file. The scope of those macros, in both languages, is limited exclusively to that file. That is how those languages work.
If you have several files that use the same code, then you can put the macro(s), and only those macros, in an h file, and then use that h file only in those code files. That means, specifically, that you do not put it in some other h file for convenience.
Now if you have the code snippet (not the macros) in an h file, then I suggest moving it out of the h file because it should not be in there in the first place.
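To sketch that layout (the file and macro names here are illustrative, not taken from the thread):

// pins_esp32.h - contains ONLY the pin macros, nothing else
#define PIN_D0 4
#define PIN_D1 5

// bus_reader.cpp - the only file that includes pins_esp32.h
#include "pins_esp32.h"
#include <stdint.h>

static uint8_t read_bit0(uint32_t pins_l) {
    return (pins_l >> PIN_D0) & 1u; // PIN_D0 is not visible to any other .cpp file
}

Defining the macros at the top of the .cpp itself gives the same effect without the extra header.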
honey the codewitch wrote: Finally, unit testing this is not feasible, as I don't have the money to set up the hardware array necessary to unit test this across all build targets
Huh?
A "unit test" in the context my statement would never be tested in that fashion. You are testing the macro not the system.
|
Presently the macros are in about 6 h files.
I did not write it that way, and I'm not rewriting six files of macros.
I am not unit testing all of this garbage, because it's impossible: it's all timing-sensitive, and it's different for each piece of hardware. No.
Real programmers use butterflies
|
honey the codewitch wrote: I did not write it that way, and I'm not rewriting six files of macros.
As I already said in my previous post... your statement strongly suggested that you were responding to the code sample.
To be very clear - that would be the code sample that was posted in this thread and not your code.
That code sample does not require "six files of macros".
|
honey the codewitch wrote: the *first* time I do it in the code. I had heard of weird places to do it, but in the code....
M.D.V.
If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
|
Yep, you are right, I would not want to see those preprocessor macros; that would fry my brain.
Could you put pins_l and pins_h into a single uint64_t, or is that not supported?
That would make the processing easier.
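For example, assuming the pin numbers run 0-63 (pins_l, pins_h, and pin_d0 as above; the rest is illustrative):

// Combine the two 32-bit GPIO snapshots so every pin uses the same
// shift-and-mask, with no high/low branch.
uint64_t pins = ((uint64_t)pins_h << 32) | pins_l;
int b = (int)((pins >> pin_d0) & 1u);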
Failing that, you could simplify it a little:
if(pin_d0 > 31) {
    b = (pins_h >> (pin_d0 & 31)) & 1;
} else if(pin_d0 > -1) {
    b = (pins_l >> pin_d0) & 1;
} else {
    b = 0;
}
I am sure there are other optimisations that could be done, but I am done for the day. Good luck!
|
It would require more shifts, because the routines that fill it return uint32_t.
Also, it's supported, but the proc is natively 32-bit and I don't want any hidden overhead from uint64_t.
Finally, in some of the code paths I am only using pins_l or pins_h; only in one fork am I using both.
Real programmers use butterflies
|
I realise you're not asking for suggestions, but I would certainly be minded to simplify this, particularly if it is for general-purpose use. If you're in a situation where this is for an embedded platform with a fixed hardware configuration, you should gain both better readability and performance if you can spare a little RAM. Something like this:
#include <stdint.h>

const int NUM_BUS_BITS = 8;
const int NUM_GPIO_PINS = 40;
static uint32_t sGPIOLow, sGPIOHigh;
static struct
{
    uint32_t* pSource;
    unsigned shift;
} sDataMap[NUM_BUS_BITS];
static void mapInputPin(unsigned dataBit, unsigned gpioPin)
{
    if (dataBit < NUM_BUS_BITS && gpioPin < NUM_GPIO_PINS)
    {
        sDataMap[dataBit].pSource = gpioPin >= 32 ? &sGPIOHigh : &sGPIOLow;
        sDataMap[dataBit].shift = gpioPin & 31;
    }
}
static uint8_t readInput(void)
{
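    // Repeated low-register reads kept from the original code; the extra
    // reads are timing-related (see the discussion below).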
    sGPIOLow = gpio_input_get();
    sGPIOLow = gpio_input_get();
    sGPIOLow = gpio_input_get();
    sGPIOHigh = gpio_input_get_high();

    uint8_t dataRead = 0;
    dataRead |= (((*sDataMap[0].pSource) >> sDataMap[0].shift) & 1) << 0;
    dataRead |= (((*sDataMap[1].pSource) >> sDataMap[1].shift) & 1) << 1;
    dataRead |= (((*sDataMap[2].pSource) >> sDataMap[2].shift) & 1) << 2;
    dataRead |= (((*sDataMap[3].pSource) >> sDataMap[3].shift) & 1) << 3;
    dataRead |= (((*sDataMap[4].pSource) >> sDataMap[4].shift) & 1) << 4;
    dataRead |= (((*sDataMap[5].pSource) >> sDataMap[5].shift) & 1) << 5;
    dataRead |= (((*sDataMap[6].pSource) >> sDataMap[6].shift) & 1) << 6;
    dataRead |= (((*sDataMap[7].pSource) >> sDataMap[7].shift) & 1) << 7;
    return dataRead;
}
int main(void)
{
    mapInputPin(0, 4);
    mapInputPin(1, 5);
    mapInputPin(2, 6);
    mapInputPin(3, 7);
    mapInputPin(4, 36);
    mapInputPin(5, 37);
    mapInputPin(6, 38);
    mapInputPin(7, 39);

    uint8_t readData = readInput();
    return readData;
}
The readInput() function is basically what you have above but doesn't require the conditionals, since you've defined ahead of time where each bit should be read from and how it should be shifted. It reads both the low and high GPIO registers; I doubt that will be noticeable, but it could be simplified prior to the decoding if absolutely necessary.
The multiple reads from the GPIO bus are a bit of a red flag, however. I am not familiar with the platform (I'm guessing ESP32), but that would usually suggest to me that a hardware clock hasn't been set up correctly or that your timing is marginal. If it's the latter, I'd suggest you need a more robust delay mechanism, as you may be lucky on this board but it'll be intermittent on others.
|
Those conditionals aren't executed at runtime. They are constexpr, as are most of the shifts, and the pin numbers themselves.
The reason for the multiple reads is timing. There is no hardware clock involved other than the CPU clock itself, as the ESP32 does not have a hardware parallel 8-bit interface. It's all bit banging.
The other ways I could have done it would have required dropping to asm to insert the appropriate amount of delay, because the Arduino and IDF framework delays are not granular enough; I need to cycle count. I'm not putting asm in my code, because ESP32s don't even use the same CPU across all iterations of them.
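To sketch the compile-time folding I mean (the pin number and names are illustrative, and this assumes C++17's if constexpr; my folding works with plain conditionals too):

#include <stdint.h>

constexpr unsigned pin_d4 = 36; // illustrative pin number

// With the pin known at compile time, the branch is resolved by the
// compiler and only one shift-and-mask survives in the generated code.
static inline uint8_t read_d4(uint32_t pins_l, uint32_t pins_h) {
    if constexpr (pin_d4 > 31) {
        return (uint8_t)((pins_h >> (pin_d4 & 31)) & 1u);
    } else {
        return (uint8_t)((pins_l >> pin_d4) & 1u);
    }
}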
Real programmers use butterflies
|
OK - it wasn't immediately clear that the bit mapping was constant at compile time. Personally, I'd still be inclined to do something like this, but I'd want to confirm it was optimising as expected (which I'm sure you've done with the existing code):
const unsigned pin_d0 = 4;
const unsigned pin_d1 = 5;
const unsigned pin_d2 = 6;
const unsigned pin_d3 = 7;
const unsigned pin_d4 = 36;
const unsigned pin_d5 = 37;
const unsigned pin_d6 = 38;
const unsigned pin_d7 = 39;
static uint8_t readInput(void)
{
    uint32_t GPIOLow = gpio_input_get();
    uint32_t GPIOHigh = gpio_input_get_high();

    uint8_t dataRead = 0;
    dataRead |= (((pin_d0 > 31 ? GPIOHigh : GPIOLow) >> (pin_d0 & 31)) & 1) << 0;
    dataRead |= (((pin_d1 > 31 ? GPIOHigh : GPIOLow) >> (pin_d1 & 31)) & 1) << 1;
    dataRead |= (((pin_d2 > 31 ? GPIOHigh : GPIOLow) >> (pin_d2 & 31)) & 1) << 2;
    dataRead |= (((pin_d3 > 31 ? GPIOHigh : GPIOLow) >> (pin_d3 & 31)) & 1) << 3;
    dataRead |= (((pin_d4 > 31 ? GPIOHigh : GPIOLow) >> (pin_d4 & 31)) & 1) << 4;
    dataRead |= (((pin_d5 > 31 ? GPIOHigh : GPIOLow) >> (pin_d5 & 31)) & 1) << 5;
    dataRead |= (((pin_d6 > 31 ? GPIOHigh : GPIOLow) >> (pin_d6 & 31)) & 1) << 6;
    dataRead |= (((pin_d7 > 31 ? GPIOHigh : GPIOLow) >> (pin_d7 & 31)) & 1) << 7;
    return dataRead;
}
In practice, I would almost certainly encapsulate the individual lines as a macro or an inline function, but it's probably clear enough what it's doing. Reads from both ports are kept in there; if a single hardware register read is really killing you, then it could be eliminated.
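For example, that encapsulation might look like this (the helper name is illustrative):

// Extract one GPIO pin's state and place it at the given bit position.
static inline uint8_t pinBit(uint32_t lo, uint32_t hi, unsigned pin, unsigned bit)
{
    return (uint8_t)((((pin > 31 ? hi : lo) >> (pin & 31)) & 1u) << bit);
}

// readInput() then becomes:
//   dataRead |= pinBit(GPIOLow, GPIOHigh, pin_d0, 0);
//   dataRead |= pinBit(GPIOLow, GPIOHigh, pin_d1, 1);
//   ... and so on for d2..d7.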
The delay aspect would still concern me. This is presumably reading from an external device (an LCD?), and there will be a defined timing between initiating a read and the data being available. You will get away with your approach on a specific hardware platform, but in future it could easily fall down if the clock speed or the external delays change. Dropping to assembler shouldn't be necessary, but your code shouldn't depend on how long it takes to do a dummy read.
As an aside, the original code has instances of this:
if(pin_d1 > 31) {
    b |= (((pins_h >> ((pin_d1 - 32) & 31)) & 1) << 1);
} else if(pin_d1 > -1) {
    b |= (((pins_l >> (pin_d1)) & 1) << 1);
}
Subtracting 32 from the pin value before masking serves no purpose - ((pin_d1 - 32) & 31) is identical to (pin_d1 & 31), since you're only keeping the lower five bits anyway.
If you ensure your pin values are non-negative (which could / should be possible at compile time), then it is also redundant to test against -1.
|
Message Closed
modified 13-Dec-21 21:05pm.
|
Any particular reason you posted this in The Weird and the Wonderful?
What do you get when you cross a joke with a rhetorical question?
The metaphorical solid rear-end expulsions have impacted the metaphorical motorized bladed rotating air movement mechanism.
Do questions with multiple question marks annoy you???
|
Yeah, I just deleted it. I accidentally landed here instead of the Lounge and apparently wasn't paying close enough attention.
Real programmers use butterflies
|
It is according to the Scriptures, which tell us both the language and what we should use it for: Go Forth, and multiply.
|
I always thought that was the polite way of telling someone to f*** off.
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
If you are telling them to “go away”, then you probably do not want THEM to multiply.
Multiply as in procreate.
|
I am dead certain about how I feel: avoid it at all costs. Forth is a write-only language. I almost had that booger flicked on me once, but I dodged it, thankfully.
"They have a consciousness, they have a life, they have a soul! Damn you! Let the rabbits wear glasses! Save our brothers! Can I get an amen?"
|
Agreed. Back in the Long Ago Time, when I was in college, I took the computer graphics sequence and even did an independent study course. All of the programming was in PL/I-80 on Z-80 CP/M machines.
The year after that, some numbnuts in the Computer Science department switched to using Forth. The courses were still taught using the classic Fundamentals of Computer Graphics by J. D. Foley and A. Van Dam, which used a procedural pseudo-code for presenting algorithms.
The pseudo-code made sense when implemented in PL/I. In Forth, not so much.
Software Zen: delete this;
|
That sounds frightening. I cannot imagine a worse language for learning computer graphics than Forth.
I used a variant of PL/I called PL/M on Intel 8085 development systems with ICE many, many years ago. It was serviceable; for simple, senior-level projects, it was enough.
"They have a consciousness, they have a life, they have a soul! Damn you! Let the rabbits wear glasses! Save our brothers! Can I get an amen?"
|
Why worry about the Y10K problem if it is going to happen many centuries after your death? Exactly because you will already be dead: the companies using your software will be stuck with it, without any other coder who knows the system well enough to come in and fix it.
If humanity survives the next 7978 years, is still using something we would recognise as computers, is still running software created today, and hasn't either developed advanced AI capable of fixing this sort of problem, or trained enough people to fix it, then something has gone very wrong.
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
My software is written so well that people (or our lizard overlords...) will still be using it ~8000 years from now.
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
"The future's uncertain and the end is always near." -- Jim Morrison (December 8, 1943 – July 3, 1971)
|
In 1998, a programmer who had been working on Y2K fixes started to get anxious because he couldn't believe how pervasive the problem was. He switched from company to company trying to get away from it, but everywhere he went he became regarded as the Y2K expert and was immediately made team lead for that company's Y2K contingencies. He finally had a nervous breakdown, quit his job, and decided he wanted to be knocked unconscious when Y2K actually came about.
A month before Y2K, he was put into an artificial coma and cooled down to near-cryogenic temperatures on easily sustained long-term life support.
Unfortunately, the life-support notification system had a Y2K bug, and no one revived him for 8,000 years.
Finally he was found and revived. He woke up and saw himself surrounded by lots of glass, light, stainless steel, and tall, beautiful people in white robes. He asked if he was in Heaven.
They replied, "No, this is Chicago. Actually, it's a lot like Heaven to someone like you."
"Someone like me?"
"You are from the 20th century. Many of the problems that existed in your lifetime have been solved for thousands of years. There is no hunger and no disease. There is no scarcity, or strife between races and creeds."
"What year is it now?"
"Yeah, about that - it's the year 9,998. You see, the year 10,000 is coming up, and we understand you know something called COBOL?"
|
You just redeemed yourself!