|
I've got the dead tree version on my shelf. It is a very good book.
|
|
|
|
|
Dave Kreskowiak wrote: I've got the dead tree version on my shelf.
Very cool that you've read it.
I am really enjoying the code samples, because they:
1. touch on very basic but very clear topics
2. are self-contained and don't require a lot to get them compiled
3. touch on important topics like file I/O, simple threading ideas, etc.
Really amazing book so far.
|
|
|
|
|
You can get a list of the compiler Manifest Defines via
echo | cc -dM -E -
In my case, I get 442 #defines for C++, and 380 for C. Interestingly, the C compiler and the pre-processor produce exactly the same manifest defines.
Messing around with different C/C++ standards gives different values for different standard versions, e.g. -std=c89, -std=gnu17, or -std=gnu++11, so it might be worth inspecting to see how you can tell whether you're compiling for C89 or C99, whether you're using clang (#define __clang__), etc.
"A little song, a little dance, a little seltzer down your pants"
Chuckles the clown
|
|
|
|
|
Very cool. Thanks for sharing
|
|
|
|
|
In LVGL you can set an image source to either be a file path or a structure.
lv_obj_t* ui_img = lv_img_create(ui_screen);
lv_img_dsc_t img_dsc;
img_dsc.header.always_zero = 0;
img_dsc.header.cf = LV_IMG_CF_RAW;
img_dsc.header.w = 800;
img_dsc.header.h = 480;
img_dsc.data_size = 800 * 480 * LV_COLOR_DEPTH / 8;
// allocate the frame buffer (here in PSRAM) and zero it before handing it to LVGL
uint8_t* img_mem = (uint8_t*)ps_malloc(img_dsc.data_size);
memset(img_mem, 0, img_dsc.data_size);
img_dsc.data = img_mem;
lv_img_set_src(ui_img, &img_dsc);
That's one option.
Here's another
lv_obj_t* ui_img = lv_img_create(ui_screen);
lv_img_set_src(ui_img,"A:/minou_480.jpg");
The lv_img_set_src() function takes a void* for its second argument and accepts either an lv_img_dsc_t structure or a string!
Worse, there's no lv_img_dsc_init() function to set the struct to a known state (with, for example, a magic cookie in it that could be used to flag it as the structure rather than a string).
Ultimately, here's how it decides:
if(u8_p[0] >= 0x20 && u8_p[0] <= 0x7F) where u8_p[0] is the first byte of the source argument.
This is in battle-tested production code used in many, many devices in the real world.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
img_dsc.header.always_zero = 0; I have no idea of the layout of the struct, but if, for example, that field is at the start, it would discriminate the struct from a valid string, would it not? Then the range check would separate empty strings from real ones.
Software rusts. Simon Stephenson, ca 1994. So does this signature. me, 2012
|
|
|
|
|
It is at the beginning of the struct, and it is used as a discriminator.
And yet it seems extremely accident-prone, and I should add that the docs do not clarify this.
At the very least there should be, IMO, a function to initialize the structure.
Better still, there should actually be two lv_img_set_src() functions: one for paths, and one for lv_img_dsc_t structs.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
Given
#include <string.h>
void swap(char foo[10][20], int i1, int i2)
{
if(i1 != i2)
{
char buff[20];
strcpy(buff, foo[i1]);
strcpy(foo[i1], foo[i2]);
strcpy(foo[i2], buff);
}
}
when compiling with any optimization level above -O0, gcc complains about the second strcpy, saying:
warning: ‘strcpy’ accessing 1 byte at offsets [-4611686018427387904, 4611686018427387903] and [-4611686018427387904, 4611686018427387903] overlaps 1 byte at offset [-4611686018427387904, 199] [-Wrestrict]
clang doesn't complain, even at -O3. I think what gcc is trying to tell me is that there is an issue when i1 == i2, even though there's a test to not swap if i1 == i2! If I replace i1, i2 with integer values (e.g. 1, 2) the warning goes away! Weird.
So far, the only way I've found to stop gcc from issuing the warning is to use memcpy instead of strcpy, even though both have restrict qualifiers on both arguments.
Or maybe I'm missing something?
"A little song, a little dance, a little seltzer down your pants"
Chuckles the clown
|
|
|
|
|
GCC's warnings are often way too strict (and in some cases, outright wrong).
What do you get when you cross a joke with a rhetorical question?
The metaphorical solid rear-end expulsions have impacted the metaphorical motorized bladed rotating air movement mechanism.
Do questions with multiple question marks annoy you???
|
|
|
|
|
Has anyone ever seen a DB stored procedure where the queries weren't written using SQL, but rather using string concatenation to build the queries, and then submitting the queries with the EXEC command?
I can't think of a more horrifically error-prone way of coding in T-SQL. But I see this from a prominent maker of Warehouse Management Systems.
The first time I ever saw that, I was taken aback because I couldn't understand why anyone would do that. But then I saw that it was done so that the queries could be written to target different databases, tables, and columns depending upon the values of local variables.
So even something as weird and wonderful as that has a use. I wonder if any other RDBMS has a more elegant solution?
EDIT:
I realize that the queries could still be written using regular T-SQL, but in a more structured way. But I see that this strange practice was done for the sake of simpler, straight-line code.
The difficult we do right away...
...the impossible takes slightly longer.
modified 30-Jun-24 16:00pm.
|
|
|
|
|
Richard Andrew x64 wrote: So even something as weird and wonderful as that has a use.
I think you are being far too generous and kind.
It's a terrible idea for numerous reasons. I know little about DBs and DBMSs, but there are a couple of things that seem glaringly wrong to me:
1) One of the big ideas of a stored procedure is that it is precompiled. This build-string-then-exec approach would ensure that isn't true. Think about that: a stored procedure is precompiled and knows its "execution path", but in this case that would never be true, so it makes no sense at all for this "dynamic" thing to be a stored procedure.
2) The built-in query analyzer of SQL Server would not know what the execution would be, which probably causes performance issues, along with the inability to know whether it is slow or not, since the query is built on the fly.
I'm going to assume that some dev with little experience got this "genius" idea for how to create dynamic queries and no one ever looked at it because "it works".
SQL Server is an amazing feat of true engineering and will fix things for you, so the dev is probably getting really lucky.
Plus, hardware is probably handling this.
And if the thing ever got serious traffic, it would probably bog down to nothing.
Just another Lucky Dev -- they're 92% of all Devs anyway.
"That's not coding, that's typing."
|
|
|
|
|
raddevus wrote: built in query analyzer of SQL Server would not know what the execution would be & this probably causes performance issues
Actually, the opposite is more likely to be true.
If you have a single query with lots of conditional filtering based on the parameters - eg: (@x Is Null Or T.X = @x) - you'll get one execution plan based on the first set of parameters used, which can be sub-optimal for a different set of parameters.
Having a different query for each set of applied filters can allow the query optimiser to select the "best" execution plan to satisfy that set of filters.
You may end up with some query execution plan cache bloat, and very complicated queries might take slightly longer for the first compilation. But you may still end up with better performance.
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
I have done that very rarely, such as when the name of a table isn't known ahead of time. For instance, writing a procedure which will do a TRUNCATE TABLE but fall back to DELETE if that fails.
Definitely not as a normal course of action.
Further, I have not trusted the input, but rather checked it against sys.objects or a similar table to be sure it is a reasonable value before proceeding.
|
|
|
|
|
They are used where I work. They are usually performing a reporting task. Building it dynamically allows changes to the actual select clause, as well as sorts, groups etc. They are a nightmare to debug...
|
|
|
|
|
Shane0103 wrote: They are a nightmare to debug...
|
|
|
|
|
Leaving aside the performance issues noted by @raddevus, this is a terrible security risk. While the legitimate code uses this e.g. to query the client table, what is stopping malicious code from querying the credit card table?
(Yes, I know that credit card Nos. should not be stored like that, but many databases do in order to provide a rolling subscription to their site.)
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
Richard Andrew x64 wrote: submitting the queries with the EXEC command?
For SQL Server, they should at least be using sp_executesql[^], and passing the parameters as parameters rather than concatenating them into the string.
In some rare situations, it may be worth doing this - for example, if your procedure has a lot of optional filters, building a query that only specifies the ones being used will allow the DBMS to select the most appropriate execution plan for the query. If you put them all in the same query - eg: (@x Is Null Or T.X = @x) - then the execution plan will be selected based on the first set of filters provided, which may be sub-optimal for a different set of filters.
But passing the string to EXEC rather than sp_executesql means they're introducing a SQL Injection[^] vulnerability into the code, which far outweighs any performance benefits.
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
This type of crap used to be all the rage in the late 90s/early 00s. I'm pleased to say I haven't seen abominations like this in the last 20 years or so.
|
|
|
|
|
Well, it's kinda sorta a DIY ORM, isn't it?
Yeah, I've seen that in a bunch of stuff. It'd be better to have all of that live in sprocs, but sometimes there are edicts (all logic must be code-side, none in the DB) that are harder to argue with than to just work around.
|
|
|
|
|
Yeah, seen it too.
In fact, it was the default for a project I worked on.
The idea was that you could filter on something like ten to twenty fields and depending on which fields were set, the string concatenation added fields to the WHERE-clause.
The alternative was something like WHERE (X = @X OR @X IS NULL) AND (Y = @Y OR @Y IS NULL) AND (Z = @Z OR @Z IS NULL) -- Etc.
Nowadays I'd use LINQ to build such a query.
|
|
|
|
|
The IntelliSense for C++ in VS Code, at least when it works, is nothing short of incredible.
template<size_t BitDepth>
using bgrx_pixel = pixel<
channel_traits<channel_name::B,(BitDepth/4)>,
channel_traits<channel_name::G,((BitDepth/4)+(BitDepth%4))>,
channel_traits<channel_name::R,(BitDepth/4)>,
channel_traits<channel_name::nop,(BitDepth/4)>
>;
using rgb18_pixel = pixel<
channel_traits<channel_name::R,6>,
channel_traits<channel_name::nop,2>,
channel_traits<channel_name::G,6>,
channel_traits<channel_name::nop,2>,
channel_traits<channel_name::B,6>,
channel_traits<channel_name::nop,2>
>;
What you're looking at is two arbitrarily defined pixels. One is an N-bit pixel where 3/4 of the bits are used, and the second is a 24-bit pixel where 18 bits are used.
That's not really important, but the channel names are, because consider this:
rgb18_pixel::is_color_model<
channel_name::R,
channel_name::G,
channel_name::B>::value
If you know C++ you can tell there's metaprogramming magic here. What I'm doing is querying a "list" of channel traits at compile time looking for ones with particular names.
The thing is, if you hover over value, the extension will resolve it to true in the tooltip that pops up - no easy feat.
More impressive even is this:
using color18_t = color<rgb18_pixel>;
auto px = color18_t::gray;
It will determine the actual numeric value for that color and display it when you hover over "gray"
(2155905024)
You think that's easy? No.
constexpr static const PixelType gray = convert<source_type,PixelType>(color<PixelType>::source_type(true,
0.501960784313725,
0.501960784313725,
0.501960784313725));
Notice it's running a constexpr function convert() to get the destination pixel format. This is a non-trivial function.
So one of two things is happening here.
Either the C++ extension for VS Code has a compliant C++ compiler front end and middle end built in (I suspect it does), or it is managing to link itself to existing compilers like GCC tightly enough to determine this output (which doesn't seem possible to me).
Either way, go Microsoft.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
Totally agree.
And the Copilot has helped me greatly with boilerplate code.
The difficult we do right away...
...the impossible takes slightly longer.
|
|
|
|
|
I haven't messed with Copilot. I'm "AI" averse, and will probably remain so until they get better. I like Visual Studio's AI integration because it's explicit: you have to smash Tab at each step, and it shows you what it will do next. That's important, because it's so often wrong.
I'm not sure if Copilot works like that or some other way, but honestly, I can pretty much think in C++ at this point, so it's almost more effort to prod an LLM into giving me the code I want. In the time that takes, I could have just written it myself, instead of going from C++ to English and then from English back to C++.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
Yes, that's the same way that Copilot works.
The difficult we do right away...
...the impossible takes slightly longer.
|
|
|
|
|
Interesting, thanks. I might tinker with it, if it's free.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|