|
Could you please provide some example code, or refer me to a beginner's guide (on the Internet), on how to implement the client-side Filter-10-Lowest-Prices button event handler so that it ends up calling a C# method that then modifies the HTML? What are the steps necessary in between?
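For context, here is roughly what I imagine - an untested sketch, assuming a WinForms WebBrowser control (ScriptBridge, FilterTenLowest and the "results" element are names I made up):

using System;
using System.Runtime.InteropServices;
using System.Windows.Forms;

[ComVisible(true)]   // required so the page's JavaScript can see this object
public class ScriptBridge
{
    private readonly WebBrowser browser;
    public ScriptBridge(WebBrowser browser) { this.browser = browser; }

    // Called from the page via window.external.FilterTenLowest()
    public void FilterTenLowest()
    {
        // Modify the loaded HTML through the DOM, e.g. fill a placeholder element.
        HtmlElement target = browser.Document?.GetElementById("results");
        if (target != null)
            target.InnerHtml = "<b>the 10 lowest prices would go here</b>";
    }
}

// Wiring, in the form's constructor:
//   webBrowser1.ObjectForScripting = new ScriptBridge(webBrowser1);
// And in the HTML:
//   <button onclick="window.external.FilterTenLowest()">Filter 10 lowest prices</button>
//   <div id="results"></div>

Is that the right direction?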
|
|
|
|
|
In your case, since Excel runs on both Android and Windows, all you need to support "roaming data" is an Excel spreadsheet with your "100 cars"; anything else would be overkill. Add some VBA to make it a "smart document".
|
|
|
|
|
OK, so you're suggesting I abandon the idea of using a WebBrowser + HTML file and use an Excel sheet instead?
|
|
|
|
|
Based on your requirements, use Excel. Your initial "solution" was premature.
|
|
|
|
|
It seems Excel on Android doesn't support VBA...
|
|
|
|
|
Office Scripts: The future is here
|
|
|
|
|
Please correct me if I'm wrong, but Office Scripts appears to have been introduced in Office 365, and to get Office 365 I would need to sign up for a subscription (I can't buy it once and be done paying Microsoft)?
|
|
|
|
|
This goes beyond "design and architecture". I'll stop now.
|
|
|
|
|
|
Good evening,
This is a very broad question, and if somebody deems it should be moved to another board then please feel free.
I am a FileMaker developer - I love the platform - it has lots of great features... except the price. For me to host a database - for, say, 20 simultaneous clients - it will cost me (well, the client) a fortune. Cost-prohibitive, really, for this setup.
I'd like to see what other people think of various platforms for building databases on a Linux box, with a browser at the front end - suitable for mobile devices too.
As I say, this is a broad question, so I am open to all (polite) suggestions.
I'm no stranger to coding and am able to learn new languages.
Can I get some opinions please?
Thanks in advance.
Greg
|
|
|
|
|
You want something that matches Filemaker, and then some. I suspect few people know what Filemaker does (4GL?). You're better off stating what Filemaker does that you need (without necessarily mentioning the product at all).
What you're describing is a tool stack: like LAMP (Linux, Apache, MySQL, PHP), Windows (.NET), etc.
|
|
|
|
|
Good morning,
My apologies if using a product name breached the rules, but I needed a point of comparison for my requirements.
In short, I need to code a database and front end that works on multiple platforms and can be developed on macOS. I like the look of Python so far; some say tying it in with Flask might be a good combination, but I'm no expert in either.
Cheers,
Greg
|
|
|
|
|
You can mention a product; you just shouldn't assume people know what it does.
Filemaker runs on multiple platforms; it's hard to recommend an alternative when that is your requirement.
Quote: Not only does FileMaker work seamlessly on Macs and PCs, but it is also accessible on iOS and Android mobile devices and all types of Web browsers thanks to the power of FileMaker Go and FileMaker WebDirect.
As for the cost factor: pay me now or pay me later (with time learning a new development platform).
|
|
|
|
|
My SQL Server database insert operation is very slow. I am moving data from stage tables --> temporary tables --> master tables --> data warehouse tables. Loading the data from the stage tables into the master tables is very slow due to primary key and foreign key constraints and a non-clustered index.
I have tried dropping the non-clustered index before loading the data, but it doesn't help. When I try to drop the primary key, I get an issue with the foreign key.
|
|
|
|
|
Be very sure you are only doing an insert, not an update.
We created a stored proc that removed all indexes except the PK on the destination tables, applied the insert, and then rebuilt the indexes (see the sketch below).
CAVEAT: your data MUST be clean, as you lose all referential integrity checking.
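A minimal sketch of that pattern from .NET - table, column, and index names are hypothetical, and the ALTER INDEX DISABLE/REBUILD statements are the part doing the work:

using Microsoft.Data.SqlClient;

static class MasterLoader
{
    // Hypothetical names throughout; the pattern is: disable the non-clustered
    // indexes, do one set-based insert, then rebuild the indexes.
    public static void Load(string connectionString)
    {
        using var conn = new SqlConnection(connectionString);
        conn.Open();

        void Exec(string sql)
        {
            using var cmd = new SqlCommand(sql, conn) { CommandTimeout = 0 };
            cmd.ExecuteNonQuery();
        }

        // Disable the non-clustered index(es); the PK's index stays.
        Exec("ALTER INDEX IX_Master_Price ON dbo.Master DISABLE;");

        // One set-based insert from staging -- no row-by-row lookups.
        Exec(@"INSERT INTO dbo.Master (CarId, Price)
               SELECT CarId, Price FROM dbo.Stage;");

        // Rebuilding re-enables the index.
        Exec("ALTER INDEX IX_Master_Price ON dbo.Master REBUILD;");
    }
}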
|
|
|
|
|
If I drop the non-clustered index, will I get any benefit on insertion? If I find a matching record I update it, otherwise I insert.
|
|
|
|
|
kali siddhu wrote: If I find a matching record I update it, otherwise I insert
Right there is your problem: for every record you are checking for an existing record, and when you do the update the system has to check referential integrity.
Split your data into 2 sets: those to be inserted and those to be updated (see the sketch below). OR delete the records to be updated and insert all records - this may not be viable, as it will break existing RI.
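Reusing the Exec helper from the earlier sketch (names still hypothetical), the split amounts to two set-based statements instead of a per-row existence check:

// One UPDATE for the rows that already exist...
Exec(@"UPDATE m SET m.Price = s.Price
       FROM dbo.Master m
       JOIN dbo.Stage  s ON s.CarId = m.CarId;");

// ...and one INSERT for the rows that don't.
Exec(@"INSERT INTO dbo.Master (CarId, Price)
       SELECT s.CarId, s.Price
       FROM dbo.Stage s
       WHERE NOT EXISTS (SELECT 1 FROM dbo.Master m WHERE m.CarId = s.CarId);");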
|
|
|
|
|
What is the name of the designer program used to make something like Fruity Loops or other multimedia programs? I have the feeling it is not Visual Basic or any other programming language I know.
|
|
|
|
|
I want to create a data notation (like JSON is used for).
1) Is it good enough to use a language's built-in features (types, notation, etc.), extend them (e.g. with additional types), and output some JSON? (See the sketch below.)
2) Or should I build it from scratch, parse all the built-in features plus my additions, and then output it into JSON?
With 1) I don't have to implement the core things. If there are fixes, that's good. If there are changes I don't like, I can deal with them case by case... I guess.
However, I'm tied to a programming language, so users would have to use that programming language (instead of just a library).
With 2) I have to build everything, but I'm not tied to one particular language.
Maybe I can mix them ( 1) for the one language, 2) for other languages).
What are your thoughts on this topic?
PS. I was thinking about using Red ( red-lang.org/ ). It's in alpha, but I don't think it will change a lot.
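To make 1) concrete, here is a sketch in C# (just as an example language; the types and names are made up): built-in types plus one added type, serialized to JSON by a stock library.

using System;
using System.Text.Json;

// A hypothetical "added type" on top of the language's built-ins.
public record Point(double X, double Y);

public static class NotationDemo
{
    public static void Main()
    {
        var data = new
        {
            title = "example",          // built-in string
            count = 3,                  // built-in number
            origin = new Point(0, 0)    // my extension, serialized for free
        };
        Console.WriteLine(JsonSerializer.Serialize(data));
        // prints: {"title":"example","count":3,"origin":{"X":0,"Y":0}}
    }
}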
|
|
|
|
|
Why reinvent the wheel? Both JSON and XML already provide everything you could need, and they are both supported by nearly all major programming languages.
|
|
|
|
|
JSON and XML and YAML and ... isn't the whole bunch of them wheel reinventions? When everybody else is creating new wheels better suited to the purpose than all the old ones, why shouldn't I do the same?
Now I have personally come to one conclusion, in particular from many years of exposure to XML: data description languages are for computers, not for humans. This kind of stuff you, a human, do not handle better than a computer does. You make typos, you do not structure it according to the rules; in brief: you mess it up. So keep humans out of it!
The best way of doing that is to make it unreadable. Binary. I know that is a highly Politically Incorrect statement; yet I think that what humans should not mess up should not be made available for messing up - especially not with as simple a tool as a plain text editor. You can also mess up by using a binary generator (/editor), but that takes a lot more deliberate action. The mess comes from "You asked for it, you got it" - not from "Ooooops!"
So when I need to store data for my own applications (and there are no requirements for sharing the data files with other applications), I do it as binary files. Always in a Tag-Length-Value format, evading all sorts of escape mechanisms. No need to search for the end of the field. Arbitrary binary data. Space allocation for the value can be made before it is actually read. Parsing the file is extremely fast. The space overhead is quite moderate.
Details of how you do the TLV format may vary slightly. E.g. in some applications there will never be more than a couple hundred distinct tags, so the tag is stored in 15 bits; the "sign bit" is a flag indicating that the Value is in fact a sequence of TLV values (sketched below). If values are small, the length is 16 bits, too. If there is any risk at all of overflow, I use the BER style of variable-length handling: the length of the enclosing TLV is 0, each member carries its own length, and the member sequence is terminated by an all-zero TLV. (Then you cannot preallocate space for the entire structure without reading it, but usually a composite value won't be stored as a single unit anyway.)
Like all class definitions have a ToString, they have a ToTLV. And a FromTLV. The "schema" is represented by these ToTLV functions. If any other application needs the data in other formats, adding ToXML, ToJSON, ToYAML, ... alongside ToString and ToTLV is straightforward. But for the application's private file, the binary ToTLV is used.
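A minimal sketch of that 16-bit variant in C# (field widths as described above; tag assignment and error handling are up to the format, and BinaryWriter/BinaryReader fix the byte order to little-endian):

using System.IO;

static class Tlv
{
    // Top bit of the 16-bit tag: "the Value is itself a sequence of TLVs".
    public const ushort ConstructedFlag = 0x8000;

    public static void Write(BinaryWriter w, ushort tag, byte[] value)
    {
        w.Write(tag);                    // 15-bit tag + constructed flag
        w.Write((ushort)value.Length);   // 16-bit length
        w.Write(value);                  // arbitrary binary data, no escaping needed
    }

    public static (ushort Tag, byte[] Value) Read(BinaryReader r)
    {
        ushort tag = r.ReadUInt16();
        ushort len = r.ReadUInt16();     // length is known up front, so space
        return (tag, r.ReadBytes(len));  // can be allocated before reading the value
    }
}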
|
|
|
|
|
My book discusses the advantages of TLV over text-based encodings. The latter typically require more space and processing time. Readability is touted as an advantage of text, but it's often as you say.
Text's advantage is in interoperability between big and little endian systems. If that's a requirement, TLV is a non-starter unless all the fields are the same length. A protocol standard has to consider this, but a proprietary system can standardize on one endianism and use TLV more freely, although it still has to maintain protocol backward compatibility unless it's OK to shut down the entire network during an upgrade.
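For what it's worth, in .NET for example, the byte order the format standardizes on can be made explicit regardless of the host CPU - a small sketch:

using System;
using System.Buffers.Binary;

class EndianDemo
{
    static void Main()
    {
        // The format defines the byte order; the host's endianness never matters.
        byte[] buf = { 0x12, 0x34, 0x56, 0x78 };

        uint asBig    = BinaryPrimitives.ReadUInt32BigEndian(buf);     // 0x12345678
        uint asLittle = BinaryPrimitives.ReadUInt32LittleEndian(buf);  // 0x78563412

        Console.WriteLine($"{asBig:X8} {asLittle:X8}");
    }
}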
|
|
|
|
|
Text representation does not completely evade endianism - at least not with UTF-16!
If you consider UTF-8 an alternative: the multibyte encoding of a code point is just a compression method for an integer. You can use that for the Tag and Length fields as well - it could save you a few bytes when tags are few and lengths short, and it solves endianness equally well for 32-bit tags and lengths as it does for text files. I have been considering this solution, but there hasn't been any need for it yet.
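A sketch of that idea (this is really the common "varint" scheme - 7 payload bits per byte plus a continuation bit - rather than literal UTF-8, but the principle is the same, and it is byte-order independent by construction):

using System.IO;

static class VarInt
{
    public static void Write(Stream s, uint value)
    {
        while (value >= 0x80)
        {
            s.WriteByte((byte)(value | 0x80));  // low 7 bits + "more follows" flag
            value >>= 7;
        }
        s.WriteByte((byte)value);               // final byte, flag clear
    }

    public static uint Read(Stream s)
    {
        uint result = 0;
        int shift = 0;
        int b;
        do
        {
            b = s.ReadByte();                   // a real reader would check for EOF here
            result |= (uint)(b & 0x7F) << shift;
            shift += 7;
        } while ((b & 0x80) != 0);
        return result;
    }
}

Small tags and lengths then cost a single byte; a full 32-bit value costs at most five.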
You obviously still have an endianness issue if the value field contains any binary numeric value at all (including UTF-16 characters). A large group of application/data formats are mainly targeted at user environments where CPUs of one given endianness are dominant. Defining that as The byte order for your format, and clearly indicating to readers/writers of the opposite endianness that they have to flip bytes (some CPUs have special instructions for that!), is, in my opinion, a far better solution than converting everything to text.
Text doesn't solve all format problems either, unless you define one of many alternate formats as The format (analogous to defining the endianness of the format). How do you represent dates? 05/19 is unambiguous (but must be converted to e.g. the ISO standard before being presented to a Norwegian user). A week ago, 05/12, is ambiguous unless the representation is explicitly defined. Time: AM/PM is virtually unknown in many languages/cultures. Numerics: is 1,500 one and a half, or fifteen hundred?
Text: how do you represent characters beyond ASCII? 8859-1? 8859-x, with x specified in metadata? UTF-16? UTF-8? Maybe you will stick to ASCII and use QP, or Base64? HTML character entities? (Named, numeric #, or either?) Backslash escapes? (Hex, decimal, octal, or any of those?) URL percent-encoding? Which characters do not need to be escaped? How are newline and end-of-string represented - is NUL accepted as a fill byte, in accordance with ISO standards?
And so on and so on. Text representation certainly doesn't solve all problems. (I'd say that binary encoding solves more!)
In the days when I was working with ASN.1 and BER, a BER string had to be inspected using a BER reader (which should have access to the ASN.1 to provide symbolic names). The readability was a lot better than with XML! When I went from BER to XML, I considered making a similar XML reader to make it readable; I never got around to doing that.
Today, most systems for displaying plain text have some facilities for improving readability, starting with collapsing inner structures, then highlighting of tags, and so on. You could say that such functions illustrate that the plain text format is not good enough. If I need a display tool that parses and transforms XML or whatever into something readable, it might as well transform some TLV format into something readable.
There is one issue that still remains, though: how self-describing the file should be. TLV tags are usually opaque, just some integer number. When you see an XML "p" tag, you know that it may have to do with a person, a product, a paragraph or something else associated with "p" (usually as the initial letter). At one presentation of handling of arbitrary XML documents, I had a Sami colleague give me Northern Sami terms for chapter, section, picture and so on, for me to use in the examples: the tags were just for illustration (something like Lorem ipsum), but it was difficult for the audience to relate to this as a document.
I made one TLV format a few years ago: the file contained zero or more tag name tables, providing symbolic tags for presentation purposes; each table was headed by a language code. For simplicity, in that format, tags were unique. If partial structures could have had "locally defined" tags (as allowed e.g. in ASN.1), a more complex scheme would be required, easily growing into a complete schema representation. In this case, that would have been overkill; global tags were far easier and fully acceptable.
Such issues do not arise at all with textual tags; they are at least at some level self-describing. But they raise issues of e.g. case significance, allowed character set, and a bunch of other things that a numeric tag evades.
When ASN.1/BER was at war with the other alternatives, the lack of symbolic tag names in BER, mandating that the receiver have access to the ASN.1 schema for interpretation, was one of the strongest criticisms of BER (/DER/CER). Later, we got XML and JSON encoding rules, encoding symbolic names from the ASN.1 schema into the stream, but this was only a half-way solution: matching (and keeping in synchronization) an ASN.1 schema with an XML schema is, for all practical purposes, impossible, certainly over time. So it mostly served as a poor man's BER reader.
I see a lot of areas where computer guys are rather unwilling to seriously assess the commonly used solutions, asking critically if they really are the best. Textual encoding is one of those. We use it because that's the way we do it. Because textual encoding is there, not because it came out with the highest evaluation score. Sure, it is there, we have to accept it when exchanging data with others. But in "local" contexts (such as private files for an application), I tend to use other alternatives.
|
|
|
|
|
Member 7989122 wrote: I see a lot of areas where computer guys are rather unwilling to seriously assess the commonly used solutions, asking critically if they really are the best. Textual encoding is one of those. We use it because that's the way we do it. Because textual encoding is there, not because it came out with the highest evaluation score.
The .NET framework works by default with UTF-16. We need not think about limiting ourselves to ASCII, because we're no longer limited by the space on a floppy. Not much to gain there, and hardly worth the money for the time spent on it.
And no, you don't go back questioning the design of the screws if you're building a car. You take the industry standard, take a brief glance at other screws, and realize there's a reason why it is the current standard.
|
|
|
|
|
Eddy Vluggen wrote: And no, you don't go back questioning the design of the screws if you're building a car. You take the industry standard, take a brief glance at other screws, and realize there's a reason why it is the current standard.
That is certainly true. Sometimes there are reasons for a component design that you do not realize, and if you try to "improve" it, you may be doing the opposite. When a partial solution is given, it is given.
Textual encoding may be that way, in particular when you are exchanging data with others.
But when you are not bound to one specific solution - e.g. you are defining a storage format for the private data of an application, or you have several alternatives to choose from, say 8-bit text is a given but you need to select an escape mechanism either for extended characters or for characters with special semantics - then you should know the pluses and minuses of the alternatives.
"Because we used it in that other product" is not an assessment. Yet I often have the feeling that we are arguing like that. We should spend some of our effort on learning why those other alternatives were developed at all. There must be some reason why someone preferred it another way! Maybe those reasons will pop up in some future situation; then you should not select an inferior solution because "that is what we always do".
What I (optimistically) expect from my colleagues is that they are prepared to relate to the advantages and disadvantages of text and binary encoding. If they are network guys: that they know enough to explain the greatness of IP routing vs. virtual circuit routing, or the advantages of layer-3 routing rather than layer-1 switching. Application developers should relate to explicit heap management vs. automatic garbage collection, use of threads vs. processes, semaphores vs. critical regions. And so on.
Surprisingly often, developers know well the solution they have chosen - but that is the only alternative they know well. They cannot give any well-qualified explanation of why the other alternatives were rejected. I think it is as important (in any field, engineering or otherwise) to be capable of defending the rejection of the other alternatives as it is to defend the selected one. If you cannot, then I get the impression that you have not really considered the alternatives, just ignored them. And that is what worries me.
For UTF-16: yes, that is a given as an internal working format. Yet you should consider what you will be using as an external format: UTF-8 is far more widespread for interchange of text. When is which more appropriate? If you go for UTF-16, will you be prepared to read both big- and little-endian variants, or will you assume that you only exchange files with other .NET-based applications? Will you be prepared to handle characters outside the Basic Multilingual Plane, i.e. with code points above 64Ki?
Even if your response is: We will assume little-endian, we will assume that we never need to handle non-BMP-characters, we will assume that 640K is enough for everyone, these should be deliberate decisions, not made by defaulting.
When Bill Gates was confronted with the 640k-quote, he didn't positively confirm it, but certainly didn't deny it: He might very well have made that remark in the discussion of how to split the available 1 Mbyte among the OS and user processes. Given that 1 MB limit, giving 384 kB to the OS and 640 kB to application code should be a big enough share for the applications, otherwise the OS will be cramped in too little space. 640k is enough for everyone. - In such a context, where the reasoning is explained, the quote suddenly makes a lot more sense. Actually, it is quite reasonable!
That is how I like it: knowing why you make the decisions you do, when there is a decision to make. Part of this is awareness of when there is a decision to make - do not ignore that you actually do have a choice between your default alternative and something else.
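(On the UTF-16 reading question above: in .NET, for instance, a reader can be told to sniff the byte-order mark - a sketch, with a hypothetical file name - but choosing to rely on that is itself one of those deliberate decisions:)

using System;
using System.IO;
using System.Text;

class BomDemo
{
    static void Main()
    {
        // Detects UTF-8, UTF-16 LE and UTF-16 BE from the BOM;
        // the encoding argument is only the fallback when no BOM is present.
        using var reader = new StreamReader("data.txt", Encoding.UTF8,
                                            detectEncodingFromByteOrderMarks: true);
        string text = reader.ReadToEnd();
        Console.WriteLine(reader.CurrentEncoding.WebName);  // what was actually found
    }
}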
|
|
|
|
|