|
Hi, I'm new to communication over the internet. I have experience in VB.NET application development and with embedded devices like the Arduino and Raspberry Pi. I just want to learn how I can transfer data from my .NET application to my embedded device. What I have done before:
1. I registered for free web hosting that supports PHP and an SQL database.
2. I wrote PHP code that receives data from the .NET application and stores it in the SQL database.
3. I developed my .NET application to send the data to the PHP script on the web server.
4. I wrote code for my embedded device to retrieve the data that has been stored in the SQL database.
By doing this, it looks like my application is "talking" to my embedded device. So I would like to know: is there a smarter way I can achieve my objective?
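The store-and-forward setup described in the steps above can be sketched end to end like this. This is a minimal illustration only: Python's standard library stands in for the PHP script and the SQL table, and the endpoint path and in-memory storage are made up for the demo.

```python
# Sketch of the relay pattern: the application POSTs data to a web
# endpoint, the endpoint stores it, and the device polls it back with
# a GET. An in-memory variable stands in for the SQL database.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

latest_data = b""  # stands in for the SQL table


class RelayHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # The application side: store whatever was posted.
        global latest_data
        length = int(self.headers.get("Content-Length", 0))
        latest_data = self.rfile.read(length)
        self.send_response(200)
        self.end_headers()

    def do_GET(self):
        # The device side: hand back the most recently stored value.
        self.send_response(200)
        self.send_header("Content-Length", str(len(latest_data)))
        self.end_headers()
        self.wfile.write(latest_data)

    def log_message(self, *args):
        pass  # keep the demo quiet


server = HTTPServer(("127.0.0.1", 0), RelayHandler)  # 0 = any free port
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/data" % port
# "Application" sends a command, "device" polls it back.
urllib.request.urlopen(url, data=b"LED=ON").read()
polled = urllib.request.urlopen(url).read()
print(polled.decode())  # LED=ON
server.shutdown()
```

The same round trip works with any HTTP-capable client: `HttpClient` on the .NET side and an HTTP library (or raw sockets) on the device side.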
|
|
|
|
|
It largely depends on what interfaces you have available on your embedded device. If it can connect to your PC via serial or USB, then you have serial communication. If via Bluetooth/WiFi/network, then you could use a socket connection.
|
|
|
|
|
Can you define "smarter"? Does that mean sending fewer bits, keeping the connection open for a shorter time, what?
If your communication is message-based, then yes, HTTP would be a nice protocol. It is well defined, documented and tested. If you need to stream, I'd go for a simple socket.
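For reference, the "simple socket" option could look like this minimal sketch. Python's stdlib is used for brevity; the same pattern applies with `TcpListener`/`TcpClient` in .NET or on a WiFi-capable embedded device, and the message contents here are made up.

```python
# Minimal TCP socket sketch: one side listens, the other connects and
# streams bytes. The server echoes back an acknowledgement.
import socket
import threading


def run_server(srv):
    conn, _ = srv.accept()
    with conn:
        data = conn.recv(1024)        # receive one message
        conn.sendall(b"ack:" + data)  # echo back an acknowledgement


srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))            # 0 = pick any free port
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=run_server, args=(srv,), daemon=True).start()

cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"temp=21.5")
reply = cli.recv(1024)
print(reply.decode())                 # ack:temp=21.5
cli.close()
srv.close()
```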
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
Introduction
For a project I'm working on, I have to find a good way to store which pieces of an actor's information are known to another actor, and it's starting to get a bit over my head.
Model-Classes
EntityModel - Base class for all entities
using System;
using System.Xml.Serialization;

namespace Relink.Data.Model {
    [Serializable]
    public class EntityModel {
        [XmlIgnore]
        public Int32 EntityID { get; set; }

        #region XML Wrapper
        // String wrapper so the XmlSerializer can read/write EntityID.
        [XmlElement("EntityID")]
        public String XmlEntityID {
            get {
                return this.EntityID.ToString();
            }
            set {
                // Int32.Parse already throws a descriptive FormatException
                // on bad input; swallowing it and throwing
                // NotImplementedException instead would hide the real error.
                this.EntityID = Int32.Parse(value);
            }
        }
        #endregion
    }
}
ActorModel - Base class for all actors
using Relink.Data.Enum;
using System;
using System.Xml.Serialization;

namespace Relink.Data.Model {
    [Serializable]
    public class ActorModel : EntityModel {
        public String Firstname { get; set; }
        public String Lastname { get; set; }
        public String Nickname { get; set; }

        [XmlIgnore]
        public Gender Gender { get; set; }

        [XmlIgnore]
        public DateTime Birthday { get; set; }

        public String Emailadress { get; set; }
        public String City { get; set; }
        public String Country { get; set; }

        #region XML Wrapper
        // String wrappers so the XmlSerializer can handle the enum and date.
        [XmlElement("Gender")]
        public String XmlGender {
            get {
                return this.Gender.ToString();
            }
            set {
                // Enum.Parse throws an ArgumentException with a useful
                // message on bad input; don't replace it with
                // NotImplementedException.
                this.Gender = (Gender)System.Enum.Parse(typeof(Gender), value);
            }
        }

        [XmlElement("Birthday")]
        public String XmlBirthday {
            get {
                // "o" = round-trip format, so parsing doesn't depend on
                // the current culture's date format.
                return this.Birthday.ToString("o");
            }
            set {
                this.Birthday = DateTime.Parse(value);
            }
        }
        #endregion
    }
}
Description
Basically, I need to store which pieces of information each actor has about another actor, similar to a boiled-down Facebook. The problem is that this has to scale to a couple of hundred actors at once, i.e. it could get out of hand pretty fast.
Question
Is there a decent way to accomplish something like that, or do I have to boil the whole idea down a bit and make it just a "who knows whom" sort of thing?
|
|
|
|
|
Are the ActorModel properties shown above the information in question? If yes, exclusively? If not, what else?
- Sebastian
|
|
|
|
|
The properties within the EntityModel and ActorModel are the information in question. Derived from the ActorModel are several classes with additional information, but for "simplicity" I've omitted those child classes.
I've thought of converting those properties to classes, each containing a List that stores the EntityIDs of the actors who know that specific piece of information, but I'm not sure how this will work out on a larger scale.
|
|
|
|
|
Since the set of access-restricted elements (properties, in this case) is restricted and known at design time, it shouldn't be overly complex. In the database you could represent it as a table with the EntityIDs of the property-owning actor and the property-accessing actor as a compound primary key, plus boolean "is-property-access-allowed" columns. Outside the database you could map that to a dictionary keyed by a Tuple&lt;PropertyOwningActorEntityID, PropertyAccessingActorEntityID&gt; (I would subclass the Tuple class, though, and map the Item1 and Item2 properties to more expressively named ones). Could that be a solution for you?
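The compound-key dictionary suggested above can be sketched language-agnostically; Python is used here for brevity, and all names (`grant`, `may_see`, the property names) are made up for illustration.

```python
# Access map keyed by the (owner, accessor) pair of actor IDs.
# The value records which of the owner's properties the accessor
# may see - the dictionary equivalent of the compound-PK table.
access = {}  # (owner_id, accessor_id) -> set of visible property names


def grant(owner_id, accessor_id, prop):
    """Allow `accessor_id` to see `prop` of `owner_id`."""
    access.setdefault((owner_id, accessor_id), set()).add(prop)


def may_see(owner_id, accessor_id, prop):
    """Check whether the accessor may see the owner's property."""
    return prop in access.get((owner_id, accessor_id), set())


grant(1, 2, "Birthday")             # actor 2 may see actor 1's birthday
print(may_see(1, 2, "Birthday"))    # True
print(may_see(1, 3, "Birthday"))    # False - nothing granted to actor 3
```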
|
|
|
|
|
Well, it really looks like the database approach is the right thing to do, and since I'm very familiar with databases, it shouldn't be a problem to adapt the code.
Thank you.
|
|
|
|
|
I don't have any experience with EF, but it looks like it is going to be serialized to XML?
A couple of hundred records would be very easy with a database. You'd simply have an Actors table, and a table that links an actor to multiple others.
Daniel Lieberwirth (BrainInBlack) wrote: public String XmlBirthday If it is a date, then pass it as a date, and not as a formatted string.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
Hmm, a database never came to mind, but it would be a good solution. I'll have to think about it and let it cook in my head for a while.
Eddy Vluggen wrote: Daniel Lieberwirth (BrainInBlack) wrote: public String XmlBirthday If it is a date, then pass it as a date, and not as a formatted string.
That's in there to avoid problems with XML deserialization.
|
|
|
|
|
If I was going to implement a "who knows who", I would use a "relationship" entity.
Conceptually, this would look like: [Actor]<--->>[Relationship]<<--->[Actor]
The "relationship" entity could include "intersection" data describing the relationship: e.g. father-son; brother-sister; etc.
Physically, this is implemented as a recursive database structure:
+--->[Actor]--->>[Relationship]---+
|                                 |
+---------------------------------+
For example:
Actors
------
ID: 1; Name: Joe
ID: 2; Name: Jane
ID: 3; Name: Billy
ID: 4; Name: Sue
Relationships (ID1; ID2; RelationDesc1; RelationDesc2 ...):
-------------
1; 2; Husband; Wife
1; 3; Father; Son
3; 4; Brother; Sister
etc.
ID1 and ID2 are foreign keys (which could be concatenated to create a unique "relationship" key; or one could use a separate ID, but then one would need to ensure there are no duplicate relationships).
The rest is "intersection data" (i.e. it only has meaning in the context of a relationship).
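The structure above can be sketched concretely with a small database. SQLite is used here purely for illustration; the table and column names follow the example, but are otherwise arbitrary.

```python
# Actor/Relationship tables with a compound primary key, plus a query
# that asks "who is related to Joe, and how?".
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE Actors (ID INTEGER PRIMARY KEY, Name TEXT);
    CREATE TABLE Relationships (
        ID1 INTEGER REFERENCES Actors(ID),
        ID2 INTEGER REFERENCES Actors(ID),
        RelationDesc1 TEXT,
        RelationDesc2 TEXT,
        PRIMARY KEY (ID1, ID2)   -- the concatenated FK pair as unique key
    );
""")
db.executemany("INSERT INTO Actors VALUES (?, ?)",
               [(1, "Joe"), (2, "Jane"), (3, "Billy"), (4, "Sue")])
db.executemany("INSERT INTO Relationships VALUES (?, ?, ?, ?)",
               [(1, 2, "Husband", "Wife"),
                (1, 3, "Father", "Son"),
                (3, 4, "Brother", "Sister")])

# Who is related to Joe, and how?
rows = db.execute("""
    SELECT a2.Name, r.RelationDesc1, r.RelationDesc2
    FROM Relationships r
    JOIN Actors a1 ON a1.ID = r.ID1
    JOIN Actors a2 ON a2.ID = r.ID2
    WHERE a1.Name = 'Joe'
    ORDER BY r.ID2
""").fetchall()
print(rows)  # [('Jane', 'Husband', 'Wife'), ('Billy', 'Father', 'Son')]
```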
|
|
|
|
|
That's actually a pretty good idea, since the initial idea will probably be dropped because it would get too complicated in the context of the game.
Your solution is very simple, but allows for per-RelationshipType access. I.e. husband and wife probably know everything about each other, whereas a colleague probably just knows the name and maybe the birthday.
|
|
|
|
|
Something I forgot to mention...
The implementation of a relationship that I suggested above (i.e. [1; 2; Husband; Wife]) using a single record is somewhat asymmetric.
Although it is a little more work to implement, a "bi-directional" relationship using two records is more symmetrical (and therefore easier to query); e.g.
Actors
------
ID: 1; Name: Joe
ID: 2; Name: Jane
Relationships
-------------
From ID: 1; To ID: 2; Intersection Data: Husband; Wife
From ID: 2: To ID: 1; Intersection Data: Wife; Husband
Both records should have an identifier that marks them as being part of the same relationship; that is, Husband-Wife and Wife-Husband. This makes it easier to maintain in the future (update; delete).
Conceptually, you now have:
[Actor]<---\       /--->[Actor]
   |        \     /        |
   |           X           |
   |        /     \        |
[Wife]-----/       \---[Husband]
And physically:
[Actor]--->>[Relationships]
(It's been a while for me, but this is in fact how IBM's IMS-DB implemented bi-directional relationships).
On the other hand, you could "explode" the single relationship record into 2 records at "run-time" using SQL; if that was easier (probably).
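The "explode at run time" idea can be sketched as follows: keep one stored Relationship record, but query it as two directed rows via a UNION. SQLite is used for illustration; the column names follow the example but are otherwise arbitrary.

```python
# One stored relationship record, exploded into two directed rows
# at query time through a view with UNION ALL.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE Relationships (
        ID1 INTEGER, ID2 INTEGER,
        Desc1 TEXT, Desc2 TEXT,
        PRIMARY KEY (ID1, ID2)
    );
    INSERT INTO Relationships VALUES (1, 2, 'Husband', 'Wife');

    -- Both directions, derived from the single stored record:
    CREATE VIEW DirectedRelationships AS
        SELECT ID1 AS FromID, ID2 AS ToID,
               Desc1 AS FromRole, Desc2 AS ToRole
        FROM Relationships
        UNION ALL
        SELECT ID2, ID1, Desc2, Desc1 FROM Relationships;
""")
rows = db.execute(
    "SELECT * FROM DirectedRelationships ORDER BY FromID").fetchall()
print(rows)  # [(1, 2, 'Husband', 'Wife'), (2, 1, 'Wife', 'Husband')]
```

This keeps storage symmetric-free (one record per relationship) while giving queries the easy bi-directional view.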
modified 5-Mar-15 5:37am.
|
|
|
|
|
Thanks for the hint, but I solved this already.
The model stores both actor IDs separately; I just have to check whether valueA or valueB contains the ID of the current actor. A specific ID for the relation entry is not required.
|
|
|
|
|
Just an update for those who are reading along:
I've ditched the database approach for user maintainability. Everything will be stored within a zip file that can easily be manipulated by the user, and if they screw things up, they can use a backup that is generated alongside the main file.
Performance could be a problem, though, but I'll probably use a one-time load anyway, so the only concern would be memory usage, and that's a problem that can be fixed if it occurs.
Greetings Daniel
|
|
|
|
|
For the generation of primary keys in n-tier applications I only ever see two strategies: either client-side generation of a GUID, or a temporary integer that gets replaced by the DAL, with the DAL reporting back to the client the value that was actually assigned.
My alternative idea is to let the client request 1..n new primary key value(s) from the DAL whenever it needs one/some. This way I wouldn't have to cope with ugly GUIDs (I don't plan for DB merge-ability) and I avoid awkward client logic for replacing temporary primary keys.
Have I simply not yet found the projects that do it this way, or is there some flaw in that strategy that I'm not aware of?
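The proposed strategy can be sketched like this. `KeyService` here merely stands in for the DAL backed by a database sequence, and all names and the block size are made up for illustration.

```python
# Client-side key pool: ask the DAL for a block of fresh keys and
# hand them out locally; only fetch another block when it runs out.
import itertools


class KeyService:
    """Pretend DAL; a DB sequence would back `reserve` in reality."""
    def __init__(self):
        self._seq = itertools.count(1)

    def reserve(self, n):
        """Return n keys that no other client will ever receive."""
        return [next(self._seq) for _ in range(n)]


class ClientKeyPool:
    def __init__(self, service, block_size=100):
        self._service = service
        self._block_size = block_size
        self._pool = []

    def next_key(self):
        if not self._pool:  # ran out: one "remote" call fetches a block
            self._pool = self._service.reserve(self._block_size)
        return self._pool.pop(0)


svc = KeyService()
pool = ClientKeyPool(svc, block_size=3)
keys = [pool.next_key() for _ in range(5)]  # only 2 reserve calls made
print(keys)  # [1, 2, 3, 4, 5]
```

This is essentially the "hi/lo" style of key allocation: the client never invents keys, it only spends keys the database has already promised not to reuse.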
- Sebastian
|
|
|
|
|
I'd say this highly depends on the philosophy of the DAL. For example, why would you need to generate temporary primary keys at all? Let the key be empty until a value is returned from the database.
If the referential information isn't visible from the object hierarchy and you need a value for foreign keys until they are actually created, why not use the hash code in such situations? Even better, the object could return its hash code as a primary key until the actual primary key is generated in the database.
In all cases I'd always let the database generate the surrogate key value, never the DAL layer.
|
|
|
|
|
Thank you for your response Mika!
Mika Wendelius wrote: In all cases I'd always let the database generate the surrogate key value, never the DAL layer. What I meant is that the client sends a request for key provision to the DAL, which then lets the database generate the keys through sequences.
Mycroft Holmes wrote: Listen to Mika, let the database generate the primary keys, you may need to submit in sequence so you get the PK to be used as a foreign key on the children. The application for which I will use the framework I'm designing here first has a use case where usually about 1000 entities plus sub-entities are created. I don't want to make that many remote calls, for performance reasons.
Mika Wendelius wrote: If the referential information isn't seen from the object hierarchy and you need a value for foreign keys until they are actually created Yes, I need some value for foreign keys - object references are "virtual" through keys, to facilitate lazy instantiation and cache expiration/GC. I'm not sure if I'm missing something, but I see some problems with the hash-code approach: in contrast to GUIDs, hash-code collisions are much more likely, and on top of that they could collide with pre-existing keys. A deterministic generation of temporary keys would appear safer to me - but I would like to avoid that altogether.
- Sebastian
edit: typo
modified 26-Feb-15 4:03am.
|
|
|
|
|
manchanx wrote: I will use the framework I'm designing here first has a use case where usually about 1000 entities plus sub-entities are created.
You appear to be suggesting that you are going to construct several thousand valid entities in a client first, before sending them to the back end.
And the question would be: why?
I certainly wouldn't want to create a DAL and just assume that the client is going to send thousands of valid entries to me. That violates the primary purpose of constraints on a database, which is to protect against programmer errors, not user errors.
So you don't save anything in terms of validity.
You will still need to transport those thousands of entities to the back end. If you do it one at a time AND that is a problem, then your solution doesn't address that at all. If you are going to send them as a block, then there would in fact be FEWER calls if you let the DAL handle the IDs, since your DAL should be capable of recognizing dependencies (if nothing else, pseudo-IDs in the block accomplish that).
However, thousands of calls, unless you intend to do that every second, aren't a problem on any effective modern server as long as they are infrequent (of course modern servers can handle that many calls per second, but doing that just to avoid batch handling would be silly).
|
|
|
|
|
Hi jschell, thank you for your response!
jschell wrote: I certainly wouldn't want to create a DAL and just assume that the client is going to send thousands of valid entries to me. That violates the primary purpose of constraints on a database in that it protects from programmer errors not user errors. The entities are fully validated before they're sent to the DAL.
jschell wrote: You will still need to transport those thousands of entities to the back end. If you do it one at a time AND that is problem then your solution doesn't address that at all. No, they're sent in a batch/block.
jschell wrote: If you are going to send them as a block then there would in fact be FEWER calls if you let the DAL handle the ids, since your DAL should be capable of recognizing dependencies (if nothing else pseudo ids in the block accomplish that.) You mean there would be fewer calls because the client wouldn't have to request new keys from the DAL/DB before creating new entities? But the client can request more than one new key at once - if it's clear beforehand how many new entities are to be created, it's just one request; if it's not clear, it would still be considerably fewer than one request per key, because the client can request increasingly more keys as it runs out.
jschell wrote: However thousands of calls, unless you intend to that every second, isn't a problem on any effective modern server as long as it is infrequent (of course modern servers can handle that many calls per second but doing that just to avoid batch handling would be silly.) The application will run in a variety of environments, many of which won't have a very performant server or network. So I want to design it in a way that puts the least stress on either.
|
|
|
|
|
manchanx wrote: The entities are fully validated before they're sent to the DAL.
Again, I would not write a DAL nor a (relational) database that relied solely on a client for validity.
manchanx wrote: But the client can request more than one new key at once
One is more than none.
manchanx wrote: many of which won't have a very performant server or network.
Batching it solves that problem but it doesn't explain why the client needs to provide the ids.
|
|
|
|
|
jschell wrote: One is more than none.
Batching it solves that problem but it doesn't explain why the client needs to provide the ids. I don't think I deserve your impatience, because you could have found the explanation in another post of mine in this thread:
[..] the main point why I need at least some kind of key is that I implement object references "virtually" (I don't know if there's a better term for it): entities don't hold a direct reference to other entities but a key, and on property access the key gets resolved into an object reference - that way I can easily implement lazy/implicit loading and cache expiration.
To put it into a picture: I'm developing a custom ORM, mainly because of one requirement that disqualifies existing ORMs: my users need to be able to extend the model with custom tables and fields (which of course need to be "non-intrusive" to the business logic by being nullable/optional). The first version of the application will only include desktop clients, and those will be rich clients where the part of the ORM that does the record-entity mapping resides in the client. So you could probably say that I split the DAL into tiers. This will probably clear up the following:
jschell wrote: Again I would not write a DAL nor a database (relational) that relied solely on a client for validity. The DAL I've been talking about is essentially the part of the DAL you're thinking of that does the final step of saving the raw records. Any validation you would do in a DAL happens here, in the first layer of the client.
So I need IDs/keys in the client because they're required to resolve references.
|
|
|
|
|
manchanx wrote: I don't think I deserve your impatience with me because you could have found the explanation in another post of me in this thread:..the main point why I need at least some kind of key is that I implement object-references
You do not need IDs from the DAL to solve that, however. All you need is a consistent implementation within each block. You can implement it with nothing more than an incrementing integer.
Once the DAL receives the block, it replaces the references with consistent database ones.
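The replacement step described above can be sketched like this. The entity shape (`id`/`parent` dicts), the negative temporary IDs, and the `save_block` name are all made up for illustration; the point is only the two-pass remapping.

```python
# The client tags entities with throwaway IDs (negative here, so they
# are easy to tell apart from real keys). When the DAL receives the
# block, it assigns real database IDs and rewrites every reference.
def save_block(entities, next_db_id):
    """entities: list of dicts with 'id' and 'parent' (temp ids).
    Returns the block with temp ids replaced by database ids."""
    id_map = {}
    for e in entities:                # first pass: assign real ids
        id_map[e["id"]] = next_db_id
        next_db_id += 1
    for e in entities:                # second pass: fix the references
        e["id"] = id_map[e["id"]]
        if e["parent"] is not None:
            e["parent"] = id_map[e["parent"]]
    return entities


block = [{"id": -1, "parent": None},  # client-side temporary ids
         {"id": -2, "parent": -1},
         {"id": -3, "parent": -1}]
saved = save_block(block, next_db_id=500)
print(saved)
# [{'id': 500, 'parent': None}, {'id': 501, 'parent': 500},
#  {'id': 502, 'parent': 500}]
```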
manchanx wrote: Any validation you would do in a DAL happens here in the first layer of the client.
Sounds like you are pushing work that should be in the DAL to the client.
|
|
|
|
|
jschell wrote: You do not need ids from the DAL to solve that however. All you need is a consistent implementation within each block. You can implement it with nothing more than by incrementing an integer.
Once the DAL receives it, it replaces the references with consistent database ones. I can't see the net benefit of doing that:
1) You said in a previous reply that sending ~1000 requests to the DAL isn't an issue as long as it happens rarely (which I agree with) - but why would you then worry about that single request for new IDs?
2) The coding effort I spend on ID provision for the client, I save in the DAL by not having to replace temporary IDs.
3) Let's take a concrete example: the application will be for library management. On the form intended to loan media to a customer, there might be (for convenience) a button to invoke the use case for adding a new customer. After adding the new customer, he should automatically be selected for loaning media in the first-mentioned form. When working with temporary IDs, the client would have to re-query the customer entity by his external customer number (or whatever). When using DAL-provided new IDs, that's not necessary.
I'm not saying my planned solution is way better than the more conventional solutions, but you haven't yet shown why it would be worse.
|
|
|
|
|
manchanx wrote: I can't see the net benefit of doing that:
As I said, when I create DALs I expect the DAL, not the client, to enforce restrictions. Otherwise there is little point in having an actual DAL.
manchanx wrote: but why would you then worry about that single request for new id's?
Limiting transactions was your requirement not mine.
manchanx wrote: 2) The coding effort I'm doing for id-provision to the client I'm saving in the DAL by not having to replace temporary id's.
However, that solution requires that you now solve a different problem: how to get the IDs from the database.
manchanx wrote: 3) Let's take a concrete example:
I have created many DALs in my lifetime. I was creating them before there was a term for them.
I have created several serialization protocols that required resolving references.
I have used several frameworks that used protocols that required resolving references.
So I believe I understand the problem and its ramifications.
manchanx wrote: but you haven't yet shown why it would be worse.
As I pointed out, the DAL would then rely on the client for valid data. As I said before, consistency verification is something that belongs in the DAL and/or the database. Letting it out of there increases the risk that the verification will be wrong, especially over time.
Fixing stored data that is inconsistent can be difficult, requiring complex programming solutions, and at times it is programmatically impossible to fix and requires manual intervention. (I have had to do all of that at one time or another.)
|
|
|
|