|
|
Thanks, I am finding them interesting, but have to continue tomorrow. It's now 23h40 here and I'm going home.
Calling all South African developers! Your participation in this local dev community will benefit both you and us.
|
|
|
|
|
Does that guy really advocate not using properties?
I think it's a bit bizarre, as I rely on properties for injection via routes other than constructors. Sure, they don't get used much at higher levels, but for primitive types they are essential. Generally, though, I can't agree.
I'm guessing the guy asking the questions is trying to implement something along the same lines as the VS designer, with inherited attributes for defaults; whether or not it's appropriate for his domain is another thing entirely.
-------------------------------
Carrier Bags - 21st Century Tumbleweed.
|
|
|
|
|
Tristan Rhodes wrote: Does that guy really advocate not using properties?
It's slightly more complicated than that. I would start by saying Holub has far more experience and skill than I will ever have, and his writing is far superior to mine, so my attempt to clarify his words seems silly, but here goes.
I work with people who think that just because the code they write has classes in it, they are doing object-oriented programming. Nothing could be further from the truth, and that is Holub's point: just because you write a class doesn't mean you are doing object-oriented "design".
Holub points out that many people are doing just that. They write a class making the attributes private (data hiding), which is a good start, but then they expose them, and worse, "use them" from other classes, thereby reversing the "hidden" aspect of the data.
Again, the point is not "don't use properties"; it's to wake up and understand the point of "data hiding": what the benefits are and why it's a principle of Object Oriented Design. Basically, when you resort to using public properties you give up on that principle of OOD and fall back to procedural design, and along with that decision you incur the technical debt[^] that comes with it. The technical debt is why it's bad; it's not a religious argument, it's a practical one. That's why it doesn't matter whether you or anyone else agrees with Holub. All that matters is reality, and he has a better grip on it, when it comes to software development, than most of us.
Well that's my two cents worth and that's about all it's worth, I doubt it is helpful to you or anyone else.
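To make the data-hiding point concrete, here is a C# sketch of my own (not Holub's code; the Account class and its rules are invented for illustration). The first version leaks its state through a public property, so the business rule migrates to every caller; the second keeps the data hidden and exposes behaviour instead.

```csharp
// Procedural style: data is exposed, so the logic migrates to callers.
class AccountData
{
    private decimal _balance;
    public decimal Balance
    {
        get { return _balance; }
        set { _balance = value; }
    }
}
// Every caller must now know the overdraft rule itself:
//   if (acct.Balance - amount >= 0) acct.Balance -= amount;

// Object-oriented style: data stays hidden, behaviour lives with it.
class Account
{
    private decimal _balance;

    public bool TryWithdraw(decimal amount)
    {
        if (amount <= 0 || amount > _balance)
            return false;          // rule enforced in exactly one place
        _balance -= amount;
        return true;
    }

    public void Deposit(decimal amount)
    {
        if (amount > 0) _balance += amount;
    }
}
```

The second class can change its internal representation, or its overdraft rule, without touching a single caller.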
|
|
|
|
|
OK, I think I get it, but I need some concrete examples.
For instance - the class below has a reasonable use of properties.
class Validator
{
    private bool _isValid;

    public bool IsValid
    {
        get { return _isValid; }
    }
}
However, if you have the following, you are breaking the encapsulation of the IsValid value and moving what should be internal logic outside the class:
class Validator
{
    private bool _isValid;

    public bool IsValid
    {
        get { return _isValid; }
        set { _isValid = value; }
    }
}
(OK, I REALLY had to think for that second example.)
Does that sound about right?
T
-------------------------------
Carrier Bags - 21st Century Tumbleweed.
|
|
|
|
|
Tristan Rhodes wrote: Does that sound about right?
Realize that this issue is about Object Oriented Design. Most of us are students of it for a long time, maybe forever; don't expect to obtain 100% understanding in a few days. In the concrete example you posted, you have focused on the idea that exposing the data through a setter is where the design goes off track. It is an indication of a poor design, but it's not "the" problem; a more complete analysis is required. What is the purpose of the Validator class? Having a class that just determines a Boolean state and then simply exposes that state might indicate a larger design issue. Other elements of the design must be considered to know the answer.
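As one illustration of that kind of analysis (a sketch of my own, assuming the validator checks some string input; the rule shown is hypothetical): instead of exposing an IsValid flag for other classes to read and set, the class can simply answer the question itself.

```csharp
// Sketch: the validation rule lives inside the class. Callers never
// set a flag from outside; they ask the object to validate input.
class EmailValidator
{
    public bool Validate(string input)
    {
        // Deliberately simplistic rule, for illustration only.
        return !string.IsNullOrEmpty(input) && input.Contains("@");
    }
}
```

With this shape there is no state to break encapsulation over at all, which is often the real answer to "should this property have a setter?".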
|
|
|
|
|
I’m seeking ideas for an efficient technique to sequence transactions in a SQLServer table for consumption by a serviced component app-server running on several physical machines. Currently the component app-server is selecting “TOP 1” transaction primary key from the table and inserting it into a logical locking table with an expiration timestamp and the current machine name. The “TOP 1” selection also excludes transactions where the primary key is already in the logical locking table for another physical machine where the expiration date stamp has not yet expired.
The table locks on the "logical locking" table during logical-lock inserts are limiting concurrency, but this is the only method I could think of to ensure each server processes a unique row in the table.
Thanks
|
|
|
|
|
Anubis333 wrote: efficient technique to sequence transactions in a SQLServer table for consumption by a serviced component app-server running on several physical machines.
What if you have already painted yourself into a corner? What problem are you trying to solve? I could guess based on your post, but if wrong, it's a waste of time.
|
|
|
|
|
If it takes a server 60 minutes to process one transaction, I could increase the transactions per hour by running 20 transactions on 20 different servers at the same time. The problem I'm trying to solve is finding an efficient way to allow distributed processing of independent transactions on multiple machines, such that the same transaction is never processed by more than one of the servers. In short, the original question I asked is the problem I'm trying to solve.
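For the record, one common way to hand out unique rows to multiple consumers in SQL Server 2005 and later (a sketch, not a drop-in solution; table and column names are invented) is to claim a row atomically in a single UPDATE, using the READPAST hint so servers skip each other's locked rows instead of blocking on them:

```sql
-- Each app server runs this to claim one unprocessed transaction.
-- READPAST skips rows another server has already locked;
-- UPDLOCK + ROWLOCK keep the claim atomic at row granularity.
UPDATE TOP (1) dbo.TransactionQueue WITH (UPDLOCK, ROWLOCK, READPAST)
SET    Status       = 'Processing',
       ClaimedBy    = @MachineName,
       ClaimedAtUtc = GETUTCDATE()
OUTPUT inserted.TransactionId
WHERE  Status = 'Pending';
```

Because the claim happens in one statement, no separate logical-locking table (and no table lock on it) is needed; a ClaimedAtUtc cutoff can still be added to the WHERE clause to reclaim rows from a crashed server.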
|
|
|
|
|
Anubis333 wrote: I'm trying to find an efficient way
Anubis333 wrote: The table locks on the "logical locking" table during logical lock inserts is limiting concurrency
So there is an efficiency problem with your current solution?
Anubis333 wrote: If it takes a server 60 minutes to process one transaction
It's difficult to believe that the time the database needs to take table locks could significantly slow down a 60-minute transaction.
|
|
|
|
|
The 60-minutes thing was a hypothetical situation to help define what I was trying to accomplish; it had nothing to do with my current solution. I wasn't seeking critiques of my current solution, only ideas for a better or different method of achieving the same goal. As always, I'm getting responses on everything except the question at hand.
The question at hand being, "I’m seeking ideas for an efficient technique to sequence transactions in a SQLServer table for consumption by a serviced component app-server running on several physical machines"
|
|
|
|
|
Hi,
I have a database used for storing information about our machines. I have tables for machine configuration, utilisation history, schedules etc.
I'm currently working on a tool, an extension to the database, where supervisors will be able to open and view the schedule for a specific machine, make changes, and save it.
The problem is that obviously I don't want two different users to be able to open a schedule at the same time (or both could save and overwrite each other's changes), so I need to somehow lock the schedules.
What I've thought of doing is including a Lock table that contains the unique IDs of locked schedules. Before opening a schedule, the lock table is checked: if the schedule is locked, the open is cancelled; if the schedule is free, a lock row is inserted and the user is allowed to open the schedule. The problem is, what happens if a user's PC loses power while they have a schedule open? That would mean the schedule would remain locked forever. So next I considered adding an ExpiryTime to a lock, so that if a user crashed out, the lock would expire after a few minutes. But that means the app would have to constantly refresh the lock to stop it expiring while the user has the schedule open. I'm not too keen on this idea either. What if the user's PC just slowed up and they couldn't refresh the lock in time? I'd then have to handle kicking them out, and they'd lose any changes.
The other problem is that we also have some wireless laptops, so connections to the database are sometimes intermittent. How can I implement this so that if the connection is lost for a minute or two, the user's work isn't?
Is there any standard pattern for implementing this kind of thing? It must be a common requirement. Can anyone suggest any ideas or point me in the right direction. Has anyone implemented this kind of system before and can suggest a proven technique?
Alternatively, is there functionality within SQL Server to implement this kind of thing? I'm not just talking about row locking, as a schedule will consist of many jobs, each consisting of many products and the quantities required of each product, so a "schedule" will span multiple tables.
Thanks
Simon
|
|
|
|
|
Hi there!
First of all, sorry for my limited English; my native language is Spanish.
Well, reading your post carefully, it seems you need to add support for managing locking and concurrency.
(The following fragments were taken from the Microsoft Best Practices series.)
...
Managing Locking and Concurrency
Some applications take the “Last in Wins” approach when it comes to updating data in a database. With the “Last in Wins” approach, the database is updated, and no effort is made to compare updates against the original record, potentially overwriting any changes made by other users since the records were last refreshed. However, at times it is important for the application to determine if the data has been changed since it was initially read, before performing the update.
Data access logic components implement the code to manage locking and concurrency.
There are two ways to manage locking and concurrency:
Pessimistic concurrency. A user who reads a row with the intention of updating it establishes a lock on the row in the data source. No one else can change the row until the user releases the lock.
Optimistic concurrency. A user does not lock a row when reading it. Other users are free to access the row in the meantime. When a user wants to update a row, the application must determine whether another user has changed the row since it was read. Attempting to update a record that has already been changed causes a concurrency violation.
Using Pessimistic Concurrency
Pessimistic concurrency is primarily used in environments where there is heavy contention for data, and where the cost of protecting data through locks is less than the cost of rolling back transactions if concurrency conflicts occur. Pessimistic concurrency is best implemented when lock times will be short, as in programmatic processing of records.
Pessimistic concurrency requires a persistent connection to the database and is not a scalable option when users are interacting with data, because records might be locked for relatively large periods of time.
Using Optimistic Concurrency
Optimistic concurrency is appropriate in environments where there is low contention for data, or where read-only access to data is required.
Optimistic concurrency improves database performance by reducing the amount of locking required, thereby reducing the load on the database server.
Optimistic concurrency is used extensively in .NET to address the needs of mobile and disconnected applications, where locking data rows for prolonged periods of time would be infeasible. Also, maintaining record locks requires a persistent connection to the database server, which is not possible in disconnected applications.
Testing for Optimistic Concurrency Violations
There are several ways to test for optimistic concurrency violations:
Use distributed time stamps. Distributed time stamps are appropriate when reconciliation is not required. Add a time stamp or version column to each table in the database. The time stamp column is returned with any query of the contents of the table. When an update is attempted, the time stamp value in the database is compared to the original time stamp value contained in the modified row. If the values match, the update is performed and the time stamp column is updated with the current time to reflect the update. If the values do not match, an optimistic concurrency violation has occurred.
(end MS Article)
I have used datetimes instead of time stamp or version columns.
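That check can be sketched in T-SQL (table and column names are invented; this uses a datetime column, the same idea works with a rowversion column):

```sql
-- Optimistic update: succeeds only if the row is unchanged since it
-- was read. @OriginalLastModified is the value read with the row.
UPDATE dbo.Schedule
SET    Name         = @NewName,
       LastModified = GETUTCDATE()
WHERE  ScheduleId   = @ScheduleId
  AND  LastModified = @OriginalLastModified;

IF @@ROWCOUNT = 0
    -- Someone else changed the row first: a concurrency violation.
    RAISERROR('Concurrency violation', 16, 1);
```

The application then decides how to reconcile: show the user the newer data, merge, or retry.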
My two cents.
Hope this helps a bit.
cmf a little DBA.
|
|
|
|
|
cmf,
Thanks for that. Do you have a link to where that's from?
What I want to do is pessimistic locking. Optimistic locking is no good, as it would be unacceptable for my users to lose their work if a concurrency violation occurred when they tried to save.
That document says that pessimistic locking is not possible in a disconnected environment (like mine). Unfortunately, what I want is pessimistic locking in a disconnected environment! I think my idea of using an expiry time on the locks should work well enough for my requirements, as I know that although it's disconnected, it will never be a long loss of connection.
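For what it's worth, the expiring-lock ("lease") idea described above can be sketched in T-SQL (table and column names are invented for illustration):

```sql
-- Try to take or renew a lease on a schedule. Succeeds if the
-- schedule is unlocked, the existing lock has expired, or the
-- caller already holds it. The client re-runs this periodically
-- to keep the lease alive while the schedule is open.
BEGIN TRAN;

DELETE FROM dbo.ScheduleLock
WHERE  ScheduleId = @ScheduleId
  AND  (ExpiresUtc < GETUTCDATE() OR OwnerMachine = @Machine);

INSERT INTO dbo.ScheduleLock (ScheduleId, OwnerMachine, ExpiresUtc)
SELECT @ScheduleId, @Machine, DATEADD(MINUTE, 5, GETUTCDATE())
WHERE  NOT EXISTS (SELECT 1
                   FROM dbo.ScheduleLock WITH (UPDLOCK, HOLDLOCK)
                   WHERE ScheduleId = @ScheduleId);

COMMIT;
-- If the INSERT affected 0 rows, another user holds a live lock.
```

A five-minute lease with, say, a one-minute renewal interval also tolerates the brief wireless dropouts mentioned above, since the lease survives short disconnections.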
If anyone can think of better solutions though, I'm open to suggestions.
Simon
|
|
|
|
|
Hi once again...
Sure, you can take a look at
http://msdn2.microsoft.com/en-us/library/ms978496.aspx[^]
It's from:
Building Distributed Applications
Designing Data Tier Components and Passing Data Through Tiers
Patterns & Practices
I also found this source, which may be interesting:
http://www.agiledata.org/essays/concurrencyControl.html[^]
I'm also reading there, to refresh some concepts...
I also see you need to manage a special lock on resources.
Keep in mind that pessimistic locking affects scalability.
But perhaps the solution deserves it.
Hope those readings help a bit.
cmf.
a little DBA !
|
|
|
|
|
Thanks for the help.
I've read those links now, and I've got a clearer understanding of how I can implement a solution that fits my requirements. They really helped.
Thanks a lot.
Simon
|
|
|
|
|
Good, nice to hear that.
|
|
|
|
|
HI Guys -
I'm working on an application that has a number of discrete sections to its user interface. These sections are defined in terms of what they display, what methods can be performed on them, and what events are raised.
Ideally, I'd like to put all the discrete sections into standalone controls, which can be dumped into the application shell and managed using an event-driven observer model.
The problem I am facing is that in some instances I am using Trees, and I'm trying to achieve the following:
* Have a hierarchical instance of objects that represents my data.
* Traverse and manipulate the data tree
* Keep the TreeViews synchronised with this data
* Hide the TreeViews / TreeNodes from any calling code (It's already wrapped in a composite control).
Are there any good patterns to do this? Has anyone encountered this kind of problem before, and have experience solving it?
Cheers
Tris
-------------------------------
Carrier Bags - 21st Century Tumbleweed.
|
|
|
|
|
Tristan Rhodes wrote: Are there any good patterns to do this?
Yes. Most of these patterns can be found in the GoF book. Of course, you should start with an MVC design and then add other patterns into that design, like the Command pattern for example. With these patterns the UI components are completely isolated from the work the application does; they become observers of events triggered by the operations of the system, so they can maintain a valid display to the user.
Does that sound like what you were asking about?
|
|
|
|
|
I think I made a bad decision at the start of the app. As I had no standalone tree structure that raises change events, I implemented a derived tree and handled all the change events via the host form. Unfortunately, now that I have 3 separate views of the same data, it would be more desirable to keep the data separate and give a handle to each of the controls to make them self-synchronising (via a property or something).
Would this be a good way to go? Or should the model be linked to the Observer, and have that keep things synchronised with the views?
Additionally, is it common practice to nest observers? i.e. would you have one observer to keep three tree views and a tree model synchronised, and another observer to manage the tree model plus the rest of the application?
Just playing with ideas on where to go in the future.
Cheers
Tris
-------------------------------
Carrier Bags - 21st Century Tumbleweed.
|
|
|
|
|
Tristan Rhodes wrote: Or should the model be linked to the Observer, and that keep things synchronised with the views?
Do you know MVC? If not start with the Wikipedia page for it. In MVC, Views subscribe to notifications from the Model.
Tristan Rhodes wrote: Additionaly, is it common practice to nest observers? i.e. Would you have an observer to keep three tree views and a tree model synchronised, then another observer to manage the Tree Model + the rest of the application?
Not sure what all that means; perhaps you are overusing "tree". There is no reason to be specific about a user-interface tree control. Just generalize that to Views, period. What each view does to manage any controls it may own is its business.
When the user interacts with the UI, the UI handler passes the message to the Controller in MVC. It is the Controller that coordinates that message into whatever system services pertain to that operation. Any one event may result in Model notification messages being sent. Using that design is how you might synchronize different views. Also other patterns can be used to implement synchronization like an extended Command Pattern to contain command state information.
Does that help?
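A minimal C# sketch of that subscription (class names are illustrative, not from any framework): the model raises an event when it changes, and each view registers a handler so it can refresh itself.

```csharp
using System;

// Model: owns the data and notifies observers when it changes.
class CounterModel
{
    private int _value;
    public event EventHandler Changed;

    public int Value
    {
        get { return _value; }
    }

    public void Increment()
    {
        _value++;
        if (Changed != null) Changed(this, EventArgs.Empty);
    }
}

// View: subscribes to the model and refreshes itself on change.
// A real view would repaint a control here instead of storing an int.
class CounterView
{
    public int Displayed;

    public CounterView(CounterModel model)
    {
        model.Changed += delegate { Displayed = model.Value; };
    }
}
```

Any number of views can subscribe to the one model; the model never knows who is listening.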
|
|
|
|
|
Hi Mike,
Thanks, yes, that helped. I've been doing some more research into MVC, and I guess I originally misinterpreted it. I've now separated the model from the view, and the controller is the host form.
So what i have ended up with is:
* A Generic Tree that raises events on change and can be serialized
* A composite control set that takes an instance of the above Tree and observes it, keeping synchronized. This custom control also raises user events.
* A Controller that handles all events from all controls, and manipulates the Generic Tree; cascading changes to any observing views.
That seems to be a really nice way of doing things, and I'm really happy with it. I feel as though the penny has finally dropped for that particular way of thinking.
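A bare-bones version of such a tree (my own sketch, not Tristan's actual code) might look like this:

```csharp
using System;
using System.Collections.Generic;

// A generic, serializable tree node that raises an event when its
// children change, so any number of views can observe one shared
// data structure and stay synchronised.
[Serializable]
class TreeNode<T>
{
    public T Value;
    private readonly List<TreeNode<T>> _children = new List<TreeNode<T>>();

    // Event subscribers (views) should not be serialized with the data.
    [field: NonSerialized]
    public event EventHandler ChildrenChanged;

    public TreeNode(T value) { Value = value; }

    public IList<TreeNode<T>> Children
    {
        get { return _children.AsReadOnly(); }
    }

    public void Add(TreeNode<T> child)
    {
        _children.Add(child);
        if (ChildrenChanged != null) ChildrenChanged(this, EventArgs.Empty);
    }
}
```

Exposing the children as a read-only list forces all mutations through Add, which is what guarantees every change raises the event.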
Cheers
Tris
Question: Does the fact that the composite control also raises user events change its role in anyway? Or is that perfectly acceptable?
-------------------------------
Carrier Bags - 21st Century Tumbleweed.
|
|
|
|
|
Hello,
For this project I'm going to develop, I have to implement the base structure of a system that holds data taken from 3D files.
The basic structure I've implemented is:
class Point3D
{
    // point data
}
class Line3D
{
    List<Point3D> _points;
    // class methods
}
class Model3D
{
    List<Line3D> _lines;
    // class methods
}
So far this works well, and it suits my needs.
Now I want to derive these classes to allow drawing with OpenGL, because I need some specific fields like "visibility, selected, ...", and that's when I ran into some trouble: when I derive Model3D (GlModel3D), I'm forced to use the class Line3D, and not the derived class (GlLine3D), and the same with Point3D.
Does any of you have any suggestion on how to do this?
Thanks!
|
|
|
|
|
I think you should separate the functionality from the data objects, similar to the way you would separate database records from the actual database provider logic / queries.
Instead, implement a renderer base class that takes the above primitives, then derive that for OpenGL / D3D.
That way you could do:
Renderer r = new D3DRenderer();
r.Render(myModels);
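Fleshed out slightly (a sketch only; class names are illustrative):

```csharp
using System.Collections.Generic;

// The model classes stay free of any rendering concerns.
class Model3D { /* lines, points, ... */ }

// Each renderer implementation knows how to draw the shared
// primitives with its own graphics API.
abstract class Renderer
{
    public abstract void Render(IEnumerable<Model3D> models);
}

class OpenGlRenderer : Renderer
{
    public override void Render(IEnumerable<Model3D> models)
    {
        foreach (Model3D m in models)
        {
            // OpenGL-specific drawing calls would go here.
        }
    }
}
```

This way per-API state like "visibility" or "selected" lives in the renderer (or a small wrapper it keeps per model), and the Point3D/Line3D/Model3D hierarchy never needs API-specific subclasses at all.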
Hope that helps
Tris
-------------------------------
Carrier Bags - 21st Century Tumbleweed.
|
|
|
|
|
Hi, my company is installing the BEA SOA platform, and the legacy platform is DNA. So we have to make calls to the service registry to discover endpoints; the service registry implements UDDI.
I can study the UDDI interfaces and encapsulate the calls, but I'd like to know if there is any ready-to-use component.
Thanks for any tip.
|
|
|
|
|