Before I submit this Tip, I wanted to ask people who have used the CodeDom whether they'd prefer something like this:
var result = CD.Method(typeof(bool), "MoveNextInput");
var input = CD.FieldRef(CD.This, "_input");
var state = CD.FieldRef(CD.This, "_state");
var line = CD.FieldRef(CD.This, "_line");
var column = CD.FieldRef(CD.This, "_column");
var position = CD.FieldRef(CD.This, "_position");
var current = CD.PropRef(input, "Current");
result.Statements.AddRange(new CodeStatement[] {
    CD.If(CD.Invoke(input, "MoveNext"),
        CD.IfElse(CD.NotEq(state, CD.Literal(_BeforeBegin)), new CodeStatement[] {
            CD.Let(position, CD.Add(position, CD.One)),
            CD.IfElse(CD.Eq(CD.Literal('\n'), current), new CodeStatement[] {
                CD.Let(column, CD.One),
                CD.Let(line, CD.Add(line, CD.One))
            },
            CD.IfElse(CD.Eq(CD.Literal('\t'), current), new CodeStatement[] {
                CD.Let(column, CD.Add(column, CD.Literal(_TabWidth)))
            },
            CD.Let(column, CD.Add(column, CD.One))))
        },
        CD.IfElse(CD.Eq(CD.Literal('\n'), current), new CodeStatement[] {
            CD.Let(column, CD.One),
            CD.Let(line, CD.Add(line, CD.One))
        },
        CD.IfElse(CD.Eq(CD.Literal('\t'), current), new CodeStatement[] {
            CD.Let(column, CD.Add(column, CD.Literal(_TabWidth - 1)))
        }))),
        CD.Return(CD.True)),
    CD.Let(state, CD.Literal(_InnerFinished)),
    CD.Return(CD.False)
});
I know it looks like hell, but it's so much less verbose than using the raw object model.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
meh
#SupportHeForShe
Government can give you nothing but what it takes from somebody else. A government big enough to give you everything you want is big enough to take everything you've got, including your freedom.-Ezra Taft Benson
You must accept 1 of 2 basic premises: Either we are alone in the universe or we are not alone. Either way, the implications are staggering!-Wernher von Braun
Excuse me while I spoon my eyes out with a blunt object.
Can you maybe break it up into multiple lines?
Perhaps use multiple descriptive functions to build up the model?
Maybe even split it up in classes?
Something like:
CD.AddStatement();
var ifStatement = CD.CreateIf(CD.NotEq(state, CD.Literal(_BeforeBegin)));
var elseStatement = CD.CreateElse(CD.Whatever);
CD.AddIfElse(ifStatement, elseStatement);
AddSomeStatement(CD);
Or, in a fluent style:
CD = CD.AddStatement();
CD = CD.AddIfElse(ifStatement, elseStatement);
CD = AddSomeStatement(CD);
No idea if that would work, just spewing ideas here.
I did something like that once.
Maybe this will give you your "Eureka!" moment, maybe it won't.
I used to do something like that, and then I ran into issues with it. It either got stale or got confusing.
The thing about CD is that it's actually CodeDomUtility ; I just use a using alias to abbreviate it.
That's all static methods that simply create CodeDom objects. I will eventually make a visitor for examining and modifying existing trees (I've had lots of good luck with that in the past).
My point is that it's not a builder class. If I were to make one of those, I'd probably call it CodeDomBuilder , but there are a lot of state-management problems with a builder over a CodeDom like that.
In the end, I've found it most expedient to declare everything inline, like I did.
As messy as it is, you actually get used to reading it, and it's pretty understandable once you do, as long as you're indenting.
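For anyone who hasn't seen the pattern, the whole trick is a handful of one-line static factories over System.CodeDom, aliased to CD with a using. A minimal sketch of what a few of those helpers might look like (the method names mirror the ones used in this thread, but the real CodeDomUtility implementation may well differ):

```csharp
using System.CodeDom;

// Sketch only: names follow the thread's CD.* calls; not the actual CodeDomUtility source.
public static class CodeDomUtility
{
    // Expression factories
    public static CodeThisReferenceExpression This => new CodeThisReferenceExpression();
    public static CodePrimitiveExpression One => new CodePrimitiveExpression(1);
    public static CodePrimitiveExpression True => new CodePrimitiveExpression(true);
    public static CodePrimitiveExpression Literal(object value) => new CodePrimitiveExpression(value);
    public static CodeFieldReferenceExpression FieldRef(CodeExpression target, string name)
        => new CodeFieldReferenceExpression(target, name);
    public static CodePropertyReferenceExpression PropRef(CodeExpression target, string name)
        => new CodePropertyReferenceExpression(target, name);
    public static CodeBinaryOperatorExpression Eq(CodeExpression left, CodeExpression right)
        => new CodeBinaryOperatorExpression(left, CodeBinaryOperatorType.ValueEquality, right);
    public static CodeBinaryOperatorExpression Add(CodeExpression left, CodeExpression right)
        => new CodeBinaryOperatorExpression(left, CodeBinaryOperatorType.Add, right);

    // Statement factories
    public static CodeAssignStatement Let(CodeExpression target, CodeExpression value)
        => new CodeAssignStatement(target, value);
}
```

Then `using CD = CodeDomUtility;` at the top of the file gives you the terse call sites shown above.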
honey the codewitch wrote: CodeDom users, what's your opinion?
I like it without... it feels more natural.
M.D.V.
If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
var @this = new CodeThisReferenceExpression();
Feels natural?
I prefer CD.This
OK... it looks like the pun was not that obvious.
sorry. i guess i'm slow. =)
CodeDom --> |Coud-Dom| ~= |Con-Dom|
Q: CodeDom users, what's your opinion? --> Condom users, what's your opinion?
A: I like it without... it feels more natural.
I suppose my bad English pronunciation worked against my "obvious" joke.
using the codedom always has me worrying that it will break. =)
See? It was not that difficult.
Please, no "mocking" of the question or questioner.
At several places where I have worked, there were those who made a common practice of using mock data in their unit tests, and those who made a practice of integrating a test database into their unit tests.
The former draw a clear distinction between unit testing and integrated testing. Where there are multi-tiered objects (e.g. controller, services, repositories, etc.) each level gets tested. The theory is that each object is tested independently, so however and by whatever it is used, it will succeed. Integrated testing, with an actual database, is the next step of testing.
The latter see what is being tested as a connected group of systems, so to them integrated testing is part of the process of unit testing. The theory seems to go: Test the outermost connection point (the exposed Web API method) which in turn tests the objects and methods down the stack, as well as the connections between them. That includes using a test database (and test services for 3rd party APIs) so that part of the connected system is tested. The end result they seek is that when the outermost connection point is used, it and everything below it has been tested.
I see a good reason to use either approach, depending on what I am testing. I do have preferences, but I am interested in what this community thinks about using mock data.
Thank you in advance.
MSBassSinger wrote: The end result they seek is that when the outermost connection point is used, it and everything below it has been tested.
That's a shortcut.
You mock so that each thing is tested individually and you get an immediate indication of what in that stack is responsible for the failure. If you take the shortcut, you'll probably spend extra time debugging when a failure eventually occurs.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
IT DEPENDS.
Low-level unit tests can use mock data, and probably should (IMO); high-level unit tests can use mock data or real-world data.
Integration tests should use real-world data, as close as possible to data used by the clients.
I'd rather be phishing!
This is not an either-or situation.
A test that connects to a database is not a unit test by definition because it tests more than a unit.
Unit tests should use mock data.
It's quick, it's easy and you can think of all manner of weird mock data that may be hard to get in or out of a database.
So use unit tests to check whether A + B == C or if the IEmailService.SendMail is called for a (mock) customer who has "InvoiceByEmail" checked (and that it throws if the email address is empty) and that sort of stuff.
If something fails you know either your test or your logic is wrong, but never some third party component.
Use database and service connections for integration tests.
At this point your unit tests should've done their work and you can assume that the code is correct.
If something fails you know there's probably a problem with the connection and you can focus your efforts on finding the problem there.
I once used a library that could mock HTTP requests.
We used it to test whether the correct OData queries would be sent to Microsoft Dynamics CRM (including headers and everything).
It's like unit testing your integration
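To make the IEmailService example concrete, here's a minimal sketch using Moq and xUnit (two common .NET testing libraries; the Customer and InvoiceSender types here are made up for illustration, not from the poster's actual codebase):

```csharp
using System;
using Moq;
using Xunit;

// Hypothetical types for illustration only.
public interface IEmailService { void SendMail(string address, string body); }
public class Customer { public string Email; public bool InvoiceByEmail; }

public class InvoiceSender
{
    private readonly IEmailService _email;
    public InvoiceSender(IEmailService email) { _email = email; }

    public void Invoice(Customer c)
    {
        if (!c.InvoiceByEmail) return;
        if (string.IsNullOrEmpty(c.Email))
            throw new InvalidOperationException("No email address");
        _email.SendMail(c.Email, "Your invoice");
    }
}

public class InvoiceSenderTests
{
    [Fact]
    public void SendsMailWhenInvoiceByEmailChecked()
    {
        var email = new Mock<IEmailService>();
        var sender = new InvoiceSender(email.Object);
        sender.Invoice(new Customer { Email = "a@b.com", InvoiceByEmail = true });
        // Verify the service was called exactly once, with the customer's address.
        email.Verify(e => e.SendMail("a@b.com", It.IsAny<string>()), Times.Once);
    }

    [Fact]
    public void ThrowsWhenEmailEmpty()
    {
        var sender = new InvoiceSender(Mock.Of<IEmailService>());
        Assert.Throws<InvalidOperationException>(
            () => sender.Invoice(new Customer { Email = "", InvoiceByEmail = true }));
    }
}
```

If either test fails, you know it's your logic or your test, never a mail server.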
One advantage to using mock data in unit tests is you can test your code's response to bogus data. With real data that's not always possible. At least, on purpose.
"They have a consciousness, they have a life, they have a soul! Damn you! Let the rabbits wear glasses! Save our brothers! Can I get an amen?"
Mock data, because it's a lot easier to create edge cases, and I've never seen a DB with so-called mock data in it actually not get mangled and abused by other people.
The only exception is a weird middle ground, where you store your mock data in a table, not in code. Meaning, you don't rely on the real data structures, you just use the DB like a flat file to feed all your mock data test cases. Hope that makes sense. So yeah, a flat file would be a middle ground too.
Marc Clifton wrote: I've never seen a DB with so-called mock data in it actually not get mangled and abused by other people.
At a previous job we avoided that issue by having the setup step of the test suite create and initialize a fresh DB for running the unit tests with each execution. Which was great for avoiding DB abuse, but meant the startup time before running any tests was kinda blah.
Did you ever see history portrayed as an old man with a wise brow and pulseless heart, weighing all things in the balance of reason?
Is not rather the genius of history like an eternal, imploring maiden, full of fire, with a burning heart and flaming soul, humanly warm and humanly beautiful?
--Zachris Topelius
Training a telescope on one’s own belly button will only reveal lint. You like that? You go right on staring at it. I prefer looking at galaxies.
-- Sarah Hoyt
The short answer is that ideally you need both unit tests and integration tests.
The unit tests will affirm that a dev has not broken any specific methods within the system and the integration test will affirm the general 'health' of a system.
With unit tests it's sometimes necessary to mock data/functions as if you are just testing one aspect of the system you may need to provide inputs via mocking.
The reason unit tests are useful is that they immediately let a dev know if they have broken something - integration tests can take a long time to run and they are much harder to write.
“That which can be asserted without evidence, can be dismissed without evidence.”
― Christopher Hitchens
Yes & No
We draw a distinction in style of testing: we write tests in an integration style, but not as far as a database. We have a common data access layer and mock out the returned data with test data. Code above the data layer is all tested with concretes, so our tests do end up with lots of data setup (but using Builders/Mothers/Factories we can reuse common data providers across many tests). The tests all run from the highest level possible and pass through all units - we have abstracted away the http/message infrastructure, so our starting units are after the "command" reaches the domain.
In the past we would test each unit individually then moved over to a more concrete implementation testing.
Our conclusion is:
1. We have far fewer tests to maintain
2. Our tests are more resilient to change - e.g. one dependency change/refactoring doesn't result in 45 tests needing to be changed, just an update to a common data provider (our tests focus on behaviours being met), provided the change has no material effect on the expected output.
3. We can test integration of units quicker - e.g. using strategy/command patterns and only testing units that are mocked has bitten us badly, so testing full concrete implementations ensured correctness.
Where we deviate from the above pattern is where our tests need to cover multiple paths in a specific unit (i.e. when the result is null given 3 out of 6 inputs, constructor testing, builder testing, etc.); the cost of testing this from the upper level would have been infeasible. Then we hit that unit directly with tests.
With this approach we have seen our developers making better testing decisions and code evolving better as they are not swamped with updating tests just because they need to do a refactoring for new functionality.
How do you define mock data? We are not allowed to use real-life customer data in our tests. But we sometimes use data from the test system when it is too complex to craft ourselves, for example. However, this has nothing to do with where you store it: hard-coded in your test, in a file, or in a database. For me these are two different aspects.
I suspect anybody who's done a significant amount of automated testing will have experienced the frustration of spending more time maintaining the tests than the code itself. Too many, or too complicated tests can become a burden, so for me it's always a tradeoff between coverage and simplicity.
I've found that testing the full stack with a test DB (what I would call end-to-end tests) gives great test coverage - without these, it's quite possible to have lots of passing unit tests but a system that doesn't actually work when put together. However, maintaining the schema and data in the test DB is an overhead, and worse, end-to-end tests can be brittle and very hard to debug when you get a failure.
On the other hand, I've found that with unit or integration tests to get a useful test often requires quite a bit of mocking, which can quickly get quite complex, and lead to tests that can be... brittle and hard to debug when you get a failure.
Supporting end-to-end tests with good unit/integration tests gives the best of both worlds, but leads to lots of tests and lots of complexity, all of which requires maintenance.
I'm curious: you mentioned that you have preferences, and as you've experienced both strategies in several places, I'd be really interested to know what they are.
Design so you can mock the outside world as far as you can (database, filesystem, the system clock, even), but have some level of integration testing so that you verify the connections between your system and the outside world work.
I've done the 'use a test database' thing as well in the past, and that really wasn't a good thing...
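Mocking "even the system clock" usually just means putting an interface in front of DateTime.UtcNow. A minimal sketch of the idea (IClock, FakeClock, and SessionTimeout are illustrative names I've made up, not from any particular library):

```csharp
using System;

// The one seam: production code asks an IClock for the time, never DateTime directly.
public interface IClock { DateTime UtcNow { get; } }

public sealed class SystemClock : IClock
{
    public DateTime UtcNow => DateTime.UtcNow;
}

// Test fake: time only moves when the test says so.
public sealed class FakeClock : IClock
{
    public DateTime UtcNow { get; set; } = new DateTime(2024, 1, 1, 0, 0, 0, DateTimeKind.Utc);
    public void Advance(TimeSpan by) => UtcNow += by;
}

// Example consumer: a 30-minute timeout that is now trivially testable.
public class SessionTimeout
{
    private readonly IClock _clock;
    private readonly DateTime _start;
    public SessionTimeout(IClock clock) { _clock = clock; _start = clock.UtcNow; }
    public bool Expired => _clock.UtcNow - _start > TimeSpan.FromMinutes(30);
}
```

A test constructs a FakeClock, advances it 31 minutes, and asserts Expired - no Thread.Sleep, no flaky timing.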
Java, Basic, who cares - it's all a bunch of tree-hugging hippy cr*p
We have a really complex database setup (four databases, with almost 1,000 stored procs and hundreds of tables).
We've started redesigning it (and the app that uses it).
Due to the high complexity of the database itself, we will create a staging database that we can pull data from which will exercise specific use/edge cases. When we start a test run, it will copy that staged data into the actual database (in our dev environment, of course). That way, the stored procs in the actual database(s) can be used without any special "testing SQL" in the database itself, short of some stored procs that verify certain data is present.
".45 ACP - because shooting twice is just silly" - JSOP, 2010 ----- You can never have too much ammo - unless you're swimming, or on fire. - JSOP, 2010 ----- When you pry the gun from my cold dead hands, be careful - the barrel will be very hot. - JSOP, 2013
I think this has already been said.
By definition unit tests should be small, focused, fast and should not use external files, databases, rest.. anything like that. If you need to hit the file system create an interface and mock it. Unit tests can run on any machine without configurations, files, database connections .. They can run on DevOps or your choice of cloud with no extra configs. They just work.
Integration tests on the other hand don't have that restriction. For me it's basically a sandbox to make sure I can run that tracer thru the code correctly. Maybe figure out how to do something, benchmarks, etc. They are not run during CI/CD builds. I mark them as ignored just to be sure.
Also.. I keep them in separate assemblies just to be sure of no leaks.
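The "create an interface and mock it" approach for the file system can be as small as this sketch (IFileStore and the class names are made up for illustration):

```csharp
using System.Collections.Generic;
using System.IO;

// Illustrative seam over the file system; real projects often use a library instead.
public interface IFileStore
{
    string ReadAllText(string path);
    void WriteAllText(string path, string contents);
}

// Production implementation: a thin pass-through to System.IO.
public sealed class DiskFileStore : IFileStore
{
    public string ReadAllText(string path) => File.ReadAllText(path);
    public void WriteAllText(string path, string contents) => File.WriteAllText(path, contents);
}

// In-memory fake: unit tests run on any machine with no disk, paths, or configs.
public sealed class InMemoryFileStore : IFileStore
{
    private readonly Dictionary<string, string> _files = new Dictionary<string, string>();
    public string ReadAllText(string path) => _files[path];
    public void WriteAllText(string path, string contents) => _files[path] = contents;
}
```

Unit tests take InMemoryFileStore; only integration tests touch DiskFileStore.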
ed
~"Watch your thoughts; they become your words. Watch your words they become your actions.
Watch your actions; they become your habits. Watch your habits; they become your character.
Watch your character; it becomes your destiny."
-Frank Outlaw.