Mike Lang wrote: My concern is more that some will hear that this is "faster than List"
Article: SlimList uses less memory than List, but it is also slower than List.
If they skip over the first sentence of the conclusion section of the article, that is their fault.
Mike Lang wrote: they now have a larger component that uses more memory
SlimList will use less memory in most cases. I will leave it to the application developers to decide if the few kilobytes that SlimList compiles to are too much. That is not really my concern. Really, though, the article exists to present a concept in a clear and concise manner. The specific code I attach to the article is only of ancillary importance.
Mike Lang wrote: setting the Capacity and adding your items is likely to be more efficient
How about a counter example? Bob starts up a game on his Alienware. He plays with max settings and trounces enemies with his mad skillz, because he is a l337 hax0r. Now Jill starts up that same game on her eMachine. She uses lower settings. Given that her computer has very little memory, she does not fire at enemies as rapidly as Bob, because it slows her computer down to have a bunch of bullets on the screen at once. It turns out that using a bunch of bullets makes the list structures used in the program grow so large that she starts to use virtual memory. Luckily, however, the programmer was kind enough to use data structures that grow based on the need of the user. So, while Jill is not going to get as high a score as Bob, at least she can still play the game, so long as she fires her weapon less often. If the programmer had decided to set the capacity of the data structures involved to the amount expected to be reached by Bob (effectively using a constant and higher amount of memory), then the game might not even be playable by Jill. I made some simplifications in this example, but you get the idea. Dynamic data structures are useful. Different users have different demands.
Counter example 2. Bob is working on a spreadsheet. It is probably the most massive spreadsheet you have ever seen. It has millions of rows in it. Bob is very efficient at his job because he has enough RAM to support such a spreadsheet. Jill, on the other hand, does not have very much RAM. Luckily, she also does not work with very large spreadsheets. Also good for her is that the programmer used a dynamically growing list data structure to store the rows in her spreadsheet. As she adds rows, that list grows larger and larger. If the list had been designed such that the capacity of the list was set to millions of rows to suit Bob's needs, Jill would have been left in the cold using virtual memory, making her spreadsheet application virtually unusable. Here again, you see that list sizes cannot always be predicted and that growing them dynamically is of use.

As far as showing this via a "performance test", I am not going to do that. That dynamic data structures are useful is a basic programming concept that most programmers understand. This is not an article for beginners (I marked it intermediate/advanced), so I'm not going to cover such basic concepts.
Mike Lang wrote: A counter balance in your article about performance testing and the inclusion of tests when it works well and when it does not would get a 5 vote from me.
I feel my article discusses both the pros and the cons of using SlimList very well. It doesn't include performance tests because they are not necessary to convey the concepts, which is the only purpose of the article... not to prove that I'm an excellent programmer who has released a shining product to the world. Performance tests should be done by programmers if they are having performance issues with their applications. I included some graphics to illustrate the concepts presented in the article, and adding charts would just be another visualization technique to present those concepts. That would be redundant and I'm not going to do it because some guy has a fetish for charts.
Mike Lang wrote: would get a 5 vote from me
Think about that. By including a few charts or graphs or whatever, you'd change the vote from 1 to 5? Do you really think my article deserves a 1, or are you just voting that to attempt to strong-arm me into making the changes you think should be done to the article? That is not a proper use of the voting system. You should vote what you think the article deserves; you should not vote an article to get your way.
Visual Studio is an excellent GUIIDE.
aspdotnetdev wrote: Dynamic data structures are useful
Sure they are, just not this one in its current form. This is exactly why List also grows dynamically. List starts relatively small to meet the needs of the average programmer, then grows if it must. Advanced programmers can set an initial size to prevent the resizes.
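For what it's worth, presizing is a one-liner. Here is a minimal sketch (my own example, not from the article) showing that constructing a List with a capacity allocates the backing array once, instead of letting it grow through repeated copy cycles:

```csharp
using System;
using System.Collections.Generic;

class PresizeDemo
{
    static void Main()
    {
        // Allocate the backing array once, instead of letting it
        // grow through 4, 8, 16, ... with a copy at each step.
        var list = new List<int>(1000000);
        Console.WriteLine(list.Capacity); // 1000000
        Console.WriteLine(list.Count);    // 0 -- capacity is reserved, not filled
    }
}
```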
aspdotnetdev wrote: Do you really think my article deserves a 1, or are you just voting that to attempt to strong-arm me into making the changes you think should be done to the article
No, my 1 vote is because that is all the writing is worth when it doesn't explain how well it compares to the alternative(s). Especially now that I have run a performance test (see my other comment), it definitely is not worth a 5. Based on your article, I only expected a minor 10% drop in performance, which would have been acceptable in some scenarios for the memory gain.
aspdotnetdev wrote: Given that her computer has very little memory, she does not fire at enemies as rapidly as Bob, because it slows her computer down to have a bunch of bullets on the screen at once
With SlimList she may not run out of memory, but her frame rate would be so slow due to the performance issues that she would be too frustrated to even make the game worth playing.
Memory and performance go hand in hand. You cannot chase one without consideration for the other. It is fine to trade off one for the other given your situation, but both need to remain in a tolerable range.
If you rewrite this to solve the performance issue to within a tolerable range, I'd be happy to review the project again.
Mike Lang wrote: For advanced programmers they can set an initial size to prevent the resize.
Review my spreadsheet counter example again. This has nothing to do with the programmer's skill.
Mike Lang wrote: No my 1 vote is because that is all the writing is worth when it doesn't explain how well it compares to the alternative(s).
I do explain how well it compares to others. I say the speed performance sucks compared to List, but the peak memory performance is a 3:2 ratio. I even point out specific areas of the SlimList data structure and associated algorithms to explain how they compare to List.
Mike Lang wrote: I only expected a minor 10% drop in performance based on your article
Where do I say that there will be a minor 10% drop in the article? You made that assumption out of nowhere, and you proved yourself incorrect.
Mike Lang wrote: With SlimList she may not run out of memory, but her frame rate would be so slow due to the performance issues
That was just an example I pulled out of my ass. Refer to the spreadsheet example, as that gives a clearer and more valid example.
Mike Lang wrote: If you rewrite this to solve the performance issue to within a tolerable range
You are missing the point of the article. The article never says SlimList will perform fast or that it should be used in a real application in its current state (in fact, the article says the exact opposite, that it should probably not be used in a production application). The point is to show that List isn't the only game in town and that SlimList improves memory usage. You are trying to make this article about something it's not, and I will not modify it because you want SlimList to do something it was never designed to do. Let me make it simple for you:
Premise: Lower memory = good
Observation: SlimList = lower memory than List
Conclusion: SlimList = good for lower memory
Other aspects of the data structure, such as the complexity required to implement it and its CPU usage, are ancillary to the main idea that the memory usage is better than List's.
Visual Studio is an excellent GUIIDE.
I really like the idea - just wanted to clarify something. You said that when a normal List's capacity is exceeded, when it's being copied, it takes up 3 times the memory. This makes sense for value types, since the values are in two places, along with a bunch of default values for the empty spaces.
For reference types, the new elements would be null, not a default value, so I'd expect that they wouldn't take up any memory - in that case, the worst case would be 2x the amount of memory.
Let me know if I'm missing something.
Joe Enos
joe@jtenos.com
Even null values take up space. Reference types work like C++ pointers. They point to the memory address of a value. If you set that reference to null, it is actually setting the address reference to 0. You can think of reference types as integers:
int x = 0;
Just because an integer is set to 0 does not mean it does not take up memory. It still takes up 4 bytes of memory, but each bit in those bytes is set to 0. References work very similarly:
MyClass c = null;
That "c" is taking up 4 bytes because that is the number of bytes it takes to reference a memory address on a 32-bit system. On a 64-bit operating system, that "c" would be taking up 8 bytes, even though it is set to null (all the bits in those 8 bytes would be set to 0). This works the same way with arrays:
MyClass[] cArray = new MyClass[100];
That array will take up 4 * 100 bytes of memory (so, 400 bytes of memory) regardless of what each value is set to. If the values are set to something, then extra memory would be taken up (by the references and by the values to which the references point).
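A quick way to confirm this on your own machine (my sketch, not code from the article) is IntPtr.Size, which reports the pointer width of the current process:

```csharp
using System;

class ReferenceSizeDemo
{
    static void Main()
    {
        // Size of a reference in bytes:
        // 4 in a 32-bit process, 8 in a 64-bit process.
        Console.WriteLine(IntPtr.Size);

        // 100 null references still reserve 100 pointer-sized slots,
        // so this array costs 100 * IntPtr.Size bytes before any
        // MyClass instance is ever allocated.
        object[] slots = new object[100];
        Console.WriteLine(slots.Length * IntPtr.Size);
    }
}
```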
Visual Studio is an excellent GUIIDE.
The approach you chose is really good, but something is confusing me. After all, to store the arrays, you're using a List. The Capacity has a value of 1024; does that mean you can add 1024 arrays to that SlimList? And why do the first and second arrays have the same length? Or maybe I didn't get the "SlimList Uses 2x Memory" part.
TVMU^P[[IGIOQHG^JSH`A#@`RFJ\c^JPL>;"[,*/|+&WLEZGc`AFXc!L
%^]*IRXD#@GKCQ`R\^SF_WcHbORY87֦ʻ6ϣN8ȤBcRAV\Z^&SU~%CSWQ@#2
W_AD`EPABIKRDFVS)EVLQK)JKQUFK[M`UKs*$GwU#QDXBER@CBN%
R0~53%eYrd8mt^7Z6]iTF+(EWfJ9zaK-iTV.C\y<pjxsg-b$f4ia>
-----------------------------------------------
128 bit encrypted signature, crack if you can
I'm a little confused; where did you see the value of 1024? And I'm not using a List to store an array. SlimList internally uses an array of arrays (an array is not the same thing as a System.Collections.Generic.List). The array that stores the other arrays is about 32 in length (plus or minus 1, I forget).
The first and second array have the same length because it enforces the statement that "every array is the same size as the total number of elements before it". Of course, the first array can't satisfy that condition, because there are no elements before it, so I just chose 2 as the size of the first array (made the math make more sense). Since the first array only has two elements, the second array must also have 2 elements. However, the third array is double the size because the total number of elements has increased (2 + 2 = 4). And the fourth array has twice that (2 + 2 + 4 = 8).
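To make that progression concrete, here is a small sketch (my own illustration, not the article's code) that generates the minor array sizes by exactly that rule:

```csharp
using System;

class SizeProgressionDemo
{
    // Each array after the first is as large as the total number of
    // elements that came before it; the first is fixed at 2, which
    // forces the second to be 2 as well, then 4, 8, 16, ...
    public static int[] Sizes(int count)
    {
        int[] sizes = new int[count];
        sizes[0] = 2;
        int total = 2;
        for (int i = 1; i < count; i++)
        {
            sizes[i] = total; // same size as everything before it
            total += sizes[i];
        }
        return sizes;
    }

    static void Main()
    {
        // Prints: 2, 2, 4, 8, 16, 32, 64, 128
        Console.WriteLine(string.Join(", ", Sizes(8)));
    }
}
```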
The "2x Memory" refers to the amount of memory used when the SlimList has to increase its capacity. Since it creates an array that is the same size as the existing elements, half of the elements will be used and half will be empty. This means that 2x the required amount of memory is used. With a List, this value is 3x because it creates an array that is twice the size of the existing elements. That is only temporary, as it copies elements from the old array to the new array (after which the value drops back to 2x), but it is still an unnecessary waste that could cause problems on low memory systems.
Visual Studio is an excellent GUIIDE.
aspdotnetdev wrote: The first and second array have the same length because it enforces the statement that "every array is the same size as the total number of elements before it". Of course, the first array can't satisfy that condition, because there are no elements before it, so I just chose 2 as the size of the first array (made the math make more sense). Since the first array only has two elements, the second array must also have 2 elements. However, the third array is double the size because the total number of elements has increased (2 + 2 = 4). And the fourth array has twice that (2 + 2 + 4 = 8).
This is much better than what you wrote above.
By the way, what do you think about this? Just a thought:
values[0] = new int[1];
values[1] = new int[values[0].Length * 2];
values[2] = new int[values[1].Length * 2];
values[3] = new int[values[2].Length * 2];
values[4] = new int[values[3].Length * 2];
values[5] = new int[values[4].Length * 2];
values[6] = new int[values[5].Length * 2];
...
values[31] = new int[values[30].Length * 2];
Xmen W.K. wrote: what do you thing about this ?
I throw an exception (although I'm not sure if I handle that properly in the case that the capacity is set too large in the constructor). Not really much else to do when the maximum capacity of the list has been exceeded. I always get an out of memory exception before I can test that though... wonder why.
Xmen W.K. wrote: This is much better than you wrote above
Yeah, I didn't really feel that specific detail deserved enough attention to take up room in the article (the important part is that the arrays double in size most of the time). I typically write articles for understandability and so the largest number of people will read it, part of which is compactness. If an article is too long, people tend to avoid reading it (I know this because I do the same thing).
By the way, you've made some very detailed remarks. I'm glad to see at least one person read my article
Visual Studio is an excellent GUIIDE.
aspdotnetdev wrote: I throw an exception (although I'm not sure if I handle that properly in the case that the capacity is set too large in the constructor). Not really much else to do when the maximum capacity of the list has been exceeded. I always get an out of memory exception before I can test that though... wonder why.
Nah nah, you got me wrong... I was just showing an example of using a size of 1 for the first array. I really don't get why you are using a size of 2 for the first one.
aspdotnetdev wrote: If an article is too long, people tend to avoid reading it (I know this because I do the same thing).
Exactly...so do I
Xmen W.K. wrote: nah nah, you got me wrong
Indeed I did. The calculations made more sense with a size of 2. If I had made the first array have a size of 1, I would have had to shift the base 2 log calculation (i.e., I would have to add or subtract from the index before performing the log calculation in order to get the correct index into the "major" array). I figured it would be easier to use a size of 2 rather than add complexity (and CPU usage) to the calculation that is used for every index operation.
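As a sketch of why size 2 keeps the math clean (this is my own illustration with hypothetical names, not the article's actual code), the major/minor indices fall straight out of an integer log base 2:

```csharp
using System;

class IndexMathSketch
{
    // With minor arrays sized 2, 2, 4, 8, 16, ..., element i lives at:
    //   major = 0 and minor = i                        for i < 2
    //   major = floor(log2(i)), minor = i - 2^major    otherwise
    public static void Locate(int i, out int major, out int minor)
    {
        if (i < 2) { major = 0; minor = i; return; }
        // Count binary digits instead of calling Math.Log; this is the
        // integer log2 that a BSR instruction computes directly, and it
        // avoids floating-point rounding at exact powers of two.
        major = 0;
        for (int v = i; v > 1; v >>= 1) major++;
        minor = i - (1 << major);
    }

    static void Main()
    {
        Locate(9, out int major, out int minor);
        // Element 9 lands in the fourth array (index 3), slot 1,
        // since arrays 0..2 cover indices 0..7.
        Console.WriteLine($"element 9 -> array {major}, slot {minor}");
    }
}
```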
Visual Studio is an excellent GUIIDE.
Oh, you must have seen 1024 in that screenshot of the Visual Studio debugger. That was just a sample of what the capacity happened to be of that SlimList at that time (when I paused the debugger). The capacity of a SlimList is just like the capacity of a List... it refers to the total number of elements that can be stored in the SlimList at any given point in time. Once the count exceeds the capacity, the capacity doubles.
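You can watch List do the same doubling yourself. This is a quick sketch of my own (the exact capacity values are an implementation detail of the runtime, so treat the 4, 8, 16, ... progression as typical rather than guaranteed):

```csharp
using System;
using System.Collections.Generic;

class CapacityDemo
{
    static void Main()
    {
        var list = new List<int>();
        int lastCapacity = -1;
        for (int i = 0; i < 100; i++)
        {
            list.Add(i);
            if (list.Capacity != lastCapacity)
            {
                // Typically prints capacities 4, 8, 16, 32, 64, 128:
                // each time Count exceeds Capacity, Capacity doubles.
                Console.WriteLine($"Count={list.Count}, Capacity={list.Capacity}");
                lastCapacity = list.Capacity;
            }
        }
    }
}
```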
Visual Studio is an excellent GUIIDE.
Statistical charts of concrete examples of usage of SlimList would be really helpful.
Let's say, 100, 1000, 10000 items; integer or complex classes.
Best regards,
Jaime.
The article has a section called "Qualitative Comparison of SlimList and List". I purposefully tacked on the "qualitative" to the front of that because I wanted to be clear that I was not going to provide a quantitative analysis (or, as you call it, "statistics"). Note that the first sentence in that section says "It is not really worthwhile for me to do a quantitative analysis of the SlimList performance, as this article is purely theoretical." I state in the article that SlimList has the same performance as List, with an extra factor added on. For example, getting element at index 5 might take 2.45 * O(1) for List and 15.2 * O(1) for SlimList. Those specific factors (e.g., 2.45 and 15.2) are not important, because the purpose of the article is to present the memory savings that SlimList provides, not any speed enhancements (or, in this case, reductions). I concede in the article that SlimList is slower than List, but I do not care to experiment with exactly how much slower. Those statistics would not be of much use in the long run anyway, considering my architecture is not going to match somebody else's architecture and considering my implementation is not optimal (I mention I use Math.Log rather than BSR in the article too).
Thanks for your suggestion, but if you actually plan on using SlimList for your purposes and you care about speed, this may not be the data structure for you. Just keep in mind the speed will be comparable to List (with that extra factor I was talking about).
Visual Studio is an excellent GUIIDE.
It's a clear article, so you got my 5. It's clear to me now that MS created a thing with a really stupid copy action (bizarre and unexpected). I see how you improved on that. But what I don't understand is (though I don't work in your language): why not use a dynamically growing array? What's the problem with that?
Rozis
I assume you mean to ask why I do not use Array.Resize. Well, calling Array.Resize actually just creates a new array of the desired size and copies elements over to the new array from the old array (this is pretty much what List does). Or so I've read online. Otherwise, a List is basically a dynamically growing array, as is a SlimList. At a high level, they do the same thing; it is only in the implementation details and performance that they differ.
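A minimal demonstration (my sketch, not the article's code) that Array.Resize allocates a new array rather than growing the old one in place:

```csharp
using System;

class ResizeDemo
{
    static void Main()
    {
        int[] a = { 1, 2, 3 };
        int[] before = a;

        // Array.Resize allocates a new array of the requested size,
        // copies the old elements into it, and repoints the ref variable.
        Array.Resize(ref a, 10);

        Console.WriteLine(ReferenceEquals(before, a)); // False
        Console.WriteLine(a.Length);                   // 10
        Console.WriteLine(a[0]);                       // 1 (copied over)
    }
}
```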
Visual Studio is an excellent GUIIDE.
aspdotnetdev wrote: Well, calling Array.Resize actually just creates a new array of the desired size and copies elements over to the new array from the old array (this is pretty much what List does).
Pff, no wonder it's so slow... Your approach of avoiding the copy action is the right one, but if you could find a way to make an array bigger without the copying, you could also get rid of the multi-array overhead. But I guess that is not possible in your programming language...(?)
Rethinking your article, your solution is a way to mimic a truly dynamic array (an array that can grow without first copying previously stored items) in a programming environment that does not support this concept. Good work!
Theoretically you could run into trouble if the number of elements is bigger than your major array can hold (but 2 to the power of 31 is very much). On the other hand, if the number of elements is low, you reserve more space in the major array than needed, but that is also trivial.
Then, if I have 9 elements, space is actually reserved for 16 (2, 2, 4, 8). If there were a possibility to set the increment factor (in your case 2) to, for example, 1.5, you could win some memory...
I'm just searching for further improvements...
Rozis
Yeah, you could call it a dynamic array (it grows without copying elements).
The size of the array is limited based on the size of the int data type. int can store about 4 billion numbers. Since half of those are negative, that makes about 2 billion you can use. My particular implementation doesn't worry too much about edge conditions and, because of that, this number is further halved so that only about a billion elements can be stored in the SlimList. Keep in mind that List can only use about 2 billion elements, because it too uses int indexing. However, that large of a list should suffice for most purposes. If necessary, a version could be made that uses the long data type rather than the int data type. This would allow the SlimList to hold roughly 2^62 elements, which is far more than anybody's going to need for some time. The "major" array would then have to be increased to about 63 in that case. Naturally, I could store that "major" array as a List instead and that would remove some of the overhead, but I didn't feel like doing the extra work.
Yeah, the increment factor could be lowered, but that would cost you. For one, the "major" array would grow as the factor shrank, which would increase the overhead. As you approach 1, you end up where you started... your overhead array could end up being several billion elements long. Also, 2 is a good number because optimizations can be made to count binary digits rather than performing an actual log base 2 calculation. Counting binary digits on the processor is MUCH faster than a double precision log calculation.
Visual Studio is an excellent GUIIDE.
Rozis wrote: But I guess it is not possible in your programming language...(?)
Just wanted to point out that no language actually supports truly dynamic arrays. Languages like VB allow you to use operators to "change" the size of an array, but all it really does is use the create-and-copy method that Array.Resize provides. This is due to the nature of arrays and how memory is allocated; an array must be a consecutive block of memory for the indexing to work. If even a single byte following the array has been allocated, resizing the array becomes impossible without moving the entire thing to a new location (and since you cannot just move it, you need to create a new one at a new location and copy the elements).
In another comment, aspdotnetdev stated that he receives an out of memory exception before he can reach the maximum theoretical capacity of SlimList; that is most likely a result of the system not being able to find a large enough block of free space to allocate one of the minor arrays. In other words, even if you have a gig of free memory, the largest array you can allocate is limited to the largest consecutive block of unallocated memory within that gig (which might only be a few megs due to memory fragmentation). Trying to allocate anything larger results in an out of memory exception even though there is plenty of memory available (it's a poorly named exception for situations like this).
Sorry if my explanation is a little lacking, I'm really tired and having trouble forming sentences . You might want to google for more on the subject. As for the article, it's an interesting piece of work and it'll be getting my 5. Oh, and I'd suggest reducing the length of the major array (maybe to 16, since the memory required for the minor arrays at that point probably won't be available on most modern machines) and using a linked list approach to chain an unlimited supply of major arrays onto each other. That would get rid of the theoretical limit on capacity at the cost of indexing performance after the first 16^2 elements... or something like that...
Thanks for explaining all that to Rozis for me... I didn't really feel like getting into that level of detail.
Just thought I'd mention a few things about your above comments (you've gone through all the work of writing them, so I thought the least I could do was proofread them).
fire_birdie wrote: it'll be getting my 5
Thanks. That ought to help offset that person who voted a 1 and the other who voted a 3.
fire_birdie wrote: I'd suggest reducing the length of the major array (maybe to 16...
That would make the maximum capacity around 128KB. Around 30 works well because that limits the capacity to around a gig.
fire_birdie wrote: no language actually supports truly dynamic arrays
That may be true, although truly dynamic arrays may be possible to a certain extent. Supposing there is free memory contiguous to the array, the array could be expanded without much trouble. I'm not sure how feasible this is, but it could work very well for small arrays. And it might be possible to set up guards against fragmentation, such as spacing out arrays to give them space to grow. You might even have developers supply "potential capacities" to give the runtime an idea of how large the array might grow to. This could assist the memory organizer (for lack of a better term) in deciding how much free space to leave in front of the array. But that's all theoretical... not sure if any language actually does that. You would probably have to write an operating system that supports such a model, as I think that is usually where the memory allocation is done.
fire_birdie wrote: using a linked list approach to chain an unlimited supply of major arrays
That would slow it down to a different level of big-O notation. Indexing would then potentially take O(N) rather than O(1), because reaching an arbitrary link in a linked list requires a linear traversal. I was, however, thinking of using a List. That would remove the overhead and would not slow things down much (it would remain O(1), amortized).
fire_birdie wrote: 16^2 elements
16^2=256. I'm guessing you meant 2^16? Anyway, I imagine you were getting at what I explained above about the degraded indexing performance.
Thanks again for the very detailed comments!
Visual Studio is an excellent GUIIDE.
aspdotnetdev wrote: That ought to help offset that person who voted a 1
Don't worry, I voted against that guy's comment. It's sad to see that people don't appreciate the amount of effort authors put into writing the articles in the first place.
aspdotnetdev wrote: That would make the maximum capacity around 128KB. Around 30 works well because that limits the capacity to around a gig.
Heh, yeah, I'll admit to just halving the value you used. Problem with 30-something is available memory; try running a test to see when that out of memory exception gets thrown, my bet is somewhere in the mid-20s.
aspdotnetdev wrote: Might even have developers supply "potential capacities" to give the runtime an idea of how large the array might grow to.
Think about that statement carefully. Essentially it'd be the same as allocating the "potential" capacity straight off the bat since the runtime would have to keep n bytes free after the array anyway, it just wouldn't be accessible until you ask the runtime to expand the array.
aspdotnetdev wrote: That would slow it down to a different level of big-O notation.
Sad but true. Although the way I see it, SlimList is more a solution for memory usage than it is for performance. So now the real question is, which problem are you trying to get around? It'd be nice to have both, but that's just not how these things work...
aspdotnetdev wrote: 16^2=256. I'm guessing you meant 2^16?
LOL, yes, I think that's what I was aiming at... I did mention I'm tired, right?
Keep up the good work.
birdie
fire_birdie wrote: Its sad to see people don't appreciate the amount of effort authors put into writing
Indeed.
fire_birdie wrote: see when that out of memory exception gets thrown
I was thinking that could be addressed by partitioning the minor arrays into smaller arrays (say, each of 1MB). That way, the fragmentation problem is circumvented. However, I didn't feel like implementing it. And that would lower performance a little more (a third index operation). Perhaps I'll add it to the bottom of the article.
fire_birdie wrote: the runtime would have to keep n bytes free after the array
What I was envisioning was a "try to keep n bytes free" scenario, not a "must keep n bytes free" scenario. That way, the memory can still be used, but the memory manager tries its very hardest to keep that space available in case the array gets bigger. However, one would then have to consider what effect such an algorithm would have on performance. Still, things could be optimized... for example, only arrays with potential capacities greater than 1MB might participate in this memory management, meaning any hit they take on performance would be amortized over the size of the array. Anyway, this is a whole other topic that a friggin' PhD could be targeted toward, so I think I'll end the discussion of that right there, if you don't mind
fire_birdie wrote: which problem are you trying to get around
Memory was the primary goal with this data structure. But I found increasing the O(...) time to be unacceptable. Increasing it by some constant factor didn't seem too bad, but I didn't want to degrade performance beyond that. I did the same thing with the StringBuilderPlus article I wrote. I wanted to add functionality without hurting performance (which is the reason I shied away from string insert operations).
fire_birdie wrote: Keep up the good work
Thank you, I shall try.
Visual Studio is an excellent GUIIDE.
fire_birdie wrote: Don't worry, I voted against that guy's comment. Its sad to see people don't appreciate the amount of effort authors put into writing the articles in the first place.
To date, I've written one more article on codeproject than this author. You have no grounds to say that I don't know what it takes to write an article. I don't see any articles by you on this site either. Do YOU know what it takes to write an article?
I've received bogus and/or unexplained 1 votes on my articles also. Just deal with the fact that different people rate articles with differing valuing systems. At least I gave a reason for my 1 vote, and I told the author what he could do to change my vote.
Again, you use a fallacy, in this case the ad hominem argument. Just because somebody has not made an article (or has not created as many articles as you), does not mean they do not know that your vote of 1 was completely preposterous.
Visual Studio is an excellent GUIIDE.
You miss my point. My point is that I DO know what it takes to write an article. He has no grounds to say that I do not. The extra point that he likely does not know what it takes to write an article was purely bonus to my argument.
You see, I like to show balance by taking as many viewpoints as possible, rather than focusing on a single side of the argument.