Try these examples yourself in repl.it.
First, there is the ambiguity of what something is. For example, consider this Python snippet:
> a=[]
We have no idea what "a" is an array of. Now, many people will say it doesn't matter, you're not supposed to know, you just operate on the array. There is a certain point to that, but it can lead to trouble.
Let's try this:
> a=[1, 2, 3]
Ah, so you think we have an array of integers? Think again:
> a.append('4')
> a
[1, 2, 3, '4']
Whoa! That's an array of mixed types. In some ways that's cool; in other ways, that's dangerous. Let's say we want to add one to each element in the array, and we trust that the programmer who created or modified the array knows that it is supposed to be an array of ints. But how would they know? Someone else can come along and not realize that they're appending a string to the array. So now we come along, expecting a happy array of ints, and do this:
> [x+1 for x in a]
TypeError: cannot concatenate 'str' and 'int' objects
Oops - we get a runtime error!
What happens in Ruby:
> a=[1, 2, 3, '4']
[1, 2, 3, "4"]
> a.map {|x| x+1}
no implicit conversion of Fixnum into String
What happens in Javascript:
> a=[1, 2, 3, '4']
[1, 2, 3, '4']
> a.map(function(x) {return x+1})
[2, 3, 4, '41']
Holy Cow, Batman! In JavaScript, the string element is concatenated!
What does this mean?
It means that, among other things, the programmer must code defensively, not necessarily against the errors (sorry, I meant "usage") of other programmers, but certainly against the lack of strong typing in the language. Consider these "solutions":
Python
> [int(x)+1 for x in a]
[2, 3, 4, 5]
Ruby
> a.map {|x| x.to_i + 1}
[2, 3, 4, 5]
JavaScript
> a.map(function(x) {return parseInt(x)+1})
[ 2, 3, 4, 5 ]
Of course, if you have a floating-point number in the array, it'll be truncated to an integer, possibly an unintended side effect.
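For example, a minimal sketch in Python (Ruby's to_i and JavaScript's parseInt truncate the same way):
> a = [1, 2.5, '4']
> [int(x)+1 for x in a]
[2, 3, 5]
The 2.5 silently becomes 3 rather than 3.5.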
Another "stronger" option is to create a class specifically for integer arrays:
Python
class IntArray(object):
    def __init__(self, arry=None):
        # Avoid the mutable-default-argument pitfall of "arry=[]",
        # which would share a single list across instances.
        arry = arry if arry is not None else []
        self._verifyElementsAreInts(arry)
        self.arry = arry

    def __add__(self, n):
        self._verify(n)
        self.arry.append(n)
        return self

    def __sub__(self, n):
        self._verify(n)
        self.arry.remove(n)
        return self

    def _verifyElementsAreInts(self, arry):
        for e in arry:
            self._verify(e)

    def _verify(self, e):
        if not isinstance(e, int):
            raise Exception("Array must contain only integers.")
a = IntArray([1, 2, 3])
a += 4
print(a.arry)
a -= 4
print(a.arry)

try:
    a += '4'
except Exception as e:
    print(str(e))

try:
    IntArray([1, 2, 3, '4'])
except Exception as e:
    print(str(e))
With the results:
[1, 2, 3, 4]
[1, 2, 3]
Array must contain only integers.
Array must contain only integers.
What this accomplishes is:
- Creating a type-checking system that a strongly typed language gives you for free at compile time
- Imposing a specific way for programmers to add and remove items from the array (what about inserting at a specific point?)
- Not actually preventing the programmer from manipulating arry directly, at least in Python, which has no way of designating member attributes as protected or private (see the sketch after this list).
- JavaScript? It doesn't have classes, unless you are using ECMAScript 6, in which case classes are syntactical sugar over JavaScript's existing prototype-based inheritance.
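To see that last Python point in action, here is a minimal sketch (using the IntArray class above) showing that nothing stops direct access to the underlying list:
> a = IntArray([1, 2, 3])
> a.arry.append('4')
> a.arry
[1, 2, 3, '4']
The append bypasses _verify entirely, so our "integer array" once again holds a string.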
The worst part about a duck-typed language is that the "mistake" can be made but not discovered until the program executes the code that expects certain types. Would you use a duck-typed language as the programming language for, say, a Mars reconnaissance orbiter? It'll be fun (and costly) to discover a type error only when the code that fires up the thrusters for orbital insertion finally executes!
Which is why developers who promote duck-typed languages also strongly promote unit testing. Unit testing, particularly in duck-typed languages, is the "fix" for making sure you haven't screwed up the type.
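For example, here is a minimal sketch of such a test using Python's built-in unittest module (assuming the IntArray class above is in scope; the test names are just illustrative):

import unittest

class TestIntArray(unittest.TestCase):
    def test_rejects_non_integer_elements(self):
        # Constructing with a string element should raise.
        with self.assertRaises(Exception):
            IntArray([1, 2, 3, '4'])

    def test_rejects_appending_a_string(self):
        # Appending a string via += should raise.
        a = IntArray([1, 2, 3])
        with self.assertRaises(Exception):
            a += '4'

if __name__ == '__main__':
    unittest.main()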
And of course the irony of it all is that, under the hood, the interpreter still knows the type.
It's just that you don't.
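Ask it yourself (Python 2 REPL output shown, to match the error message earlier):
> a = [1, 2, 3, '4']
> [type(x) for x in a]
[<type 'int'>, <type 'int'>, <type 'int'>, <type 'str'>]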