The following code generates 100,000 rows, assigning random values to the Id column with uuid_generate_v4(). However, the nested selects always pick the same row, so every inserted row gets identical values for those columns. The goal is a table with 100k rows whose values are drawn at random from the sample tables. Each sample table has only two columns (Id and the column the values are taken from). How can this be achieved?
What I have tried:
insert into "Tag" (
"Id", "Time", "Account", "Name", "Value", "RollUpTableId"
)
select
uuid_generate_v4(),
current_timestamp,
(select "Account" from "AccountSamples" OFFSET floor(random()*358) LIMIT 1),
(select "Name" from "TagNameSamples" OFFSET floor(random()*19) LIMIT 1),
(select "Value" from "TagValueSamples" OFFSET floor(random()*26) LIMIT 1),
uuid_generate_v4()
from generate_series(1, 100000);
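My understanding is that PostgreSQL treats each uncorrelated scalar subquery as an InitPlan and evaluates it only once for the whole statement, even though random() is volatile. A workaround I have seen suggested (untested here) is to reference the outer series value inside each subquery so it becomes correlated and is re-run per row; the `+ 0 * g.i` term is a no-op arithmetically and exists only to force the correlation:

```sql
insert into "Tag" ("Id", "Time", "Account", "Name", "Value", "RollUpTableId")
select
  uuid_generate_v4(),
  current_timestamp,
  -- referencing g.i turns each subquery into a per-row SubPlan
  (select "Account" from "AccountSamples"
     offset floor(random() * 358 + 0 * g.i) limit 1),
  (select "Name" from "TagNameSamples"
     offset floor(random() * 19 + 0 * g.i) limit 1),
  (select "Value" from "TagValueSamples"
     offset floor(random() * 26 + 0 * g.i) limit 1),
  uuid_generate_v4()
from generate_series(1, 100000) g(i);
```

Is this the right way to force per-row evaluation, or is there a cleaner approach?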
I've also tried the following, which was suggested to me (https://codeshare.io/5QD7dq), but it gives me NULLs in many rows. There are no NULLs in the sample tables, and they have only 358, 19 and 26 rows respectively:
insert into "Tag" (
"Id", "Time", "Account", "Name", "Value", "RollUpTableId"
)
select uuid_generate_v4(), current_timestamp, a."Account", tns."Name", tvs."Value", uuid_generate_v4()
from generate_series(1, 1000) x(rn)
left join (
select "Account", row_number() over(order by random()) rn from "AccountSamples"
) a on a.rn = x.rn
left join (
select "Name", row_number() over(order by random()) rn from "TagNameSamples"
) tns on tns.rn = x.rn
left join (
select "Value", row_number() over(order by random()) rn from "TagValueSamples"
) tvs on tvs.rn = x.rn
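I suspect the NULLs come from the join condition: row_number() only reaches 358, 19 and 26 in the respective subqueries, so every series row beyond those counts finds no match in the left joins. A variant I considered (untested) wraps the series value with modulo so every row matches, though it pairs the shuffled values cyclically rather than drawing each one independently at random:

```sql
select uuid_generate_v4(), current_timestamp,
       a."Account", tns."Name", tvs."Value", uuid_generate_v4()
from generate_series(1, 100000) x(rn)
join (
  select "Account", row_number() over (order by random()) rn from "AccountSamples"
) a on a.rn = 1 + (x.rn - 1) % 358   -- maps 1..100000 onto 1..358
join (
  select "Name", row_number() over (order by random()) rn from "TagNameSamples"
) tns on tns.rn = 1 + (x.rn - 1) % 19
join (
  select "Value", row_number() over (order by random()) rn from "TagValueSamples"
) tvs on tvs.rn = 1 + (x.rn - 1) % 26;
```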
I've also tried:
select "Account" from "AccountSamples" where "Id" = (trunc(random() * 358)::integer)
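Finally, I looked at LATERAL joins. A sketch of what I mean (untested): the `where g.i = g.i` predicate is semantically a no-op, but without some reference to the outer row PostgreSQL may again evaluate the lateral subquery just once; with it, each small sample table is re-sorted per generated row, which may be slow for 100k rows:

```sql
select uuid_generate_v4(), current_timestamp,
       a."Account", n."Name", v."Value", uuid_generate_v4()
from generate_series(1, 100000) g(i)
cross join lateral (
  select "Account" from "AccountSamples"
  where g.i = g.i            -- forces per-row re-evaluation
  order by random() limit 1
) a
cross join lateral (
  select "Name" from "TagNameSamples"
  where g.i = g.i
  order by random() limit 1
) n
cross join lateral (
  select "Value" from "TagValueSamples"
  where g.i = g.i
  order by random() limit 1
) v;
```

Which of these approaches is the idiomatic way to do this in PostgreSQL?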