Hello everyone,

I'm asking for your help with a problem I can't solve.
I don't know how to retrieve a LONG RAW field without buffering all the data from the cursor returned by my stored procedure.

I want to process the records one by one, to avoid any risk of exhausting the RAM.

Is there a way to load the full binary content of my field (dj.doc) into my object (dto) without using InitialLONGFetchSize = -1?

Thanks for your advice.

I'm working against an Oracle 10g database server, with Oracle Client v12 (32-bit) and Visual Studio Pro 2017 (15.4.5), in C#.

public static void Extract2Files<T>(string scodeUF)
        {
            // InitialLONGFetchSize is specific to OracleCommand, so the command is typed accordingly.
            OracleCommand command = (OracleCommand)GetStoredProcCommand("GetAllOnlyForUF");
            command.Parameters.Add(CreateParameter("pin_id_unite_m", scodeUF, 12));
            command.Parameters.Add(CreateOutputCursorParameter("t_cursor"));

            command.InitialLONGFetchSize = -1;

            command.Connection.Open();

            OracleDataReader reader = (OracleDataReader)command.ExecuteReader();
            // if (reader.HasRows) // <== this call buffers all the data from the cursor returned by the stored procedure. I don't want this behavior!
                //{

                    DtoParser parser = DtoParserFactory.GetParser(typeof(T));
                    parser.PopulateOrdinals(reader);

                    while (reader.Read()) // <== this call also buffers all the data from the cursor. I don't want this behavior!
                    {
                        T dto = (T)parser.PopulateDto(reader);
                        ...
                    }
                    reader.Close();
                //}
        }


In the PopulateDto function, I call a function named GetBinaryFromReaderInOnePass.
The code below only works when command.InitialLONGFetchSize = -1.
If command.InitialLONGFetchSize = 0, the retval value is always 0.

static public byte[] GetBinaryFromReaderInOnePass(IDataReader rdr, int ordinalBinaire)
{
    long startIndex = 0;

    // Size of the LONG RAW value: passing a null buffer makes GetBytes return the total length.
    int totalBinaryBufferSize = (int)rdr.GetBytes(ordinalBinaire, startIndex, null, 0, 0);

    // The byte[] buffer to be filled by GetBytes.
    byte[] outByte = new byte[totalBinaryBufferSize];

    // Read bytes into outByte[] and retain the number of bytes returned.
    long retval = rdr.GetBytes(ordinalBinaire, startIndex, outByte, 0, totalBinaryBufferSize);

    return outByte;
}


Here is the stored procedure, where dj.doc is a LONG RAW column:

PROCEDURE GetAllOnlyForUF(pin_id_unite_m VARCHAR2, cur_docsJoints out t_cursor) IS
BEGIN
    OPEN cur_docsJoints FOR SELECT dj.IDT, dj.name, dj.doc
                            FROM TB_PAT_DOC_JOINT dj, TB_MVT m
                            WHERE dj.id_mvt = m.id_mvt 
                            AND m.id_unite_m = pin_id_unite_m;
END GetAllOnlyForUF;


What I have tried:

When I use command.InitialLONGFetchSize = -1, the RAM fills with all the binaries from my cursor. This is not viable with thousands of records, each binary weighing about 2 MB.

When I use command.InitialLONGFetchSize = 0, I cannot determine the size of the LONG RAW field (DOC) with the GetBytes method.

When I use command.InitialLONGFetchSize = 4096, all my binaries are truncated to 4 KB;
no additional database round-trip is made to recover the rest of the binary.

I cannot find a way out ...
Posted
Updated 8-Mar-18 7:16am

Use InitialLONGFetchSize = 0.
The reason you don't get a return value is probably that the reader doesn't know how to identify the correct row.
You need to include either the primary key or the ROWID in the result set. Obtaining Data From an OracleDataReader[^]

If you want to adjust the amount of data the reader buffers, you can, for example, set reader.FetchSize = reader.RowSize * 100; to buffer 100 rows.
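A minimal sketch of that setup, assuming an ODP.NET OracleCommand, a plain-text query that includes the ROWID (so deferred LONG fetches can locate the row), and a hypothetical connectionString:

```csharp
using Oracle.DataAccess.Client; // or Oracle.ManagedDataAccess.Client

using (var conn = new OracleConnection(connectionString))
using (var cmd = conn.CreateCommand())
{
    conn.Open();
    // Including ROWID in the select list lets the provider fetch the LONG RAW on demand.
    cmd.CommandText = "SELECT dj.ROWID, dj.IDT, dj.name, dj.doc FROM TB_PAT_DOC_JOINT dj";
    cmd.InitialLONGFetchSize = 0; // defer the LONG RAW data instead of buffering it

    using (var reader = cmd.ExecuteReader())
    {
        // Buffer roughly 100 rows per round-trip instead of the 128 KB default.
        reader.FetchSize = reader.RowSize * 100;
        while (reader.Read())
        {
            // GetBytes can now pull each document individually, row by row.
        }
    }
}
```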
 
Comments
S3rval03 7-Mar-18 9:44am    
First of all, thanks, Jörgen, for the speed of your answer.
I did not know about the reader.FetchSize property, which lets you replace the default value of 131072 with whatever value you wish.
It allows keeping InitialLONGFetchSize = -1 while limiting the fetch time at each Read call, by setting a small multiplier: reader.FetchSize = reader.RowSize * 1;
In this case, my GetBinaryFromReaderInOnePass function extracts the binary file successfully.
This partly answers my need, thank you very much.
However, it seems to me that with reader.FetchSize = reader.RowSize * 1 the processing time is less well optimized.
Isn't there a solution on the getter side (with GetBytes)?
I already had my primary key (the dj.IDT field) in the SELECT clause, but to be safe I also added dj.ROWID.
Setting InitialLONGFetchSize to:
- 0 ==> the recovered binaries have a size of 0
- 4096 ==> the recovered binaries have a size of 4096
In both cases the binaries are incomplete.
This must be related to the way I use the GetBytes function, which assumes it will read everything in one pass.
I had already read the Oracle online help, and I understood that the round-trip to the database to read the rest of the binary field is handled automatically.
Is that the case?
How could that work with GetBytes, which requires knowing the size of the buffer to fill?
If I call GetBytes several times, how do I know the end of the binary has been reached without knowing its real final size?
And if I knew, how could I gradually fill another byte[] in several passes, when its size must be declared beforehand?
That's a lot of questions, I'm aware. Thank you very much for any further suggestions.
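On the incremental-read questions above: when the total length isn't known in advance, GetBytes can be called in a loop with a fixed-size chunk buffer, appending into a MemoryStream until a short read signals the end. A sketch (the helper name and chunk size are illustrative, and it assumes the provider supports partial GetBytes reads on the column):

```csharp
using System.Data;
using System.IO;

// Reads one binary column in fixed-size chunks, without knowing the total size up front.
static byte[] ReadBinaryInChunks(IDataReader rdr, int ordinal, int chunkSize = 8192)
{
    var buffer = new byte[chunkSize];
    using (var ms = new MemoryStream())
    {
        long offset = 0;
        long read;
        // GetBytes returns the number of bytes actually copied;
        // a read shorter than the buffer marks the end of the field.
        while ((read = rdr.GetBytes(ordinal, offset, buffer, 0, chunkSize)) > 0)
        {
            ms.Write(buffer, 0, (int)read);
            offset += read;
            if (read < chunkSize) break;
        }
        // The MemoryStream grows as needed, so no final size must be declared beforehand.
        return ms.ToArray();
    }
}
```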
Jörgen Andersson 7-Mar-18 14:23pm    
I've been digging a bit deeper into what's happening here.
It seems that using a stored procedure and a cursor prevents the OracleDataReader from retrieving the key information. And in that case the whole document needs to be retrieved directly (InitialLONGFetchSize = -1).

To find out whether this is the case, you can open the schema table and check the field info: DataTable schemaTable = reader.GetSchemaTable();

Another test you can do is to retrieve the information directly by removing the output parameter, setting CommandType = System.Data.CommandType.Text and the CommandText to:
SELECT dj.IDT, dj.name, dj.doc
FROM TB_PAT_DOC_JOINT dj, TB_MVT m
WHERE dj.id_mvt = m.id_mvt
AND m.id_unite_m = :pin_id_unite_m
(note: a plain-text command should not end with a semicolon, or Oracle raises ORA-00911)
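A small sketch of that schema-table check, using the column property names exposed by GetSchemaTable (the helper name is illustrative):

```csharp
using System;
using System.Data;

// Dumps the key-related schema information for each column in the result set.
static void PrintKeyInfo(IDataReader reader)
{
    DataTable schema = reader.GetSchemaTable();
    foreach (DataRow row in schema.Rows)
    {
        // IsKey / IsUnique show whether the provider recognized a usable key
        // for this result set, which is what deferred LONG fetches depend on.
        Console.WriteLine("{0}: IsKey={1}, IsUnique={2}",
            row["ColumnName"], row["IsKey"], row["IsUnique"]);
    }
}
```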

S3rval03 8-Mar-18 13:03pm    
Thank you very much, Jörgen, for the precious help you have given me!!

I've put the details of my conclusions in the "My solution" field.
Jörgen Andersson 8-Mar-18 14:28pm    
You're welcome.
To summarize for all readers, I had two problems:

1) First, the number of records buffered by default when InitialLONGFetchSize = -1.

This first point can be tuned, as Jörgen proposed, with reader.FetchSize = reader.RowSize * N,
with N = 1, for example, to process the records one after the other.

2) The second problem was the zero size reported for the LONG RAW fields by GetBytes when InitialLONGFetchSize <> -1.
This is systematic when I call my stored procedure from my C# code in Visual Studio.
When I replace the call to the procedure with a query defined in my C# code as a string,
I still have the problem if the query contains a join between 2 tables.

string requeteDocJointUf = "SELECT dj.IDT, dj.NOM_DOC, dj.doc " +
                            " FROM TB_PAT_DOC_JOINT dj, TB_MVT m " +
                            " WHERE dj.id_mvt = m.id_mvt " +
                            " AND m.id_unite_m = '" + scodeUF + "'";
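As an aside, the concatenated literal above can be replaced with a bind variable, which avoids SQL injection and matches how the stored procedure binds pin_id_unite_m (the parameter name :codeUF is an assumption for illustration):

```csharp
// The same query with a bind variable instead of string concatenation.
string requeteDocJointUf = "SELECT dj.IDT, dj.NOM_DOC, dj.doc " +
                           "FROM TB_PAT_DOC_JOINT dj, TB_MVT m " +
                           "WHERE dj.id_mvt = m.id_mvt " +
                           "AND m.id_unite_m = :codeUF";
command.CommandText = requeteDocJointUf;
// :codeUF is a hypothetical bind-variable name; any valid identifier works.
command.Parameters.Add(new OracleParameter("codeUF", scodeUF));
```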


Indeed, looking at reader.GetSchemaTable(), I see that my primary key (dj.IDT) is not recognized as a key:

Property: ColumnName                     Value: IDT 
Property: IsUnique                       Value: False
Property: IsKey                          Value: False
Property: IsRowID                        Value: False	


On the other hand, if I remove the join, my key is recognized and GetBytes returns the size of my binary field.

string requeteDocJointUf = "SELECT dj.IDT, dj.NOM_DOC, dj.doc " +
                            " FROM TB_PAT_DOC_JOINT dj " +
                            " WHERE dj.IDT= 274934";

Property: ColumnName                     Value: IDT 
Property: IsUnique                       Value: True
Property: IsKey                          Value: True <==
Property: IsRowID                        Value: False				


InitialLONGFetchSize = 0;

// Size of the LONG RAW value.
int totalBinaryBufferSize = (int)rdr.GetBytes(ordinalBinaire, startIndex, null, 0, 0);
// <== totalBinaryBufferSize correctly comes back as 136671 even though InitialLONGFetchSize = 0! Yes!
// Property: IsKey  Value = True  for IDT

// The byte[] buffer to be filled by GetBytes.
byte[] outByte = new byte[totalBinaryBufferSize];

// Read bytes into outByte[] and retain the number of bytes returned.
retval = rdr.GetBytes(ordinalBinaire, startIndex, outByte, 0, totalBinaryBufferSize);
// <== the binary file is correctly recovered!!


I think the systematic failure with the stored procedure may come from the cursor declaration.
I will continue my research, and I will keep you informed if I find a solution with the stored-procedure call.
To be continued ...

When I remove the join from the cursor in the procedure, my key is still not recognized:

TYPE t_cursor is ref cursor RETURN TB_PAT_DOC_JOINT%ROWTYPE;

PROCEDURE GetAllOnlyForUF(pin_id_unite_m VARCHAR2, cur_docsJoints out t_cursor) IS
BEGIN
    OPEN cur_docsJoints FOR SELECT * -- dj.IDT, dj.NOM_DOC, dj.doc
                            FROM TB_PAT_DOC_JOINT dj
                            WHERE dj.IDT = 274934;
END GetAllOnlyForUF;
 
Comments
Jörgen Andersson 8-Mar-18 14:28pm    
I'm curious whether the reason the key doesn't work with the stored procedure is that you're returning a cursor.
The cursor doesn't have a key. I would try returning a different kind of result set instead, possibly an associative array (a dictionary in .NET).
Just speculating here.

This content, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


