|
Every now and then CodeProject AI times out, and it's as if the service stops and restarts.
It shouldn't be a resource issue:
GTX 1060
32 GB RAM
Intel Xeon 8-core processor
|
|
|
|
|
Hello,
I've tried to build the OpenCV libraries for Vitis HLS 2022.1 using MinGW and CMake. Here are the steps I followed:
1) Installed MinGW and added its bin directory to the PATH environment variable.
2) Installed OpenCV 4.6.0.
3) Installed CMake.
4) Built the OpenCV source code with CMake: added the extracted OpenCV source directory path in CMake (under "Where is the source code") and the build directory path (under "Where to build the binaries"), then carried out configuration and generation.
5) Finally, tried to build and install the OpenCV libraries from the Command Prompt by running mingw32-make install in the build directory.
This resulted in the two errors below:
In file included from C:\Users\Nikhil\Downloads\opencv\sources\modules\core\src\precomp.hpp:53 ,
from C:\Users\Nikhil\Downloads\opencv\sources\modules\core\src\algorithm.cpp:43:
C:/Users/Nikhil/Downloads/opencv/sources/modules/core/include/opencv2/core/utility.hpp:718:14: error: 'recursive_mutex' in namespace 'std' does not name a type
typedef std::recursive_mutex Mutex;
^~~~~~~~~~~~~~~
C:/Users/Nikhil/Downloads/opencv/sources/modules/core/include/opencv2/core/utility.hpp:719:25: error: 'Mutex' is not a member of 'cv'
typedef std::lock_guard<cv::mutex> AutoLock;
^~
C:/Users/Nikhil/Downloads/opencv/sources/modules/core/include/opencv2/core/utility.hpp:719:25: error: 'Mutex' is not a member of 'cv'
C:/Users/Nikhil/Downloads/opencv/sources/modules/core/include/opencv2/core/utility.hpp:719:34: error: template argument 1 is invalid
typedef std::lock_guard<cv::mutex> AutoLock;
^
In file included from C:\Users\Nikhil\Downloads\opencv\sources\modules\core\src\algorithm.cpp:43 :
C:\Users\Nikhil\Downloads\opencv\sources\modules\core\src\precomp.hpp:368:5: error: 'Mutex' in namespace 'cv' does not name a type
cv::Mutex& getInitializationMutex();
^~~~~
modules\core\CMakeFiles\opencv_core.dir\build.make:102: recipe for target 'modules/core/CMakeFiles/opencv_core.dir/src/algorithm.cpp.obj' failed
mingw32-make[2]: *** [modules/core/CMakeFiles/opencv_core.dir/src/algorithm.cpp.obj] Error 1
CMakeFiles\Makefile2:1805: recipe for target 'modules/core/CMakeFiles/opencv_core.dir/all' failed
mingw32-make[1]: *** [modules/core/CMakeFiles/opencv_core.dir/all] Error 2
Makefile:164: recipe for target 'all' failed
mingw32-make: *** [all] Error 2
What am I doing wrong? What am I missing? How do I resolve these two errors?
Please help.
Regards.
[Moved to the AI forum]
|
|
|
|
|
This forum is not for programming questions. Please mouse over "quick answers" below the site banner at the top of the page, click on "Ask a Question", submit your question there, and then delete this one.
|
|
|
|
|
I've tried to blur many images, portraits and others, but the blur is always applied to the whole image, not just the background. Where am I going wrong?
|
|
|
|
|
There's a secret error somewhere in your secret code. You should fix that.
Seriously, how do you expect anyone to help you fix an error in code we can't see?!
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
I don't understand what you mean.
I'm using the "CodeProject AI Server" UI to try the features of this framework, and that's where it doesn't work.
I wanted to attach an explanatory image, but I don't see how to do that in this forum:
blurring — ImgBB[^]
|
|
|
|
|
Read your question again, remembering that we can't see your screen, access your computer, or read your mind. Where in your question did you mention anything about the CodeProject AI Server?
If you don't provide proper details, then don't expect anyone to be able to help you.
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
Bad day? I thought this group was dedicated to this topic only, so I took it for granted. That is everything I can provide as details.
|
|
|
|
|
The forum dedicated to discussing CodeProject AI is linked to from the home page:
CodeProject.AI Discussions[^]
This forum is for general AI discussions.
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
You'd need "object detection" to distinguish foreground from background (polygon); then you can address blurring the background.
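As a rough illustration of that idea, here is a minimal Python sketch, assuming OpenCV's GrabCut as a stand-in for a real object detector; the file names, seed rectangle, and kernel sizes are illustrative only:

import cv2
import numpy as np

# Load the portrait; the path is illustrative.
img = cv2.imread("portrait.jpg")
h, w = img.shape[:2]

# GrabCut needs an initial rectangle roughly around the subject.
mask = np.zeros((h, w), np.uint8)
bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)
rect = (w // 4, h // 8, w // 2, int(h * 0.8))  # rough box around the subject

cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Pixels marked definite or probable foreground become 1, the rest 0.
fg_mask = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype("uint8")

# Blur the whole frame, then keep the sharp pixels wherever the mask says "foreground".
blurred = cv2.GaussianBlur(img, (51, 51), 0)
result = np.where(fg_mask[:, :, None] == 1, img, blurred)

cv2.imwrite("portrait_blurred_background.jpg", result)

A real object detector would simply replace the GrabCut step with a better foreground mask; the blend at the end stays the same.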
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
|
|
|
|
Another way to do it might be via masking.
Some of the new AI image-generation tools do something similar: you paint a mask, and it marks the part of the image you want to change or alter.
Here it could just be a manual mask image of what you have decided is background or foreground. Either way, you can use the mask to control which pixels get blurred and by how much (see the sketch below).
It's not a bad intermediate step, imo. You can plug in whatever object detection you like later: the new code would generate masks using the object detection and feed them into the same process you were already using with the manual masks.
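A minimal Python sketch of that mask-driven blend, assuming a hand-painted grayscale mask where white means "keep sharp"; the file names and blur sizes are made up:

import cv2
import numpy as np

# Illustrative paths: the photo plus a hand-painted mask (white = keep sharp).
img = cv2.imread("photo.jpg")
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)

# Normalise the mask to 0..1 and soften its edge so the transition isn't harsh.
alpha = cv2.GaussianBlur(mask, (21, 21), 0).astype(np.float32) / 255.0
alpha = alpha[:, :, None]  # broadcast over the colour channels

blurred = cv2.GaussianBlur(img, (51, 51), 0)

# Per-pixel blend: where alpha is 1 the original stays sharp, where it is 0 the blur wins.
result = (alpha * img + (1.0 - alpha) * blurred).astype(np.uint8)
cv2.imwrite("result.jpg", result)

Swapping in automatic object detection later only changes where the mask comes from; the blend itself does not change.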
|
|
|
|
|
0) Add tags and more information, such as whether this blurred area must resize at runtime.
I know this question is from 12/22!
1) Check whether CodeProject AI can, or will, do this ... I assume not.
2) Start here, and do research: [^]
3) Load an already blurred background image that you swap in or out.
IMHO the right approach is to store a prepared picture as a resource in the project and use it to fill a PictureBox that you show or hide at runtime.
«The mind is not a vessel to be filled but a fire to be kindled» Plutarch
|
|
|
|
|
After installing CodeProject AI Server on my Windows 10 machine for use with Blue Iris Security, I was unable to get the CodeProject AI Server service to start. I was eventually able to get it to start by modifying the CodeProject AI Server ImagePath in the Windows Registry: I had to add quotes around the ImagePath ("C:\Program Files\CodeProject\AI\Server\CodeProject.AI.Server.exe"). I think the reason the quotes were needed is that there is a space in the path string between "Program" and "Files." The CodeProject AI installer I used was CodeProject.AI.Server-1.5.0.exe, and it does not put quotes around the ImagePath. I imagine most systems don't encounter this problem even without the quotes; one of my installations hit the problem and one didn't, even though the ImagePath was identical on both machines. I think the installer should include the quotes by default, since they prevent this problem on machines where the path causes a conflict.
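For anyone wanting to script the same check, here is a rough Python sketch of that manual fix; the service key name below is an assumption and may differ on your install, so back up the registry and run it elevated:

import winreg

# Assumed service key name; adjust it to whatever the installer actually registered.
KEY = r"SYSTEM\CurrentControlSet\Services\CodeProject.AI.Server"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY, 0,
                    winreg.KEY_READ | winreg.KEY_SET_VALUE) as key:
    path, value_type = winreg.QueryValueEx(key, "ImagePath")
    # Only wrap the path in quotes if it contains a space and isn't quoted already.
    if " " in path and not path.startswith('"'):
        winreg.SetValueEx(key, "ImagePath", 0, value_type, f'"{path}"')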
On the machine where I encountered this problem, I am now experiencing repeated restarts of the CodeProject AI Server service. I am wondering whether this is somehow related to other calls the program makes to the same path; if so, maybe those calls should also quote the path. This is just a guess, because I have not yet had much time to look into the restarts.
I would suggest that maybe the CodeProject AI Server program should be broadly evaluated for instances where this type of path conflict could be a problem.
Disclaimer: I probably don't need to state the obvious, but I am not a programmer.
|
|
|
|
|
Thanks for the report.
A quick look around suggests this could be an issue with the Windows MSI installer. We wrap the paths in quotes, but I've seen one developer go back and adjust the registry entry within the installer as a safeguard (just as you did manually).
We've added this to our TODO list.
cheers
Chris Maunder
|
|
|
|
|
MSI installer. Something to consider further. Thank you.
|
|
|
|
|
Quote: the CodeProject AI Server program should be broadly evaluated for instances where this type of path conflict could be a problem.
True.
|
|
|
|
|
Quote: I have created a trading environment using tf-agents
env = TradingEnv(df=df.head(100000), lkb=1000)
tf_env = tf_py_environment.TFPyEnvironment(env)
and passed it a df of 100,000 rows, from which only the closing prices are used, giving a NumPy array of 100,000 stock-price time-series values.
df: Date Open High Low Close volume
0 2015-02-02 09:15:00+05:30 586.60 589.70 584.85 584.95 171419
1 2015-02-02 09:20:00+05:30 584.95 585.30 581.25 582.30 59338
2 2015-02-02 09:25:00+05:30 582.30 585.05 581.70 581.70 52299
3 2015-02-02 09:30:00+05:30 581.70 583.25 581.70 582.60 44143
4 2015-02-02 09:35:00+05:30 582.75 584.00 582.75 582.90 42731
... ... ... ... ... ... ...
99995 2020-07-06 11:40:00+05:30 106.85 106.90 106.55 106.70 735032
99996 2020-07-06 11:45:00+05:30 106.80 107.30 106.70 107.25 1751810
99997 2020-07-06 11:50:00+05:30 107.30 107.50 107.10 107.35 1608952
99998 2020-07-06 11:55:00+05:30 107.35 107.45 107.10 107.20 959097
99999 2020-07-06 12:00:00+05:30 107.20 107.35 107.10 107.20 865438
At each step the agent has access to the previous 1000 prices plus the current price of the stock, i.e. 1001 values, and it can take one of 3 possible actions: 0, 1, or 2.
I then wrapped the environment in TFPyEnvironment to convert it to a TF environment.
The prices the agent can observe are a 1-D NumPy array:
prices = [584.95 582.3 581.7 ... 107.35 107.2 107.2 ]
TimeStep Specs
TimeStep Specs: TimeStep( {'discount': BoundedTensorSpec(shape=(), dtype=tf.float32, name='discount', minimum=array(0., dtype=float32), maximum=array(1., dtype=float32)), 'observation': BoundedTensorSpec(shape=(1001,), dtype=tf.float32, name='_observation', minimum=array(0., dtype=float32), maximum=array(3.4028235e+38, dtype=float32)), 'reward': TensorSpec(shape=(), dtype=tf.float32, name='reward'), 'step_type': TensorSpec(shape=(), dtype=tf.int32, name='step_type')}) Action Specs: BoundedTensorSpec(shape=(), dtype=tf.int32, name='_action', minimum=array(0, dtype=int32), maximum=array(2, dtype=int32))
I then built a DQN agent, but I want to build it with a Conv1D layer.
My network consists of:
Conv1D,
MaxPool1D,
Conv1D,
MaxPool1D,
Dense_64,
Dense_32,
q_value_layer
I created a list of layers using the tf.keras.layers API, stored it in the dense_layers list, and created a Sequential network.
DQN_Agent
learning_rate = 1e-3
action_tensor_spec = tensor_spec.from_spec(tf_env.action_spec())
num_actions = action_tensor_spec.maximum - action_tensor_spec.minimum + 1

dense_layers = []
dense_layers.append(tf.keras.layers.Conv1D(
    64,
    kernel_size=(10),
    activation=tf.keras.activations.relu,
    input_shape=(1, 1001),
))
dense_layers.append(tf.keras.layers.MaxPool1D(
    pool_size=2,
    strides=None,
    padding='valid',
))
dense_layers.append(tf.keras.layers.Conv1D(
    64,
    kernel_size=(10),
    activation=tf.keras.activations.relu,
))
dense_layers.append(tf.keras.layers.MaxPool1D(
    pool_size=2,
    strides=None,
    padding='valid',
))
dense_layers.append(tf.keras.layers.Dense(
    64,
    activation=tf.keras.activations.relu,
))
dense_layers.append(tf.keras.layers.Dense(
    32,
    activation=tf.keras.activations.relu,
))
q_values_layer = tf.keras.layers.Dense(
    num_actions,
    activation=None,
    kernel_initializer=tf.keras.initializers.RandomUniform(minval=-0.03, maxval=0.03),
    bias_initializer=tf.keras.initializers.Constant(-0.2))
q_net = sequential.Sequential(dense_layers + [q_values_layer])
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
train_step_counter = tf.Variable(0)

agent = dqn_agent.DqnAgent(
    tf_env.time_step_spec(),
    tf_env.action_spec(),
    q_network=q_net,
    optimizer=optimizer,
    td_errors_loss_fn=common.element_wise_squared_loss,
    train_step_counter=train_step_counter)

agent.initialize()
But when I passed q_net as the q_network to DqnAgent, I came across this error:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
in ()
68 optimizer=optimizer,
69 td_errors_loss_fn=common.element_wise_squared_loss,
---> 70 train_step_counter=train_step_counter)
71
72 agent.initialize()
7 frames
/usr/local/lib/python3.7/dist-packages/tf_agents/networks/sequential.py in call(self, inputs, network_state, **kwargs)
222 else:
223 # Does not maintain state.
--> 224 inputs = layer(inputs, **layer_kwargs)
225
226 return inputs, tuple(next_network_state)
ValueError: Exception encountered when calling layer "sequential_54" (type Sequential).
Input 0 of layer "conv1d_104" is incompatible with the layer: expected min_ndim=3, found ndim=2. Full shape received: (1, 1001)
Call arguments received by layer "sequential_54" (type Sequential):
• inputs=tf.Tensor(shape=(1, 1001), dtype=float32)
• network_state=()
• kwargs={'step_type': 'tf.Tensor(shape=(1,), dtype=int32)', 'training': 'None'}
In call to configurable 'DqnAgent' (<class 'tf_agents.agents.dqn.dqn_agent.DqnAgent'>)
I know it has something to do with the input shape of the first Conv1D layer, but I can't figure out what I'm doing wrong.
At each time step the agent receives an observation of prices as a 1-D array of length 1001, so I thought the input shape of the Conv1D should be (1, 1001), but that is apparently wrong and I don't know how to solve this error.
Need help.
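For context on why those shapes clash, here is a minimal standalone Keras sketch; the Reshape layer is just one illustrative way of giving the flat observation the (steps, channels) layout Conv1D expects, not necessarily the fix for the agent above:

import numpy as np
import tensorflow as tf

# The network receives a batch of flat observations: shape (batch, 1001).
batch_of_observations = np.random.rand(1, 1001).astype(np.float32)

# Conv1D wants (batch, steps, channels), so the flat vector needs a channel axis.
model = tf.keras.Sequential([
    tf.keras.layers.Reshape((1001, 1), input_shape=(1001,)),  # (batch, 1001) -> (batch, 1001, 1)
    tf.keras.layers.Conv1D(64, kernel_size=10, activation="relu"),
    tf.keras.layers.MaxPool1D(pool_size=2),
])

print(model(batch_of_observations).shape)  # (1, 496, 64)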
|
|
|
|
|
Some more details about what this does, step by step, might help others to help you.
Nice code though.
Thanks.
|
|
|
|
|
It all started like this:
1) I gave the JavaScript neuron two numbers, two weights and a bias as input.
2) The neuron calculated a result that diverged from the one I expected (the target).
3) The neuron gradually reduced the divergence until it reached zero.
4) Now, if I put the neuron back to work, it gives me the correct result.
5) I repeated this operation for a series of inputs with their respective targets.
6) I averaged all the calculated weights and biases and ended up with a straight line.
For now I'm stuck here, but the neuron hasn't learned anything.
Any proposals?
Riccardo
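One common proposal is to train the neuron on all of the input/target pairs together, instead of solving each pair separately and averaging the results. A minimal sketch, written in Python rather than JavaScript for brevity, with a made-up learning rate and toy data where the target is simply the sum of the two inputs:

import random

# Toy (x1, x2, target) pairs standing in for the series of inputs described above.
samples = [(0.0, 0.0, 0.0), (0.0, 1.0, 1.0), (1.0, 0.0, 1.0), (1.0, 1.0, 2.0)]

w1, w2, bias = random.random(), random.random(), random.random()
learning_rate = 0.1

# Train on the whole set repeatedly, nudging the same weights for every pair.
for _ in range(1000):
    for x1, x2, target in samples:
        output = w1 * x1 + w2 * x2 + bias        # linear neuron
        error = output - target
        w1 -= learning_rate * error * x1         # gradient of the squared error
        w2 -= learning_rate * error * x2
        bias -= learning_rate * error

print(w1, w2, bias)  # approaches w1=1, w2=1, bias=0 for this toy data

Because every pair keeps adjusting the same weights, the final values reflect all the examples at once, rather than being an average of separate per-example solutions.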
|
|
|
|
|
Could A* be regarded as a Dijkstra algorithm that has the visited node chain spreading in all directions from the start point?
|
|
|
|
|
Of course, what's wrong with you?
The difficult we do right away...
...the impossible takes slightly longer.
|
|
|
|
|
No, it's the other way around. Dijkstra could be regarded as a special case of A*, with a heuristic of zero, resulting in a "circular" node visiting pattern (when possible), thanks to the heuristic of zero not "preferring" any particular direction. Some pedants may say "it's actually UCS that is a special case of A*, not Dijkstra", but UCS is a fancy name for how Dijkstra is normally implemented in practice. The original version of the algorithm that Edsger W. Dijkstra wrote down is rarely, if ever, used in practice, because it always touches all nodes, not just nodes that are dynamically encountered during the search.
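To make that concrete, here is a small Python sketch of A*; the graph and costs are made up. Passing a zero heuristic makes the priority equal to the path cost alone, so the frontier spreads uniformly by distance from the start, which is exactly the Dijkstra/UCS behaviour described above:

import heapq

def a_star(graph, start, goal, heuristic):
    # graph maps node -> list of (neighbour, edge_cost) pairs.
    frontier = [(heuristic(start), 0, start)]       # (priority, cost_so_far, node)
    best_cost = {start: 0}
    while frontier:
        _, cost, node = heapq.heappop(frontier)
        if node == goal:
            return cost
        if cost > best_cost.get(node, float("inf")):
            continue                                # stale queue entry
        for neighbour, edge_cost in graph.get(node, []):
            new_cost = cost + edge_cost
            if new_cost < best_cost.get(neighbour, float("inf")):
                best_cost[neighbour] = new_cost
                priority = new_cost + heuristic(neighbour)
                heapq.heappush(frontier, (priority, new_cost, neighbour))
    return None

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1)], "C": []}

# Zero heuristic: A* degenerates into Dijkstra/UCS and finds A -> B -> C with cost 2.
print(a_star(graph, "A", "C", heuristic=lambda n: 0))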
|
|
|
|
|
Thanks for setting me on the right track.
|
|
|
|
|
>>The original version of the algorithm that Edsger W. Dijkstra wrote down is rarely, if ever, used in practice, because it always touches all nodes
Does processing all the nodes make the algorithm slow when you deal with large maps?
|
|
|
|
|