|
I don't see you retrieving any. Usually, the question would be: How do I retrieve the "next" 20? Then, the "previous" 20. And so forth.
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
|
|
|
|
Hi Gerry,
Yes, it already retrieves info, as shown in the screenshot I linked, but it displays it all on a single page. That makes for a lengthy page where users have to scroll to the bottom, which can feel never-ending. I am looking for a solution that displays only 20 items per page, with Next and Previous buttons as previously mentioned.
Thanks much
|
|
|
|
|
You didn't show the code that queries the database and returns the data, so I'm going to assume it's a simple query that returns everything in a table.
You're basically going to have to rewrite that code, and the code you posted, so it can track which page the user is on and accept parameters controlling what to return, such as the number of items per page and which page is wanted.
There are plenty of examples on the web for "ASP.NET database paging". All you have to do is search for that.
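The heart of any paging scheme is just "skip the earlier pages, take one page's worth". A minimal in-memory sketch (the page size and data here are invented for illustration; with SQL Server you would push the same logic into the query via `OFFSET ... ROWS FETCH NEXT ... ROWS ONLY` so only one page crosses the wire):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class PagingDemo
{
    // Returns one page of items; pageNumber is 1-based.
    static List<T> GetPage<T>(IEnumerable<T> source, int pageNumber, int pageSize)
    {
        return source.Skip((pageNumber - 1) * pageSize)
                     .Take(pageSize)
                     .ToList();
    }

    static void Main()
    {
        // Pretend these are the rows your query returned.
        var items = Enumerable.Range(1, 95).ToList();
        int pageSize = 20;
        int totalPages = (items.Count + pageSize - 1) / pageSize; // ceiling division

        var page2 = GetPage(items, 2, pageSize);
        Console.WriteLine($"Page 2 of {totalPages}: {page2.First()}..{page2.Last()}");
    }
}
```

The "Next" and "Previous" buttons then just re-run the same query with the page number incremented or decremented, clamped to `1..totalPages`.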
|
|
|
|
|
The plumbers are here working in my apartment, so to pass the time I've fired up my web browser. Coming across your question, I'm prompted to provide a few links which might be helpful. One is at Stack Overflow, and within that particular post there's another link to C# Corner. Check them out:
https://stackoverflow.com/questions/4293805/how-to-show-20-rows-on-datagrid-each-time[^]
Example of DataGrid in ASP.NET[^]
And to top all this off, there's another link that comes full circle back to CP:
Using ROW_NUMBER() to paginate your data with SQL Server 2005 and ASP.NET[^]
What I do when I use someone else's code is try anything that remotely resembles what I want to see happen ... but of course the example/sample has got to compile first! That second C# Corner page even shows images of the expected output from a run.
That CP SQL Server suggestion is perhaps a bit old, but I've always had good luck fronting data with the various flavors of SQL, and there's always a ton of it on SO.
|
|
|
|
|
I have to transfer data from an MFC application to a C# application every 1 second.
Please find below the MFC application (server) and C# application (client).
I have an issue: both applications' memory keeps increasing in Task Manager every 1 or 2 seconds. Please help me figure out the issue in my code.
Server code in MFC application:
void SendDataToNamedPipe(CString sData)
{
char czVal[PIPEMAXSIZE]; //PIPEMAXSIZE = 10000
memset(czVal,0,PIPEMAXSIZE);
strcpy(czVal, sData /*buffer*/);
CString sPipeName = "\\\\.\\pipe\\Pipe";
LPCTSTR lpszPipename = (const char*) sPipeName;
hNamedPipe = CreateNamedPipe(
lpszPipename, // Name of the pipe
PIPE_ACCESS_OUTBOUND, // Pipe access type
PIPE_TYPE_BYTE | PIPE_READMODE_BYTE | PIPE_WAIT,
1, // Maximum number of instances
0, // Out buffer size
0, // In buffer size
0, // Timeout
NULL
);
if (hNamedPipe != INVALID_HANDLE_VALUE)
{
BOOL fConnected,fSuccess;
fConnected = ConnectNamedPipe(hNamedPipe, NULL) ? TRUE : (GetLastError() == ERROR_PIPE_CONNECTED);
//// Assuming 'dataToSend' contains the data to send
WriteFile(hNamedPipe, czVal, PIPEMAXSIZE /*(DWORD)sizeof(czVal)*/, NULL, NULL);
DisconnectNamedPipe(hNamedPipe);
CloseHandle(hNamedPipe);
}
}
Client code in C# application:
///////////////////
public void RecivePipeClient()
{
NamedPipeClientStream pipeClient = new NamedPipeClientStream(".", "Pipe", PipeDirection.In, PipeOptions.None);
pipeClient.Connect();
StreamReader reader = new StreamReader(pipeClient);
string dataValue = reader.ReadLine();
}
//Calling Thread function in MFC Application (Server)
void ThreadNamedPipeTagLiveValue(LPVOID)
{
CString strValue= "";
while( 1 )
{
POSITION pos1 = OLivePointList.GetHeadPosition();
while(pos1 != NULL)
{
CTagBase* OTemp = (CTagBase*) OLivePointList.GetAt(pos1);
CTime TNow;
TNow = CTime::GetCurrentTime();
strTime=TNow.Format("%m/%d/%Y %H:%M:%S");
CString strTemp;
strTag = OTemp->GetTagName();
if((OTemp->GetTagType() == CONTROLLER))
{
OTemp->GetTagValues(OLivePoints);
sTempValue.Format("%5.2f", OLivePoints.PV);
strValue = strValue + sTempValue + ",";
}
}
SendDataToNamedPipe(strValue);
Sleep(1000);
}
}
|
|
|
|
|
Is that the only thing that's wrong with it, the memory keeps increasing?
Does it work otherwise?
The difficult we do right away...
...the impossible takes slightly longer.
|
|
|
|
|
Just from a design point of view, are you sure you want to be creating and closing a new pipe every second? If you're going to be up and running for a bit, why deal with the overhead of constantly creating and destroying a pipe?
Also, in your C# code, you should use a using statement on your pipeClient variable. Its lack may be why you seem to be leaking memory due to incomplete cleanup of your pipe.
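For illustration only (keeping the names from the question), a sketch of the client with using blocks, so the stream and reader are disposed on every cycle even if an exception occurs, instead of waiting on the garbage collector:

```csharp
using System.IO;
using System.IO.Pipes;

public class PipeClient
{
    public string ReceivePipeClient()
    {
        // using guarantees Dispose() runs, releasing the pipe handle
        // and its buffers deterministically each call.
        using (var pipeClient = new NamedPipeClientStream(
            ".", "Pipe", PipeDirection.In, PipeOptions.None))
        {
            pipeClient.Connect();
            using (var reader = new StreamReader(pipeClient))
            {
                return reader.ReadLine();
            }
        }
    }
}
```

The same reasoning applies on the MFC side: keeping one pipe open for the life of the connection avoids the per-second CreateNamedPipe/CloseHandle churn entirely.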
Be wary of strong drink. It can make you shoot at tax collectors - and miss.
Lazarus Long, "Time Enough For Love" by Robert A. Heinlein
|
|
|
|
|
Apologies but this is going to be a "How to" question rather than a technical question. I am very new to C# and using Json. I have a CSV file as follows:
Edit - I did manage to find some code here - https:
Time Control_Code Metric Organisation Value DateTime
2018-10-21T00:08:03 JKX 3721 AD450 20 2018-10-21T00:08:00
2018-10-21T00:08:03 BHY 1234 HG650 88 2018-10-21T00:08:00
I need to produce multiple JSON output files from that csv in the following format example:
{
"Time":"2018-10-21T00:08:03",
"Control_Code": "JKX",
"metrics": [
{
"Metric": 3721,
"Organisation":"AD450",
"Value": 20,
"Datetime":"2018-10-21T00:08:00"
},
{
"Metric": 1234,
"Organisation":"HG650",
"value": 88,
"datetime":"2018-10-21T00:08:00"
}
]
}
Now the extra problematic part on top of this is the requirement that only one Control_Code may be used per JSON file.
Each JSON file generated must contain a single Control_Code and all of its related metric values in the metrics array. So the CSV needs to be scanned for each distinct Control_Code, producing one output per Control_Code, then the same for every subsequent Control_Code.
So, for example, a different JSON file would be produced for a different Control_Code from the same CSV file. Example (notice the different Control_Code; the other values would of course change as well, but this is just to illustrate):
{
"Time":"2018-10-21T00:08:03",
"Control_Code": "BHY",
"metrics": [
{
"Metric": 3721,
"Organisation":"AD450",
"Value": 20,
"Datetime":"2018-10-21T00:08:00"
},
{
"Metric": 1234,
"Organisation":"HG650",
"value": 88,
"datetime":"2018-10-21T00:08:00"
}
]
}
Thanks for any advice/information in advance.
|
|
|
|
|
Concentrate on reading the CSV first - I use A Fast CSV Reader[^] - it does all the donkey work for you and can load it into a DataTable very easily.
When you have that, create the appropriate C# classes to hold your expected data - running your JSON data sample through Convert JSON to C# Classes Online - Json2CSharp Toolkit[^] gave these so that'll be a good place to start:
public class MetricEntry
{
public int Metric { get; set; }
public string Organisation { get; set; }
public int Value { get; set; }
public DateTime Datetime { get; set; }
public int? value { get; set; }
public DateTime? datetime { get; set; }
}
public class Root
{
public DateTime Time { get; set; }
public string Control_Code { get; set; }
public List<MetricEntry> metrics { get; set; }
}
Then process the table data to fill out your classes. When you have them, use Json.NET - Newtonsoft[^] to produce the JSON string - you can do what you like with it from there.
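That "process the table data" step boils down to a GroupBy on Control_Code with one root object per group. A sketch, under the assumption that the CSV rows have already been parsed into a flat row type (the CsvRow record here is hypothetical); serialization is shown with the built-in System.Text.Json, though Json.NET's JsonConvert.SerializeObject works the same way:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text.Json;

// Hypothetical flat shape of one parsed CSV line.
public record CsvRow(DateTime Time, string ControlCode, int Metric,
                     string Organisation, int Value, DateTime MetricTime);

public static class CsvToJson
{
    // One JSON document per distinct Control_Code.
    public static Dictionary<string, string> Convert(IEnumerable<CsvRow> rows)
    {
        return rows
            .GroupBy(r => r.ControlCode)
            .ToDictionary(
                g => g.Key,
                g => JsonSerializer.Serialize(new
                {
                    Time = g.First().Time,
                    Control_Code = g.Key,
                    metrics = g.Select(r => new
                    {
                        r.Metric,
                        r.Organisation,
                        r.Value,
                        Datetime = r.MetricTime
                    }).ToList()
                }));
    }
}
```

Each dictionary entry can then be written out as its own file with File.WriteAllText, keyed by the Control_Code.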
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
Steps
1. Determine how to parse the CSV correctly
1.a. Load it into a flat (list) data structure.
2. Sort it
2.a Determine the appropriate collection where the control code is the primary key.
2.b Iterate through 1.a. If the control code already exists, add record. If it does not exist, create it, then add it.
3. Determine how to create the appropriate JSON. (NOT from the above, but rather just putting it out with fake data.)
3.a Learn how to write a file
3.b Learn how to write the JSON format.
4. Iterate through 2.a, feed into 3, to produce each file.
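Step 2.b above, spelled out as code (assuming step 1 produced each CSV line as a string[] and you know which column holds the control code; both are assumptions for illustration):

```csharp
using System.Collections.Generic;

public static class Grouper
{
    // Accumulate records per control code: if the code already exists,
    // add the record; if not, create the bucket, then add it.
    public static Dictionary<string, List<string[]>> GroupByControlCode(
        IEnumerable<string[]> records, int controlCodeColumn)
    {
        var byCode = new Dictionary<string, List<string[]>>();
        foreach (var record in records)
        {
            string code = record[controlCodeColumn];
            if (!byCode.TryGetValue(code, out var bucket))
            {
                bucket = new List<string[]>();  // does not exist: create it
                byCode[code] = bucket;
            }
            bucket.Add(record);                 // then add the record
        }
        return byCode;
    }
}
```

Step 4 is then a foreach over the returned dictionary, handing each bucket to whatever writes the JSON.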
|
|
|
|
|
Your design is flawed: it doesn't reflect the data, and it results in a redundant data structure.
The "control code" belongs at the metric level, while "Time" is the key of the "root".
From there you can create (other) "object graphs" (more easily).
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
|
|
|
|
That's the problem with CSV: it's a flat data store with no hierarchical structure, which always leads to redundant duplication as the only way to build a "proper" structure when the flat data is processed.
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
I'll accept whatever CSV has to offer under the circumstances (e.g. Excel -> csv), but that doesn't mean propagating the "bad design" in the ETL phase ... one can always go back to the bad design after that.
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
|
|
|
|
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
Gerry Schmitz wrote: Your design is flawed;
Good catch.
That would need to be resolved before anything can proceed.
|
|
|
|
|
I have an ItemObservableCollection called _MyChildren holding a collection of clsChild. I am trying to get notifications when a clsChild.name string is changed.
Error I am getting on MyChildren.ItemPropertyChanged:
Severity Code Description Project File Line Suppression State
Error CS1661 Cannot convert anonymous method to type 'EventHandler<ItemPropertyChangedEventArgs<clsChild>>' because the parameter types do not match the delegate parameter types C-Sharp-Tests C:\Users\Jeff\Documents\GitHub\C-Sharp-Tests\C-Sharp-Tests\C-Sharp-Tests\clsParent.cs 21 Active
clsParent:
using Countdown;
using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.ComponentModel;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace C_Sharp_Tests
{
internal class clsParent : INotifyPropertyChanged
{
public event PropertyChangedEventHandler PropertyChanged;
private ItemObservableCollection<clsChild> _MyChildren;
public clsParent()
{
_MyChildren = new ItemObservableCollection<clsChild>();
this._MyChildren.ItemPropertyChanged += delegate (object sender, PropertyChangedEventArgs e)
{
if (string.Equals("IsChanged", e.PropertyName, StringComparison.Ordinal))
{
this.RaisePropertyChanged("IsChanged");
}
};
}
}
}
clsChild:
using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.ComponentModel;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace C_Sharp_Tests
{
internal class clsChild : INotifyPropertyChanged
{
public event PropertyChangedEventHandler PropertyChanged;
public string name { get; set; }
}
}
ItemObservableCollection:
using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.ComponentModel;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Countdown
{
public sealed class ItemObservableCollection<T> : ObservableCollection<T>
where T : INotifyPropertyChanged
{
public event EventHandler<ItemPropertyChangedEventArgs<T>> ItemPropertyChanged;
protected override void InsertItem(int index, T item)
{
base.InsertItem(index, item);
item.PropertyChanged += item_PropertyChanged;
}
protected override void RemoveItem(int index)
{
var item = this[index];
base.RemoveItem(index);
item.PropertyChanged -= item_PropertyChanged;
}
protected override void ClearItems()
{
foreach (var item in this)
{
item.PropertyChanged -= item_PropertyChanged;
}
base.ClearItems();
}
protected override void SetItem(int index, T item)
{
var oldItem = this[index];
oldItem.PropertyChanged -= item_PropertyChanged;
base.SetItem(index, item);
item.PropertyChanged += item_PropertyChanged;
}
private void item_PropertyChanged(object sender, PropertyChangedEventArgs e)
{
OnItemPropertyChanged((T)sender, e.PropertyName);
}
private void OnItemPropertyChanged(T item, string propertyName)
{
var handler = this.ItemPropertyChanged;
if (handler != null)
{
handler(this, new ItemPropertyChangedEventArgs<T>(item, propertyName));
}
}
}
public sealed class ItemPropertyChangedEventArgs<T> : EventArgs
{
private readonly T _item;
private readonly string _propertyName;
public ItemPropertyChangedEventArgs(T item, string propertyName)
{
_item = item;
_propertyName = propertyName;
}
public T Item
{
get { return _item; }
}
public string PropertyName
{
get { return _propertyName; }
}
}
}
|
|
|
|
|
I don't see why you didn't just stick with ObservableCollection. It's safe for concurrent reads. You simply subscribe / unsubscribe. As if you can predict which properties to refresh on the client side.
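For what it's worth, the compiler error in the question is just a signature mismatch: the handler's second parameter must be ItemPropertyChangedEventArgs&lt;clsChild&gt;, not PropertyChangedEventArgs. Here is a sketch of the plain-ObservableCollection subscribe/unsubscribe approach instead; note the Child class here is hypothetical and, unlike the clsChild posted, actually raises PropertyChanged (without that, nothing fires at all):

```csharp
using System;
using System.Collections.ObjectModel;
using System.ComponentModel;

public class Child : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;
    private string _name;
    public string Name
    {
        get => _name;
        set
        {
            _name = value;
            // The setter must raise the event, or no subscriber ever hears about it.
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(Name)));
        }
    }
}

public class Parent
{
    public ObservableCollection<Child> Children { get; } = new ObservableCollection<Child>();

    // Re-broadcast which child property changed.
    public event EventHandler<string> ChildChanged;

    public Parent()
    {
        // Hook/unhook each child's PropertyChanged as items come and go.
        Children.CollectionChanged += (s, e) =>
        {
            if (e.OldItems != null)
                foreach (Child c in e.OldItems) c.PropertyChanged -= OnChildChanged;
            if (e.NewItems != null)
                foreach (Child c in e.NewItems) c.PropertyChanged += OnChildChanged;
        };
    }

    private void OnChildChanged(object sender, PropertyChangedEventArgs e)
        => ChildChanged?.Invoke(sender, e.PropertyName);
}
```

One caveat with this pattern: Clear() raises a Reset notification without OldItems, so if you use Clear you must unhook the handlers yourself first.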
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
|
|
|
|
I've known about Visual Studio user secrets for a while but haven't used them. And I know that GitHub has secrets, which can be used in repos and I'm sure GH Actions, but again I haven't used them.
That's all going to change very soon. I'd like to use VS user secrets for connection strings, API keys, etc. But when I commit to a GH repo, I'd like the GH Action to use the GH secrets saved in that repo on GH. The only thing is, how do I do that? The syntax, I'm sure, isn't the same.
Rod
|
|
|
|
|
Thank you, Richard. I'm reading the document you linked to now. I'm about a third of the way through it. It speaks of ASP.NET Core, which in my case is fine, as this is a new app. However, we have hundreds of custom apps all written using old versions of the .NET Framework. Does that mean that Visual Studio user secrets will not work?
Rod
|
|
|
|
|
Rod at work wrote: Does that mean that Visual Studio user secrets will not work? Sorry, I do not know the definitive answer, but there is a section on that page titled "Migrating User Secrets from ASP.NET Framework to ASP.NET Core", which may help.
|
|
|
|
|
I've finished reading the document, and I did see the "*Migrating User Secrets from ASP.NET Framework to ASP.NET Core*" section. Unfortunately, that won't work for me. I wish we were allowed to update code, but the motto where I work is, "If the code isn't broken beyond repair, do NOT modify it!! And NEVER, EVER update or upgrade it!!!" I have no choice but to work on old code using whatever .NET Framework version it was originally written in. And some of it dates back to .NET Framework 1.1.
Rod
|
|
|
|
|
Another issue I'm unclear on, in reference to that "Safe storage" document, is the use of environment variables for safely storing secrets. The document brings up environment variables, saying they're a secure way to handle secrets, then drops the subject. Please excuse my abysmal ignorance of how to use environment variables on a development machine, on the deployed server, and in an Azure App Service. How does that work?
Rod
|
|
|
|
|
Sorry, I have not studied the paper in detail. I think you may need to ask Microsoft for clarification.
|
|
|
|
|
Rod at work wrote: The only thing is, how do I do that
By providing a key that only shows up on a developer machine, such as a specific environment variable that exists only on dev machines. The code takes a different path depending on whether that variable exists.
The environment variable does not itself carry any security information. It just exists.
That creates no security exposure, because if it ever starts existing on a prod box, nothing will work.
This solves your github problem.
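As a sketch of that idea (the variable name DEV_SECRETS and the helper below are made up; only the variable's presence matters, not its value):

```csharp
using System;

public static class SecretSource
{
    // True only on a developer machine where the marker variable exists.
    public static bool IsDevMachine =>
        Environment.GetEnvironmentVariable("DEV_SECRETS") != null;

    public static string GetConnectionString()
    {
        return IsDevMachine
            ? "Server=localhost;Database=Dev;Trusted_Connection=True"  // local-only value
            : LoadFromProductionStore();
    }

    private static string LoadFromProductionStore()
    {
        // Hypothetical: in production the real secret comes from the host's
        // configuration (App Service settings, GH Actions secrets, etc.).
        throw new InvalidOperationException("Production secret store not configured.");
    }
}
```

On a prod box the marker is absent, so the dev value can never leak into production; misconfiguration fails loudly instead of silently using the wrong credentials.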
Rod at work wrote: However, we have hundreds of custom apps...And NEVER, EVER update or upgrade it!!!
(From other posts)
Which is exactly correct. If you bring an app forward, that tech-debt activity should make NO functional changes except those necessary to bring it forward. These days one can often make the case for doing that to existing code, both for obsolescence and for security reasons.
But besides that, presumably those apps already manage secret information via some mechanism, and you should not attempt to introduce another idiom unless there is a real need. It adds no value to have multiple idioms that everyone must know just to do development and maintenance.
|
|
|
|
|