|
|
I guess you could take the view that if it isn't important, just overwrite everything.
But if it is important, you should have a backup anyway, so another drive makes sense just for that reason. It's also less risky if there is a problem (and you end up needing a complete re-install anyway).
megaadam wrote: I have 0.6 TB of data,
I just checked Amazon: the first 1 TB USB drive listed is only $40, and the fourth one down is only $20.
|
|
|
|
|
Preface: I am not an expert on ext4 at all. But in general, shrinking a file system is a bit riskier than expanding one, though in practice not much more risky than defragmenting. Still, as mentioned, if your data is important it should be backed up. If it's not, then the next time you install Linux consider having a separate partition for your user files that will remain even if you wipe the OS partition for a reinstall.
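If you do end up shrinking, the order of operations matters more than the tool. A rough sketch, assuming the ext4 filesystem lives on /dev/sdb1 (a made-up device, not your running root filesystem) and you want it down to 700 GiB, which must still be larger than the data it holds:
sudo umount /dev/sdb1          # the filesystem must be offline to shrink it
sudo e2fsck -f /dev/sdb1       # resize2fs requires a fresh, clean check first
sudo resize2fs /dev/sdb1 700G  # shrink the ext4 filesystem itself to 700 GiB
Only after the filesystem is smaller do you shrink the partition around it (with parted, gparted, etc.), and never to less than the new filesystem size.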
Jeremy Falcon
|
|
|
|
|
|
|
|
|
|
Member 14968771 wrote: Works as expected.
It is a pain to do this each time I rebuild / develop my executable....
Does Linux not allow you to specify which executables to run on startup of the OS?
Have your automated builds add a copy there and done?
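If the startup part is what you're after, one common way is a systemd unit. A minimal sketch, assuming a hypothetical binary at /usr/local/bin/myapp:
sudo tee /etc/systemd/system/myapp.service >/dev/null <<'EOF'
[Unit]
Description=My rebuilt executable (example)
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now myapp.service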
Perhaps I misunderstand your question. Explain more please.
Bastard Programmer from Hell
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
|
|
|
|
|
Member 14968771 wrote: It is a pain to do this each time I rebuild / develop my executable....
Technically because that is not part of the build.
But how do you build? Every build system I have seen allows post compilation steps.
If you are just manually building source code, then it may be time to look at an existing build system that already provides that framework.
Or just create a script file that does the build and then runs the command.
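Something like this, for example (just a sketch; the compiler line and the deploy path are made up, so substitute your real build command):
#!/bin/sh
# build-and-run.sh -- hypothetical example: build, copy, then run
set -e                              # stop at the first failure
gcc -o myapp main.c                 # replace with your actual build command
cp myapp /usr/local/bin/myapp       # copy it to wherever it needs to live
/usr/local/bin/myapp                # and run it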
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Then you need a backup system that can clone the entire drive.
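For a whole-drive clone even plain dd will do, though dedicated tools such as Clonezilla are friendlier. A sketch, assuming the system drive is /dev/sda and a spare drive of at least the same size is /dev/sdb (both hypothetical; double-check with lsblk first, because dd will happily destroy the wrong disk):
# boot from a live USB so the source drive is not in use, then:
sudo dd if=/dev/sda of=/dev/sdb bs=4M status=progress conv=fsync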
|
|
|
|
|
|
|
|
|
|
No, but to recover that partition if it gets destroyed, you would need a backup/restore application that is self-bootable. Hence the Google search I posted; you will need to check the various products to see whether this feature is available.
|
|
|
|
|
Member 14968771 wrote: I do not need to backup my data, but I like to have a backup of my OS.
The first question you have to ask yourself is: why?
If you're keen to be able to get your system going again after a crash, it can be done. But literally every change you make outside of userland will need a new backup if you so much as sneeze. If it's a new system, the backup is useless (for the most part). If it's the same system, this is exactly what disk mirroring with RAID was created for. No reason to reinvent the wheel.
That being said, tar was made for this sorta thing. Just don't go backing up /dev and /proc. You can back up /tmp, but it's pointless.
But you'd be much better off just creating a post-install shell script to recover from a crash, with maybe an /etc tarball.
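If you do go the tar route, a rough sketch (the destination mount point is made up; run it as root and write the archive to a drive other than the one being backed up):
sudo tar --exclude=/dev --exclude=/proc --exclude=/sys \
         --exclude=/tmp --exclude=/run --exclude=/mnt \
         -czpf /mnt/backup/rootfs.tar.gz /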
Jeremy Falcon
|
|
|
|
|
Installed a second NAS-class disk in my home server, replacing a motley assortment of consumer-grade drives, one of which died recently. RAID resync is almost complete.
Another useful tip:
I set up a cron job to do a "dpkg -l" and a "snap list" to a user-space file, which is then included in my daily user backups.
So if I need to rebuild or replace a machine, I don't have to remember all the packages I've downloaded over time.
A quick file comparison tells me what I missed. (Meld is my weapon of choice, btw.)
As well as user space all over, I also back up /etc and /var from the server.
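Roughly what that cron job looks like, as a sketch (the file names and schedule are just examples):
#!/bin/sh
# package-list.sh -- dump the installed package lists into a directory
# that the daily user backup already covers (paths are examples)
dpkg -l   > "$HOME/backups/dpkg-list.txt"
snap list > "$HOME/backups/snap-list.txt"
# crontab entry (crontab -e), once a day at 06:00:
#   0 6 * * * /home/me/bin/package-list.sh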
Software rusts. Simon Stephenson, ca 1994. So does this signature. me, 2012
|
|
|
|
|
|
|
|
|
|
|
Reboot in single-user mode. Get your files backed up. If that doesn't work, take the drive out of the system and get the files off it from another system.
If it's corrupt, that sucks. Take it to data recovery if you don't have backups.
Check fdisk, check fstab, or reinstall. Try not to bork your system again.
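If you do pull the drive and hang it off another box, getting the files off is usually just something like this (the device name is hypothetical; check lsblk first):
sudo mkdir -p /mnt/rescue
sudo mount /dev/sdb1 /mnt/rescue                  # the old root or home partition
cp -a /mnt/rescue/home/youruser /some/backup/dir  # copy your files somewhere safe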
Jeremy Falcon
|
|
|
|
|
Hi,
I would like to ask the user to enter a command that will be executed, but I don't understand why I can't assign two commands to the same variable.
Here is the code:
user@localhost:~# read -ep "command: " cmd ; "$cmd"
And result:
command: id ; date
-bash: id ; date : command not found
but if I type a single command, it works.
Thanks for your help
|
|
|
|
|
When the user types in id; date, that becomes the value of the $cmd variable. So when you want to execute the command, it's as if you had typed "id; date" at the command line, quote marks included. The bash interpreter is therefore looking for a single command named id; date. It's not treating the string as a script to be parsed and executed. For that you'll need to use eval:
[k5054@localhost ~]$ cmd="id; date"
[k5054@localhost ~]$ $cmd
-bash: id;: command not found
[k5054@localhost ~]$ eval $cmd
uid=1002(k5054) gid=1002(k5054) groups=1002(k5054)
Mon 17 Oct 2022 02:37:19 PM UTC
[k5054@localhost ~]$
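So for the original one-liner, something along these lines should do it (keep in mind eval will execute whatever the user types, so only use it with trusted input):
read -ep "command: " cmd ; eval "$cmd"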
Keep Calm and Carry On
|
|
|
|
|
I spent an hour earlier trying to figure out a way to do this. I need to get my Unix books out of storage.
|
|
|
|
|
|
|
|
|
|
|
I am developing a website that uses JavaScript, PHP and jQuery.
One of the screens executes a PHP script that connects to the IP of another PC that
is on the same network as the PC running the web server.
However, this PHP script fails because it cannot reach the other PC.
Both PCs have Debian 11 installed.
Since pinging one PC from the other requires sudo on both machines, I thought the problem might be that the web user (www-data) needs to be added to the netdev group, so I ran this command:
$ sudo adduser www-data netdev
Adding user `www-data' to group `netdev' ...
Adding user www-data to group netdev
Done.
But the PHP script is still unable to access the other PC.
Should I configure something else?
Any comment or suggestion is welcome.
|
|
|
|
|
Check the firewall settings on the destination PC. You may be blocking the port on the destination for incoming connections. There's an article that may help here: IBM Documentation
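A couple of quick ways to look at the destination's firewall from a shell (which one applies depends on what your Debian 11 box is actually using):
sudo nft list ruleset        # nftables, the Debian 11 default
sudo iptables -L -n -v       # legacy iptables view
sudo ufw status verbose      # only if ufw is installed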
Member 15796760 wrote: pinging one PC from the other requires sudo on both machines
I'm not sure exactly what that means. You should be able to ping a (reachable) host without needing sudo. For example, assuming that you have your resolver and gateways correctly configured, you should be able to
ping 8.8.8.8
ping google.com
ping 192.168.100.100
etc
Keep Calm and Carry On
|
|
|
|
|
This is a PHP script that runs on PC1 and calls another PHP script on PC2, where the IPs of PC1 and PC2 are known.
The call made by the PHP script on PC1 is as follows:
$file_headers = @get_headers(http:
How can I know which port would need to be opened on PC2?
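get_headers() is just an HTTP request, so by default that means TCP port 80 (or 443 if the URL is https). A quick way to check from PC1's shell whether that port on PC2 answers (the IP here is made up):
curl -sI http://192.168.1.50/    # should print the HTTP response headers
nc -zv 192.168.1.50 80           # or just test whether the port is open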
|
|
|
|