Locking Executing Files: Windows does, Linux doesn’t. Why?

I noticed that when a file is being executed on Windows (an .exe or a .dll), it is locked and cannot be deleted, moved, or modified.

Linux, on the other hand, does not lock executing files: you can delete, move, or modify them.

Why does Windows lock when Linux does not? Is there an advantage to locking?

Answers

I think Linux/Unix doesn't use the same locking mechanism because these systems were built from the ground up as multi-user systems, which means expecting the possibility of multiple users using the same file, maybe even for different purposes.

Is there an advantage to locking? Well, it could reduce the number of pointers the OS has to manage, but nowadays that saving is negligible. The biggest advantage I can think of is that you avoid some user-visible ambiguity: if user A is running a binary and user B deletes it, the actual file has to stick around until user A's process completes. Yet if user B, or any other user, looks for it on the file system, they won't find it, even though it continues to take up space. Not really a huge concern to me.

I think it's largely a question of backwards compatibility with Windows file systems.

Linux has a reference-count mechanism, so you can delete a file while it is executing, and it will continue to exist as long as some process (which previously opened it) has an open handle to it. The directory entry for the file is removed when you delete it, so it cannot be opened any more, but processes already using the file can still use it. Once all processes using the file terminate, it is deleted automatically.
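A minimal sketch of this reference-count behaviour on Linux/Unix, using Python's `os` module (the file name and contents here are arbitrary):

```python
import os
import tempfile

# Create a regular file and keep an open descriptor to it.
fd, path = tempfile.mkstemp()
os.write(fd, b"still here")

# Remove the directory entry. The inode survives because fd still references it.
os.unlink(path)
exists_after_unlink = os.path.exists(path)   # False: the name is gone

# The already-open descriptor keeps working.
os.lseek(fd, 0, os.SEEK_SET)
data = os.read(fd, 100)                      # b"still here"

os.close(fd)  # last reference dropped; the kernel now frees the inode
```

The same two-step behaviour (name removed immediately, data freed at the last close) is what the answer describes for executables.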

Windows does not have this capability, so it is forced to lock the file until all processes executing from it have finished.

I believe that the Linux behavior is preferable. There are probably some deep architectural reasons, but the prime (and simple) reason I find most compelling is that in Windows, you sometimes cannot delete a file, you have no idea why, and all you know is that some process is keeping it in use. On Linux this never happens.

As far as I know, Linux does lock executables when they're running; however, it locks the inode. This means that you can delete the "file", but the inode stays on the filesystem, untouched, and all you really deleted is a link.

Unix programs use this way of thinking about the filesystem all the time: create a temporary file, open it, delete the name. Your file still exists, but the name is freed up for others to use and no one else can see it.

NT variants have the openfiles command, which will show which processes have handles on which files. It does, however, require enabling the system global flag 'maintain objects list'.

openfiles /local /?

tells you how to do this, and also that a performance penalty is incurred by doing so.

I think you're being too absolute about Windows. Normally, it doesn't allocate swap space for the code part of an executable. Instead, it keeps a lock on the executable and its DLLs; if discarded code pages are needed again, they're simply reloaded from the file. But with the /SWAPRUN link option, those pages are kept in the swap file instead. This is used for executables run from CD or network drives, and in that case Windows doesn't need to lock the files.

For .NET, look at Shadow Copy.

Linux does lock the files. If you try to overwrite a file that's executing, you will get ETXTBSY ("Text file busy"). You can, however, remove the file, and the kernel will delete it when the last reference to it is dropped. (If the machine wasn't cleanly shut down, such files are the cause of the "Deleted inode has zero dtime" messages when the filesystem is checked: they weren't fully deleted, because a running process had a reference to them, and now they are.)
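A hedged sketch of the ETXTBSY behaviour on Linux: copy an executable (here the system `sleep` binary, found via PATH, which is an assumption about the environment), run the copy, and try to open it for writing while it executes.

```python
import errno
import os
import shutil
import subprocess
import tempfile
import time

# Copy a known executable; assumes a POSIX system with `sleep` on PATH.
workdir = tempfile.mkdtemp()
exe = os.path.join(workdir, "sleeper")
shutil.copy(shutil.which("sleep"), exe)
os.chmod(exe, 0o755)

# Run the copy, then try to open it for writing while it is executing.
proc = subprocess.Popen([exe, "2"])
time.sleep(0.3)          # crude: give the child time to exec (racy in theory)

write_errno = None
try:
    os.close(os.open(exe, os.O_WRONLY))
except OSError as e:
    write_errno = e.errno   # ETXTBSY ("Text file busy") on Linux

proc.wait()
```

Deleting the running copy with `os.unlink(exe)` instead would succeed, which is exactly the asymmetry this answer describes: write access is refused, unlinking is not.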

This has some major advantages: you can upgrade a running process by deleting the executable, replacing it, and then restarting the process. Even init can be upgraded like this: replace the executable, send it a signal, and it will re-exec() itself, without requiring a reboot. (This is normally done automatically by your package management system as part of its upgrade.)

Under Windows, replacing a file that's in use is a major hassle, generally requiring a reboot to make sure no process is still using it.

There can be some problems. For example, if you have an extremely large logfile and you remove it but forget to tell the logging process to reopen the file, it will hold the reference, and you'll wonder why your disk didn't suddenly get a lot more free space.

You can also use this trick under Linux for temporary files: open the file, delete it, then continue to use it. When your process exits (for whatever reason, even a power failure), the file will be deleted.

Programs like lsof and fuser (or just poking around in /proc/<pid>/fd) can show you which processes have open files that no longer have a name.
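The /proc inspection can be sketched directly: on Linux, /proc/<pid>/fd/<n> is a symlink to the file a descriptor refers to, and for an unlinked file the kernel suffixes the link target with " (deleted)". This is Linux-specific procfs behaviour.

```python
import os
import tempfile

# Open a file, then remove its only name.
fd, path = tempfile.mkstemp()
os.unlink(path)                      # the file is now open but nameless

# Ask the kernel what this descriptor points at.
target = os.readlink(f"/proc/self/fd/{fd}")
deleted_marker = target.endswith(" (deleted)")   # True on Linux

os.close(fd)
```

This is essentially what lsof does when it reports open-but-deleted files.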

Executables are demand-paged into memory when run, which means that portions of the executable are loaded from the file only as they are needed. If the file could be modified before all sections were mapped, that could cause major instability.

Whether the code in an executing file should be locked is a design decision, and MS simply decided to lock, because it has clear advantages in practice: that way you don't need to know which code, in which version, is used by which application.

This is a real problem with the default Linux behaviour, which most people simply ignore. If system-wide libraries are replaced, you can't easily know which applications use code from those libraries; at best, the package manager knows some users of those libraries and restarts them. But that only works for general, well-known things like, say, PostgreSQL and its libraries. The more interesting scenario is when you develop your own application against some third-party libraries and those get replaced, because most of the time the package manager simply doesn't know your app. And this is not only a problem with native C code; it can happen with almost anything: run httpd with mod_perl and some Perl libraries installed by the package manager, and let the package manager update those Perl libraries for whatever reason. It won't restart your httpd, simply because it doesn't know the dependencies. There are plenty of examples like this one, simply because any file can contain code that is in use in memory by some runtime: think of Java, Python, and the like.

So there's a good reason to hold the opinion that locking files by default may be a good choice. You don't need to agree with that reasoning, though.

So what did MS do? They created an API which gives the calling application the chance to decide whether files should be locked, but they decided that the default of this API is to give the first calling application an exclusive lock. Have a look at the API around CreateFile and its dwShareMode argument. That is the reason why you are sometimes unable to delete files in use by some application: it simply didn't care about your use case, used the default values, and therefore got an exclusive lock from Windows on the file.
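A hedged sketch of the share-mode behaviour via ctypes. It only exercises the Windows path when actually run on Windows (the file name locked.txt is arbitrary); on other platforms it just records that the demo was skipped.

```python
import os
import sys

delete_blocked = None   # None = demo skipped (not running on Windows)

if sys.platform == "win32":
    import ctypes
    from ctypes import wintypes

    GENERIC_READ  = 0x80000000
    OPEN_EXISTING = 3

    with open("locked.txt", "w") as f:
        f.write("x")

    kernel32 = ctypes.windll.kernel32
    kernel32.CreateFileW.restype = wintypes.HANDLE
    kernel32.CreateFileW.argtypes = [
        wintypes.LPCWSTR, wintypes.DWORD, wintypes.DWORD, wintypes.LPVOID,
        wintypes.DWORD, wintypes.DWORD, wintypes.HANDLE,
    ]

    # dwShareMode = 0: no sharing at all. Passing FILE_SHARE_DELETE (4) here
    # would instead allow other processes to delete the file while it is open.
    handle = kernel32.CreateFileW("locked.txt", GENERIC_READ,
                                  0,            # dwShareMode: exclusive
                                  None, OPEN_EXISTING, 0, None)
    try:
        os.remove("locked.txt")   # expected to fail: no FILE_SHARE_DELETE
        delete_blocked = False
    except PermissionError:
        delete_blocked = True
    finally:
        kernel32.CloseHandle(handle)
        if os.path.exists("locked.txt"):
            os.remove("locked.txt")
```

The point of the sketch is only that the caller, not the OS, chooses the share mode; the exclusive behaviour people complain about is the zero default.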

Please don't believe people telling you that Windows doesn't use reference counting on HANDLEs, or doesn't support hard links, or the like; that is completely wrong. Almost every API using HANDLEs documents its behaviour regarding reference counting, and you can easily read in almost any article about NTFS that it does indeed support hard links and always did. Since Windows Vista it has supported symlinks as well, and the support for hard links has been improved with APIs to enumerate all hard links of a given file.

Additionally, you may simply want to compare the structures used to describe a file in, e.g., ext4 with those of NTFS; they have a lot in common. Both use the concept of extents, which separates data from attributes like the file name, and inodes are pretty much just another name for the same idea. Even Wikipedia lists both file systems in the same article.

There's really a lot of FUD on the net around file locking in Windows compared to other OSes, just like there is about defragmentation. 😉 Some of this FUD can be ruled out simply by reading a bit of Wikipedia.