Study some Command Lines, Editors, Regular Expressions (regex), and String Processing. There are plenty of “GNU/Linux” sites around, so do your own GSGS
Operating System Computers keep getting faster and faster, and by the start of the 1950s they had gotten so fast that it often took longer to manually load programs via punch cards than to actually run them! The solution was the operating system (or OS), which is just a program with special privileges that allows it to run and manage other programs. So today, we’re going to trace the development of operating systems from Multics and the Atlas Supervisor to Unix and MS-DOS, and take a look at how these systems heavily influenced popular OSes like Linux, Windows, macOS, and Android that we use today.
History of Operating System Intermezzo: A crash course that gives us a better understanding of operating systems and how they work, recalling their history and helping us understand them comprehensively.
25 Years of Linux in 5 minutes and BSD, Linux, POSIX, UNIX Playlist (Video) UNIX is an operating system invented by Dennis Ritchie and Ken Thompson at Bell Labs. It was a commercial product sold by AT&T. Linux is a kernel, which was created by Linus Torvalds at the University of Helsinki. A kernel is the program that is always running, so on its own it is not as complete/full a system as UNIX: see Linux as an engine, and UNIX as a car. Linux is more like a clone of UNIX, and it’s open source, unlike UNIX. For extra pepper, you can read What is the very fundamental difference between Unix, Linux, BSD and GNU? and watch Unix vs Linux Difference Between Linux and Unix Intellipaat. Meanwhile, macOS is still a descendant of UNIX. As for POSIX, it’s a standard interface for operating systems. You can read more at What is the meaning of POSIX?.
11 Reasons Why Linux Is Better Than Windows Windows is quite familiar among tech users since it is easy to use and not really complicated. Yet, if we look at it from a “reliability and maintainability” perspective, Linux is probably one of the best options. Here’s why: Linux is pretty light and has modest minimum hardware requirements, so users will not need to buy new hardware just because of incompatibility with the operating system.
How Operating Systems Work To put it simply, an operating system has a task similar to both a translator and a manager. It helps translate binary into graphical interfaces, and it manages resources, program scheduling, and virtualization. By deciding which program needs more resources, which program has to run first, and which programs can run concurrently, the operating system surely helps us a lot.
Game Console Operating Systems We know that our computer has its own operating system; Windows, macOS, or maybe a Linux derivative such as Ubuntu or Debian. But have we ever thought about a gaming console’s (such as the Nintendo Switch, PS5, etc.) operating system?
Von Neumann Architecture A video explaining the Von Neumann architecture and its history. The Von Neumann architecture is a very simple computer architecture that we largely still follow today. It was developed during the wartime period, and it consists of a storage medium (memory), a control unit (CPU), an arithmetic unit (ALU), and input and output.
Memory Hierarchy Design and its Characteristics In computer system design, the memory hierarchy is an arrangement that organizes memory in such a way as to minimize access time. The memory hierarchy design in a computer system mainly involves different storage devices; most computers are built with extra storage so they can work beyond the capacity of main memory. This website discusses the types and characteristics of the memory hierarchy.
fzf FZF is a neat little tool that makes navigation through the command line much much easier. You could scout through folders to find a file, jump to the folder, or repeat a command from history. You could also use it to see through a git remote, logs, branches, etc. It’s really versatile.
tmux Tmux is a terminal multiplexer whose sessions can be detached and reattached at any given time. It allows you to open multiple terminals and store them in a session.
Xargs Should Be In Your Command Line Toolbag There are a few tools that don’t read standard input, such as echo, ls, etc. But there are cases when you want to take standard input and pass it to a command that doesn’t read it. Xargs solves this problem, and this video will show you a little bit about how it works.
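A tiny sketch of the idea: xargs reads items from standard input and turns them into command-line arguments (the inputs below are made up for the demo).

```shell
# seq prints 1, 2, 3 on separate lines; xargs packs them into one argv.
seq 3 | xargs echo          # prints: 1 2 3

# -n limits how many arguments go to each invocation: echo runs 3 times here.
seq 3 | xargs -n1 echo

# -I names a placeholder so the item can land anywhere in the command.
echo "world" | xargs -I{} echo "hello {}"   # prints: hello world
```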
AWK is a command-line-based text-processing program which is very useful for extracting information from raw data, especially from the output of other programs. This video walks through some basic AWK use cases and commands which are often used in real life.
Learning Awk Is Essential For Linux Users
Awk is a powerful tool for scripting, mostly used to scan and process patterns. The video here serves as an introduction to awk by showing a few examples of its common usage.
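For a flavor of that common usage, here are two one-liners on made-up input: summing a column, and splitting on a custom separator.

```shell
# awk splits each line into fields ($1, $2, ...) and runs the program
# once per line; the END block runs after the last line.
printf '1 2\n3 4\n' | awk '{ sum += $2 } END { print sum }'   # prints: 6

# -F sets the field separator; print the first field of each line.
printf 'root:x:0\nuser:x:1000\n' | awk -F: '{ print $1 }'
```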
SHA-256 Algorithm SHA-256 (Secure Hash Algorithm) is a cryptographic hash algorithm used to secure data. In security, hashing means converting data into a fixed-size digest in a one-way fashion; unlike encryption, there is no key that turns the digest back into the original data. The 256 in SHA-256 means 256 bits long: every input hashed with SHA-256 is transformed into a 256-bit string.
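You can try this on most Linux systems with the coreutils sha256sum tool: the same input always yields the same 256-bit (64 hex digit) digest, and any change to the input changes the digest completely.

```shell
# Hash the 5-byte string "hello" (printf avoids a trailing newline).
printf 'hello' | sha256sum
# → 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824  -

# A one-character change produces a totally different digest.
printf 'hello!' | sha256sum
```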
Password Manager Being haunted by feelings of insecurity about your password manager is absolutely normal. It is common to be skeptical when it comes to personal data and information. But few of us know that most password managers use AES 256-bit encryption, which means it is actually quite safe to use a password manager.
Pass - The Standard Unix Password Manager Password managers not only free you from remembering passwords, they also save you from typing those cumbersome strings. But which one should you use? Are the ones being advertised to you safe? If you care less about portability and more about security, you could use the Unix password manager known as pass. It works just like any password manager; the catch is that the only way to access the passwords is by providing the GPG key you initialized it with, meaning that unless you give that key to someone else, your passwords are safe.
Which OS is More Secure: Windows, Linux, or macOS? Some of us might think that macOS is better than Windows in terms of security since, statistically, Windows is one of the most targeted systems. Actually, that is not a hundred percent true. Windows is the most targeted because it is more popular and familiar than macOS, and the fact that Windows tops the target list pushes it to improve its security even further each time.
Asymmetric Encryption This article explains the general knowledge you need to know about asymmetric encryption. After reading it you should know the pros and cons, as well as how to use asymmetric encryption.
Symmetric vs. Asymmetric Encryption: What’s the Difference? Symmetric encryption is a widely used data encryption technique whereby data is encrypted and decrypted using a single, secret cryptographic key. Asymmetric encryption or public-key cryptography or public-key encryption uses mathematically linked public-key and private-key pairs to encrypt and decrypt senders’ and recipients’ sensitive data.
Public Key and Private Key Public keys and private keys are key pairs used for asymmetric encryption and decryption. Data encrypted with the public key can only be decrypted with the matching private key; conversely, data signed with the private key can be verified by anyone holding the public key. This is one of the reasons why the private key cannot be given to just anyone.
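A minimal round trip with the OpenSSL command line sketches this relationship (file names and the message are arbitrary; 2048-bit RSA is used for brevity): anyone can encrypt with the public key, but only the private-key holder can decrypt.

```shell
# Generate an RSA private key, then derive its public key.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out priv.pem 2>/dev/null
openssl pkey -in priv.pem -pubout -out pub.pem

# Anyone with pub.pem can encrypt...
printf 'meet at noon' > msg.txt
openssl pkeyutl -encrypt -pubin -inkey pub.pem -in msg.txt -out msg.enc

# ...but only the holder of priv.pem can decrypt.
openssl pkeyutl -decrypt -inkey priv.pem -in msg.enc   # prints: meet at noon
```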
Cryptography Explains how cryptography allows for the secure transfer of data online. This video explains 256-bit encryption, public and private keys, SSL & TLS, and HTTPS.
Hardware Protection A computer system contains hardware like the processor, monitor, RAM, and more. The operating system ensures that these devices cannot be directly accessed by the user. This website explains the types of hardware protection, such as CPU protection, memory protection, and I/O protection.
Memory Protection Memory protection is a way to control memory access rights on a computer, and is a part of most modern instruction set architectures and operating systems. The main purpose of memory protection is to prevent a process from accessing memory that has not been allocated to it. This prevents a bug or malware within a process from affecting other processes, or the operating system itself.
Static vs. Shared Libraries This is a short video explaining the difference between static and shared libraries. The explanation is very simple and delivered in a very intuitive way, thus highly suitable for beginners in the topic like me.
Linux File System/Structure Explained! In Linux, directories are arranged under the same names on each distribution. In the video, the file system and the purpose of every directory are explained. Some of them are similar even in UNIX-based operating systems like macOS, although Windows’s file system and structure are different. You can also learn more about how the Windows file system works.
MBR vs GPT Which Should You Use? Previously, we would not have needed to store more than about 2TB on a personal computer, so it was totally fine to use an MBR partition table. But nowadays we need lots of space to store our data, and some might need more than 3TB. That is impossible with MBR, since MBR only gives you 2TB of addressable space even if you have a 4TB hard drive. So it is better to use GPT and boot using UEFI instead of BIOS.
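The 2TB figure comes straight from arithmetic: MBR stores sector counts in 32-bit fields, and with the traditional 512-byte sectors that caps addressable space at 2 TiB no matter how big the disk is. A back-of-the-envelope check in the shell:

```shell
sectors=4294967296                         # 2^32: the most a 32-bit LBA field can count
echo $(( sectors * 512 ))                  # prints: 2199023255552  (bytes)
echo $(( sectors * 512 / 1099511627776 ))  # prints: 2  (TiB; 1099511627776 = 1024^4)
```

GPT uses 64-bit LBA fields, which pushes the limit far beyond any disk you can buy.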
Chinese Magical Hard-Drive (Blog Posts + Explanation Video) It’s an open secret that some cheap Chinese products are just bad rip-offs of good brands with some bad tricks involved. In the blog and video we can see people who bought terabytes of MicroSD cards and hard drives for a really cheap price. Turns out, the capacity shown is not real: it is faked through forged FAT32 or exFAT metadata, and even worse, the hardware metadata is forged too! The video and post are interesting because they show that even well-known file systems have flaws. More generally, external drives contain a second chip (a microcontroller) that stores extra metadata, and that is what the scammers forge.
Tree queries for O(nq) (10’ Article) If you’re into competitive programming, this is a really good read. Let me explain first why it is interesting. In C++ (and it’s really similar in C), you can usually do approximately 10^8 operations in one second (the standard competitive-programming approximation). But sometimes the compiler can optimize your code so that the bad constant factors become really small, and with the advantage of cache hits you can fit up to 10^10 operations! How? Essentially, loop through pages that are already in physical memory, rather than going through tons and tons of page faults.
Clustered File System When servers are bundled into a single cluster, managing data retrieval is a tough job for an operating system. By utilizing a clustered file system, an operating system can manage data retrieval easily since it has plenty of alternate nodes, and sharing data between users becomes easier as well.
Why We Need Virtual File System Simply put, without the Virtual File System, the creation of Linux would have been impossible. The VFS embodies the idea that “everything is a file”: with it, the Linux kernel can operate on, use, and deal with basically anything that is exposed as a filesystem.
File Allocation Methods We save our files on our computer and that’s it; we let our operating system manage the file allocation and the resources necessary to keep our files. But how does an operating system manage our files? Basically, it treats our files as blocks: the bigger a file is, the more blocks it will need. There are several file allocation methods; one of them is linked-list allocation, where blocks are scattered yet connected through a linking “string” that makes them look like a list.
How Do Linux File Permissions Work? Linux has a simple system of file permissions and it’s really easy to understand. Basically, each user has a relationship with a file or directory that determines whether they can read, write, or execute it. Besides that, we can also set group permissions, which makes Linux one of the best operating systems for handling permissions.
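A quick sketch of those relationships on a throwaway file: octal 640 means owner read+write, group read, others nothing (this assumes GNU stat, as shipped on Linux).

```shell
f=$(mktemp)              # throwaway file for the demo
chmod 640 "$f"           # rw- r-- --- : owner / group / others
ls -l "$f"               # shows: -rw-r----- ...
stat -c '%a' "$f"        # prints: 640
rm -f "$f"
```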
FUSE FUSE is a userspace filesystem framework that has a kernel module, a userspace library, and a mount utility. FUSE allows non-privileged mounts.
Building a Fuse Filesystem FUSE (Filesystem in USErspace) is a file system framework that allows any user to create their own filesystem without modifying the kernel code. FUSE works on Linux distributions that ship the FUSE kernel module. Through this link, you may be able to build a FUSE filesystem by utilizing libfuse.
Logical vs Physical Address An address generated by the CPU is known as a logical address, which the memory management unit (MMU) translates to a physical address in memory. It is important to distinguish the logical address space a process sees from the physical address space, and a process must fit within the available physical memory. The javatpoint web is a good resource for learning the memory management topic.
Understanding Big and Little Endian Byte Order Big-endian stores data big end first: when looking at multiple bytes, the first byte (lowest address) is the biggest. Little-endian stores data little end first: when looking at multiple bytes, the first byte is the smallest. Besides explaining what big- and little-endian mean, this website also explains the difference between data and numbers. The explanations come with examples, so they are easy to understand.
Bi-Endianness? Apparently, some machines have the ability to switch between big endian and little endian ordering. Hence why it’s called a bi-endian. Mindblowing. It’s almost like being ambidextrous but for computers.
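You can see your own machine’s byte order from the shell (assuming GNU od): on a little-endian machine like x86, the raw byte sequence 01 00 00 00 reads back as the integer 1, while a big-endian machine would read the same bytes as 16777216.

```shell
# Write four raw bytes (01 00 00 00), then reinterpret them as one
# 4-byte signed decimal integer (-An drops addresses, -td4 decodes).
printf '\1\0\0\0' | od -An -td4 | tr -d ' '
# little-endian machine → 1 ; big-endian machine → 16777216
```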
Paging Modern operating systems use paging to manage memory. In this process, physical memory is divided into fixed-sized blocks called frames and logical memory into blocks of the same size called pages. When paging is used, a logical address is divided into two parts: a page number and a page offset. The page number serves as an index into a per-process page table that contains the frame in physical memory that holds the page. The offset is the specific location in the frame being referenced.
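With 4 KiB pages (a common size), that split is simple bit arithmetic: the low 12 bits are the offset and the remaining high bits are the page number. Taking a hypothetical logical address 0x12345:

```shell
addr=$(( 0x12345 ))      # hypothetical logical address (74565 decimal)
echo $(( addr >> 12 ))   # page number: prints 18
echo $(( addr & 0xFFF )) # page offset: prints 837 (0x345)
```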
Paging and Segmentation Paging is a memory management technique in which process address space is broken into blocks of the same size called pages while segmentation is a memory management technique in which each job is divided into several segments of different sizes. Paging and segmentation are processes by which data is stored, then retrieved from the computer’s storage disk. This website will dig deeper into the meaning, process, and key differences between paging and segmentation.
Translation Look Aside Buffer A translation look-aside buffer (TLB) is a hardware cache of the page table. Each TLB entry contains a page number and its corresponding frame. Using a TLB in address translation for paging systems involves obtaining the page number from the logical address and checking if the frame for the page is in the TLB. If it is, the frame is obtained from the TLB. If the frame is not present in the TLB, it must be retrieved from the page table.
Translation Lookaside Buffer (TLB) in Paging In an operating system, a page table is created for each process, containing page table entries (PTE). Each PTE contains information like the frame number (the address in main memory we want to refer to) and some other useful bits (e.g., valid/invalid bit, dirty bit, protection bit, etc.).
How To Optimize The Paging File In Windows If Windows ever warned you that your system is low on virtual memory, it is worth knowing how to manage and increase Pagefile (virtual memory) in Windows 10. The Windows Page File, also called a paging file or swap file, is a file used to temporarily store data. This website will discuss tips to improve paging performance and why we should optimize the paging file.
Memory in OS In many ways, our memories make us who we are, helping us remember our past, learn and retain skills, and plan for the future. And for the computers that often act as extensions of ourselves, memory plays much the same role. Kanawat Senanan explains how computer memory works.
Memory Hierarchy The memory hierarchy separates computer storage into a hierarchy based on response time. Since response time, complexity, and capacity are related, the levels may also be distinguished by their performance and controlling technologies. The highest is processor register, the middle is RAM, and the lower are hard drives and tape backup.
Memory vs Storage and Its History From the POK course we know that computer memory is volatile. There is also storage, like a hard drive, where data stays until deleted or overwritten, even if the power goes down. The crash course video linked above explains the history of memory and storage from before 1950 up until now; what we use affordably and easily today is the result of reinvention and trial and error! It talks about delay-line memory, the stored-program computer (EDVAC) with sequential memory, magnetic-core memory, MIT’s Whirlwind computer (the predominant RAM of its time, which could access any data, unlike delay memory), and solid-state drives, which have no moving parts like optical or compact disks but are still slower than RAM. That’s why computers still use a memory hierarchy.
vRAM vs RAM: What’s The Difference? RAM is used for storing temporary file systems, but vRAM stores image data. Most of the time more RAM means better, but for vRAM that isn’t always the case. Why? Because it also depends on the graphics card: sometimes a card boasting more vRAM will perform worse than one with less vRAM but superior chips, bandwidth, and a wider memory bus (try reading about the difference between the RTX 3060 and RTX 3060 Ti).
DIMM vs SO-DIMM: Characteristics, Definition and Differences When we talk about RAM oriented to the consumer (we, the users), we can find it in two different formats: either the DIMM format, the usual “full size” we see in desktop PCs, or the SO-DIMM format, smaller and oriented to laptops and mini PCs.
x86 VS x64 Architecture x86 refers to 32-bit architecture, while x64 refers to 64-bit architecture. Well, what’s so special about getting twice as many bits as the predecessor machine? The more bits it has, the more memory (RAM) it can address and use effectively. In other words, x64 architecture can often run programs faster than x86 architecture.
How To Check RAM on Linux (3’ Article) Wondering how to check how much RAM you can use? In Windows you can simply open the properties of My PC. But how do you check it through the terminal? Simply run free -th, or just check it using top. And how do you check the CPU speed through the Linux terminal? There are lots of ways; simply run lscpu. This site is interesting because it shows you how to check the basic hardware environment you’re working with, or simply check the specifications of the server you’re given.
Simple Caching Caches are used everywhere in our modern devices. They are found in many hardware components and throughout software. The goal of caching is to store data from slow memory in fast memory so it can be retrieved quicker.
Caching Overview Why does our browser consume so much memory, yet once we restart it its memory consumption decreases? The reason behind this is caching. Some of our programs are designed to cache. Why do we need it? Caching is quite important for speeding up running time, since the data is stored in the cache (a top-layer memory), so we do not have to access lower-layer memory such as hard drives and solid-state drives.
Cache Memory in Computer Organization Cache memory is a small, temporary memory. The data or contents of main memory that are used frequently by the CPU are stored in the cache memory so that the processor can access that data in a shorter time. Whenever the CPU needs to access memory, it first checks the cache; if the data is not found there, the CPU moves on to main memory. This website discusses memory levels, types, performance, and cache mapping.
Memory Management in OS: Contiguous, Swapping, Fragmentation The article explains memory management techniques: swapping, memory allocation, paging, fragmentation, and segmentation, plus dynamic loading and linking and their differences from the static versions.
Address Binding and Its Types This site covers address binding with the help of an example, along with its types: compile-time, load-time, and execution-time address binding.
Thrashing As the article says, we first need to understand page faults and swapping; those two underlie the thrashing mechanism. Thrashing is when page faults and subsequent swapping happen so frequently, at such a high rate, that the operating system spends most of its time swapping pages.
Techniques to Handle Thrashing
Thrashing is a condition or situation in which the system spends most of its time servicing page faults, while the actual processing done is negligible. It occurs when there are too many pages in memory and each page refers to another one. On this website, we will learn the basic concept, the locality model, and the techniques needed to handle thrashing in depth.
What is Multicore Programming A multicore processor system is essentially a single processor with multiple execution cores in one chip. Multicore programming is, well, how to program stuff that uses the multicore processor system effectively.
Process and Threads in OS In this video, the difference between processes and threads is explained with real-life examples. Students often feel confused about this topic, but after watching the video you will be able to solve such questions easily. Fork questions are asked in competitive exams like GATE, NTA NET, NIELIT, and DSSSB TGT/PGT Computer Science, as well as in college and university exams.
Multithreading Overview Multithreading is a technique used by computers to make sure that system resources are utilized at their best efficiency, by executing multiple threads of instructions at a time. This allows the threads within a program to share resources, such as memory.
Concurrency vs. Parallelism — A brief view Concurrency means that an application is making progress on more than one task at the same time (concurrently) while parallelism means that an application splits its tasks up into smaller subtasks that can be processed in parallel, for instance on multiple CPUs at the exact same time.
Concurrency Is Not Parallelism As the title says, those two are completely different, while somewhat similar to an extent. This video will help you further understand, differentiate and learn about both concepts.
Single-threaded and Multi-threaded Processes Single-threaded processes execute instructions in a single sequence; in other words, one command is processed at a time. The opposite is multithreaded processes, which allow multiple parts of a program to execute at the same time. Threads are lightweight processes that live within a process.
Context Switching Context switching is a way to implement computational multitasking: the execution state of a thread is saved while another thread executes. With this technique, a single CPU can appear to execute multiple threads at once.
User Level vs Kernel Level Thread in Tabular Form The Key difference between User Level and Kernel Level Threads is that User Level Threads are managed by the User whereas Kernel Level Threads are managed by the Operating System. This website will help you to find out more about the differences between User Level and Kernel Level Threads. In addition, a comparison chart, concept, and advantages and disadvantages between the two will also be presented in here.
All You Need To Know About Processes in Linux [Comprehensive Guide] (3’ Article) A process refers to a program in execution; it’s a running instance of a program. A new process is normally created when an existing process makes an exact copy of itself in memory. The child process will have the same environment as its parent 🤼, but only the process ID number is different. Init process is the mother (parent) of all processes on the system, it’s the first program that is executed when the Linux system boots up; it manages all other processes on the system. It is started by the kernel itself, so in principle it does not have a parent process.
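You can watch the parent/child relationship from the shell itself; `$$` is the current shell’s PID, and procps `ps` can print a process’s parent and the name of PID 1.

```shell
# Show this shell's PID and its parent's PID (trailing = suppresses headers).
ps -o pid=,ppid= -p $$

# The parent chain eventually leads back to PID 1, the init process.
ps -o comm= -p 1
```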
Daemon A daemon is a service process that runs in the background and manages background processes.
init system The init system is the first process started after the kernel boots. The init system then starts the other processes the system needs.
systemd systemd is an init system used in many Linux distros. It was first adopted by Fedora and is now used in Debian, Ubuntu, Arch, and other distros.
OpenRC An alternative to systemd, considered to be the go-to if you find systemd bloated. There’s not much difference between the two.
fork() in C The fork system call is one of the important topics that you should know in the operating systems subject. The fork system call is used for creating a new process, called the child process, which runs concurrently with the process that makes the fork() call (the parent process). Visit this page to read more explanation about fork() in C.
Fork Bomb A fork bomb is a program that harms a system by making it run out of resources: it forks processes infinitely, filling memory and the process table. The fork bomb is a form of denial-of-service (DoS) attack against a Linux-based system. This page explains the fork() bomb script and gives more detailed information about it.
Zombie Process A zombie process is a process that has completed but still has an entry in the process table. Zombie processes usually occur among child processes whose parent has not yet read their exit status.
Process Synchronization in Operating System Basic ideas behind most synchronization: If two threads, processes, interrupt handlers, etc. are going to have conflicting accesses, force one of them to wait until it is safe to proceed. Synchronization problems are: (1) synchronization can be required for different resources (2) there are different kinds of synchronization problems (3) synchronization may be across machines (4) sometimes it’s not OK to block a thread or process.
Hacking Banks With Race Conditions Concurrency surely brings advantages to our programs, since we are able to run things asynchronously. But if we are not careful enough, it can make our programs vulnerable, especially in terms of security. Imagine we are doing a money transaction: our security check has not yet finished executing, yet our money transfer code has already done its job.
Bounded Buffer Problem Also called the “producer-consumer problem”, it is a demonstration of problems that can come from implementing multithreading in a process. It describes a producer, a consumer, and a buffer of finite size. The producer has to put a “product” in the buffer for the consumer to “consume” it. The problem arises when the buffer is either full or empty: if it is full, the producer will “waste” its product; likewise, the consumer will waste a cycle “waiting” for a product that isn’t there. The solution is to keep “counters” that track how many empty slots and how many full slots the buffer has. That way, when the buffer is empty/full, the consumer/producer can sleep for that cycle and free up the processor.
Starvation and Aging in Processes Priority Operating systems allocate computer resources to processes with higher priorities first. Imagine it as a queue: those with a higher priority level are served first, then the lower ones. But what if, while they are queuing, processes with high priority keep coming? Those with lower priority may never be served; this is starvation, and aging (gradually raising the priority of long-waiting processes) is the usual remedy.
Deadlock And Its Prevention And Avoidance
Deadlock is a condition in which a set of (two or more) processes cannot complete because each process is holding a resource that is needed by another process in the set, ending up in a circular wait. As an illustration, there are two goats on a bridge. Each of them wants to cross and they end up meeting each other at the center of the bridge. Neither of them is willing to step back and let the other pass. As a result, a deadlock occurs.
CPU Scheduling in Operating Systems A typical process involves both I/O time and CPU time. In a uniprogramming system like MS-DOS, time spent waiting for I/O is wasted, and the CPU is idle during this time. In multiprogramming systems, one process can use the CPU while another is waiting for I/O. This is possible only with process scheduling.
Deadline scheduler in Operating System
Deadline Scheduler is an I/O scheduler for the Linux kernel that guarantees a start service time for a request. The deadline scheduler imposes deadlines on all I/O operations in order to prevent requests from being starved. Two deadline queues, for reads and writes (basically sorted by deadline), are maintained, and for every new request the scheduler selects which queue will serve it. Read queues are given higher priority than write queues because processes usually block on read operations.
What is Linux swap? Swap space is basically the way out if your RAM runs out of memory. The Linux kernel will move some data from RAM to the swap space to free some of it up, so the system doesn’t crash due to lack of memory. It’s just like additional memory for your operating system 📝. The golden rule is that the swap area is usually twice the size of your RAM, but it also depends on your computer’s ability to hibernate. It depends on the speed of your drive too: some say it’s not recommended to have large swap space if you use an SSD, as it will use up its write cycles and shorten its life span. So, yeah! You should read some of those articles about swap space.
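To see how much swap your system has and how much is in use (free and swapon come from procps/util-linux on typical Linux systems; the actual numbers depend entirely on your machine):

```shell
free -h          # the "Swap:" row shows total / used / free swap
swapon --show    # lists active swap devices and files, if any
```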
A page file is a file (duh) where the operating system saves data that is not recently used/of a lower priority from a higher level storage to a lower one (typically from the RAM to disk). This allows the system to have a theoretically larger memory than what is physically available. In modern consumer systems with large memory, it is often not required.
Socket Programming A socket is a communications connection point (endpoint) that you can name and address in a network. Socket programming shows how to use socket APIs to establish communication links between remote and local processes.
Socket Programming (Video) We have many machines acting as nodes; a node can be a server or a client, and in a client-to-client network all these nodes talk to each other, basically a peer-to-peer network. In this video we will understand the basis of the internet, or networking, which is the socket. We have to understand two concepts: first, port numbers, and second, the type of connection being built. We will talk about TCP (Transmission Control Protocol) and UDP (User Datagram Protocol): TCP is a connection-oriented protocol, while UDP is connectionless.
Five Pitfalls of Linux Sockets Programming Socket programming can be ugly at scale, especially if you’re new at socket programming. It’s better to keep in mind common pitfalls that can potentially happen. This page covers 5 common pitfalls in C socket API as well as tools/methods to debug socket codes.
A Basic Guide to Linux This page explains some fundamental things that we should know when we are going to switch to Linux as our operating system. Starts with what is Linux, how to install it, and basic things to know about Linux.
Htop: Replace Default Top Monitoring Tool in Linux?
Top is a traditional command-line tool for monitoring real-time processes in Unix/Linux systems. It comes preinstalled on most if not all Linux distributions and shows a useful summary of system information, including uptime, the total number of processes (and the number of running, sleeping, stopped, and zombie processes), CPU and RAM usage, and a list of processes or threads currently being managed by the kernel.
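Top can also run non-interactively (batch mode), which is handy for logging or piping into other tools; this assumes the procps top shipped with most distributions.

```shell
# -b batch mode, -n1 one iteration: dump a snapshot and exit.
# The first header line shows uptime and the load averages.
top -bn1 | head -5
```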