Hello love! This one's for you. I'm gonna start writing a collection of lessons on how to program, just for you (cause I love you, and you wanted to learn anyways <3).
I guess at some point in time, I have to start at the very beginning. The very, very beginning. I have an article on the history of computing floating around somewhere, so I'll leave that part out and skip to "what is programming for?"
In short, developing software, AKA computer programming, is a way of commanding a computing device (desktop/laptop, mobile phone, tablet, heavy machinery, etc.) to perform a series of repeatable tasks. Simple enough, right? That being said, computers are relatively stupid; they cannot act for themselves, and can only do exactly what you tell them to do. If a computer does something wrong, it's because it was programmed to do it the wrong way.
Computers are instructed, or programmed, using programming languages. A programming language is exactly that, a language. This language can be translated into something called machine language, which can then be understood by the computer and used to command it. As of right now, there are literally thousands of programming languages out there. Some of these languages are mathematical and look very cryptic (examples: C and C++), while others are very similar to English, and can be read as such (example: Visual Basic). If you want to see what some of these languages look like, check out the 99 Bottles of Beer website. They have the song "99 Bottles of Beer" written in 1500 different programming languages!
With there being literally thousands of programming languages that are used for different purposes, it would be helpful to start breaking them all down into categories. The big three categories are:
The programming language category can then be split into three more subcategories:
Let's take a look at an example of some of these languages. Let's say we have a variable x, and we want to make it equal to -1. In a high-level programming language, this could look like:
x = -1;
If you were to look at a low-level language like assembly, you would probably see something like this (at the very guts of it, there would be a lot more work to do to get to this point):
MOV ECX, -1
Cryptic, right? Guess what the computer itself sees? Something along the lines of:

B9 FF FF FF FF

in hexadecimal. Or, much more accurately (in binary):

10111001 11111111 11111111 11111111 11111111
What the hell is up with all of these weird numbers you ask?
This might be a bit of a tangent, but it is good to know. Humans (yourself, and possibly me too) use a Base-10 "Decimal" numbering system for everything, meaning that our digits go from 0-9. Computers, on the other hand, use a Base-2 "Binary" numbering system, meaning the only digits that exist are 0 and 1.
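If you want to see this for yourself, here's a tiny sketch in Python (a language we haven't talked about yet, so don't sweat the details):

```python
# The same quantity, written in two numbering systems.
n = 42                # base-10, the way humans write it
print(bin(n))         # prints 0b101010 -- base-2, the way a computer stores it
                      # (Python puts "0b" in front of binary numbers)
assert n == 0b101010  # you can also write binary directly, and it's the same 42
```

Same number, two costumes.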
Why? Because a computer (a digital one, at least) can only ever be turned on or turned off. There is no in between. That being said, you've seen this before (it's literally tattooed on my arm):
This is the international symbol for the power button, and it is made by taking the number 1 and laying it over the top of the number 0, to represent "on" and "off". Yay for symbols!
Back on track. Computers only understand binary, and humans are lazy. As an example, to represent the letter 'A' (capital A), the decimal number 65 is used. To a computer, 65 is really 01000001 in binary. But who wants to decode that? When talking about bits and bytes (FYI: a "bit" is one binary digit, and can be 0 or 1, and a byte is a group of 8 bits, which can represent any decimal number between 0-255), programmers use the hexadecimal numbering system, which allows us to write the same number in a shorter way. For this example, the decimal 65 (binary 01000001) can be written in hexadecimal as "41". Don't worry about figuring out how to translate between the different number systems yet; that's an advanced topic for another time. There is a handy website here that will show you these conversions, if you're interested.
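Python can do all of that 'A' bookkeeping for us, if you're curious. Here's the whole chain from the paragraph above in a few lines:

```python
# The capital letter 'A' is stored as the decimal number 65.
print(ord('A'))           # prints 65 -- the decimal value of 'A'
print(format(65, '08b'))  # prints 01000001 -- the same value as a full 8-bit byte
print(hex(65))            # prints 0x41 -- the same value in hexadecimal
                          # ("0x" is how programmers mark a hexadecimal number)
```

One letter, three ways of writing the exact same pattern of bits.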
So why do computers only understand binary? That's simple. A computer is just a machine that runs off electricity. Electricity (in its simplest form) can be turned on or turned off, just like a light switch. A computer itself is just a collection of millions of tiny electrical switches wired together. Since these switches can be on or off, we can represent them with two numbers: 0 for off, and 1 for on. When you write code for a computer, all you are doing is telling the computer which switches to turn on and off, and when to do it.
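You can even play with those "switches" directly. This little Python sketch treats one byte as a row of eight light switches and flips a couple of them (again, don't worry about the symbols yet; just watch the 0s and 1s change):

```python
byte = 0b00000000           # eight switches, all off
byte |= 0b00000001          # flip the rightmost switch on
byte |= 0b01000000          # flip another switch on
print(format(byte, '08b'))  # prints 01000001 -- the same pattern as the letter 'A'!
byte &= ~0b00000001         # flip the rightmost switch back off
print(format(byte, '08b'))  # prints 01000000
```

That, at the very bottom of it all, is what every program is doing.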