Here are some questions I've heard or wouldn't be surprised to hear asked in interviews for new college graduate positions for electrical engineers in America.
You're given two arrays, SDA and SCL. The arrays are the same length and contain only 0s and 1s. SCL is a clock signal, and SDA is a data signal. Retrieve the value of SDA at each index where SCL transitions from 0 to 1 and store it in a new array called DATA_BITS. Use any language you like.
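A sketch of one possible answer in Python (the helper name and sample arrays are mine):

```python
def sample_on_rising_edge(sda, scl):
    """Return the value of SDA at each index where SCL transitions 0 -> 1."""
    data_bits = []
    for i in range(1, len(scl)):
        if scl[i - 1] == 0 and scl[i] == 1:  # rising edge of the clock
            data_bits.append(sda[i])
    return data_bits

SCL = [0, 1, 0, 1, 0, 1, 0, 1]
SDA = [1, 1, 0, 0, 1, 1, 0, 0]
DATA_BITS = sample_on_rising_edge(SDA, SCL)  # [1, 0, 1, 0]
```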
Convert DATA, an array of bits, into DATA_BYTES, an array of bytes. Again, use any language you like.
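One way to answer in Python, assuming MSB-first bit order and a bit count that is a multiple of 8:

```python
def bits_to_bytes(bits):
    """Pack a list of bits (MSB first) into a list of byte values."""
    assert len(bits) % 8 == 0, "bit count must be a multiple of 8"
    data_bytes = []
    for i in range(0, len(bits), 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit  # shift in one bit at a time
        data_bytes.append(byte)
    return data_bytes

DATA = [0, 1, 0, 0, 0, 0, 0, 1,  0, 1, 0, 0, 0, 0, 1, 0]
DATA_BYTES = bits_to_bytes(DATA)  # [0x41, 0x42], i.e. "AB" in ASCII
```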
Given R, V_b, two voltage measurements, and the length of time between those two measurements, calculate C.
I have a sheet of paper. I fold the sheet of paper in half. The ratio of the sides of the paper remains the same after folding it. What is the ratio of the sides of the sheet of paper?
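A worked solution: let a be the long side and b the short side. Folding halves the long side, so the folded sheet has sides b and a/2, and the ratio must be unchanged:

```latex
\frac{a}{b} = \frac{b}{a/2}
\;\Longrightarrow\;
a^2 = 2b^2
\;\Longrightarrow\;
\frac{a}{b} = \sqrt{2}
```

This is the ratio used by the A-series paper sizes (A4, A3, and so on).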
You have a 10kHz clock. How can you generate a 5kHz clock from this signal? How can you generate a 2kHz clock from this signal?
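In hardware, a toggle flip-flop divides by two, and a modulo-5 counter handles the divide-by-5 (note that a 2 kHz output derived only from 10 kHz rising edges can't have a 50% duty cycle). A Python simulation sketch, not synthesizable code, with names of my own choosing:

```python
def divide_by_n(num_edges, n):
    """Simulate dividing a clock by n: one output cycle per n input rising
    edges. Output is high for the first ceil(n/2) counts of each cycle, so
    the duty cycle is 50% only when n is even."""
    return [1 if (edge % n) < (n + 1) // 2 else 0 for edge in range(num_edges)]

half = divide_by_n(8, 2)    # 10 kHz / 2 = 5 kHz
fifth = divide_by_n(10, 5)  # 10 kHz / 5 = 2 kHz
print(half)   # [1, 0, 1, 0, 1, 0, 1, 0]
print(fifth)  # [1, 1, 1, 0, 0, 1, 1, 1, 0, 0]
```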
Given the integer variables a and b, how do you swap their values...
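This question usually continues "without using a temporary variable"; assuming that's the intent, two common answers sketched in Python:

```python
a, b = 5, 9

# XOR swap: works for any pair of integers, no temporary needed.
a ^= b
b ^= a   # b now holds the original a
a ^= b   # a now holds the original b
assert (a, b) == (9, 5)

# In Python specifically, tuple unpacking is the idiomatic answer
# (this swaps them back).
a, b = b, a
assert (a, b) == (5, 9)
```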
How can you tell Norton and Thevenin circuits apart when they are in a black box?
Leave a comment below with your questions and answers for these topics.
Photo by Seattle Municipal Archives
This post explains the math behind a very specific situation: charging a capacitor from 0 V to a fixed voltage through a resistor of known value. The time at which the capacitor begins charging is recent but unknown. By taking two voltage measurements a known amount of time apart while the capacitor is still charging, you can calculate the value of the capacitor.
The bus capacitance for I2C is generally specified to be 300 pF maximum. This capacitance does not come from a discrete capacitor but rather from the capacitance of the traces, wires, and cables used to connect devices in the bus. How can we measure this capacitance in-place? Luckily, this bus has a pull-up resistor of a known value. We can use the knowledge of the resistor's value, the I2C voltage we're using, and some math to calculate the bus capacitance.
The equation for a charging capacitor is
V_O(t) = V_S*(1-e^(-t/(R*C)))
where V_O is the capacitor's voltage, V_S is the source voltage charging the capacitor, R is the resistor value, C is the capacitor value, and the capacitor is assumed to be initially discharged.
With the substitutions tau=R*C and P=V_O/V_S, we can simplify this equation to

P(t) = 1-e^(-t/tau)
Note that P(t) ranges from 0 (the capacitor is fully discharged) to 1 (the capacitor is fully charged).
We can rearrange this to solve for t:

t = -tau*ln(1-P)
If we take two measurements along the curve, we now have a system of equations:

t_1 = -tau*ln(1-P_1)
t_2 = -tau*ln(1-P_2)

where t_1 and t_2 are relative to when we started charging the capacitor and P_1 and P_2 are the capacitor's charged percentages at those times. (0 < P_1 < P_2 < 1)
Let delta_t=t_2-t_1 and combine the equations above to get

delta_t = tau*ln((1-P_1)/(1-P_2))
Solve for C:

C = delta_t/(R*ln((1-P_1)/(1-P_2)))
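Putting the whole derivation together in Python (the component values below are illustrative, not measurements):

```python
import math

def capacitance(r, v_s, v1, v2, delta_t):
    """C = delta_t / (R * ln((1 - P1) / (1 - P2))), from two voltage
    measurements taken delta_t seconds apart on a charging RC curve."""
    p1, p2 = v1 / v_s, v2 / v_s  # charged fractions, 0 < p1 < p2 < 1
    return delta_t / (r * math.log((1 - p1) / (1 - p2)))

# Sanity check: generate two points on an ideal charging curve with a
# 4.7k pull-up and 300 pF bus capacitance, then recover C.
R, C, VS = 4.7e3, 300e-12, 3.3
tau = R * C
t1, t2 = 0.5 * tau, 1.5 * tau
v1 = VS * (1 - math.exp(-t1 / tau))
v2 = VS * (1 - math.exp(-t2 / tau))
print(capacitance(R, VS, v1, v2, t2 - t1))  # ~3e-10, i.e. 300 pF
```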
Photo by Windell Oskay
Error correction coding or "coding theory" is the means by which data transmitted over a noisy medium is recovered at the receiver. ECC works by adding redundant bits to the data being transmitted in an intelligent manner. There are many different kinds of error correction codes that offer different levels of error correction and require more or less computational power.
Error correction codes operate at different levels: error detection, error correction, and erasure correction. The most basic error correction codes offer only error detection. More advanced coding schemes can correct higher numbers of bit errors and bit erasures.
Bit errors are when the receiver hardware incorrectly translates a received value. Bit erasures are when the receiver hardware marks a bit as erased because it is not confident of the bit's intended value. For example, suppose a received value of -1 represents a 0 bit and a received value of +1 represents a 1 bit. In this case, received values in the range [-0.2, 0.2] might be marked as bit erasures to be corrected by the error correction code. If a value greater than 0.2 is received when the transmitter sent a 0 bit, that is a bit error: the receiver confidently decodes the wrong value rather than marking it as erased.
The most basic error correction code is the parity code. It cannot correct any errors, but the receiver can use it to determine if the data received is correct or not and request the data to be resent if necessary. The parity code adds a 1 to the end of the binary data to be sent if the binary data contains an odd number of 1s and a 0 otherwise. By counting the number of 1s received, the receiver can detect any time an odd number of bit errors has occurred. Even numbers of errors are undetectable by the parity code. More advanced error codes can detect and correct higher numbers of errors.
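A minimal sketch of the parity code in Python:

```python
def add_parity(bits):
    """Append an even-parity bit: the total number of 1s becomes even."""
    return bits + [sum(bits) % 2]

def parity_ok(word):
    """Receiver-side check: an even count of 1s means no odd-sized error."""
    return sum(word) % 2 == 0

word = add_parity([1, 0, 1, 1])  # -> [1, 0, 1, 1, 1]
assert parity_ok(word)
word[2] ^= 1                     # a single bit error is detected...
assert not parity_ok(word)
word[0] ^= 1                     # ...but a second error goes unnoticed
assert parity_ok(word)
```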
Some error codes can correct errors without asking the transmitter to resend the data. They do this by comparing the data received to valid code words. A code word is data plus redundant bits that are added by the coding scheme at the transmitter.
Hamming codes are examples of codes that can correct one bit error. The [7,4] Hamming code uses codewords of length 7 to transmit 4 bits of data. The three redundant bits are each calculated as the parity bit for three of the four data bits. Each parity bit covers a different set of three data bits.
Since each data bit is covered by at least two parity bits, changing one data bit changes at least three bits in the resulting codeword. The minimum number of bits that must change to turn one valid codeword into another valid codeword is therefore three. This is known as the Hamming distance of the code. In general, a code can correct floor((d-1)/2) bit errors, where d is the code's Hamming distance; with d = 3, the [7,4] Hamming code corrects one.
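A sketch of the [7,4] Hamming code in Python, using the standard bit layout (p1, p2, d1, p3, d2, d3, d4) so that the three parity checks read out the 1-based position of a single flipped bit:

```python
def hamming74_encode(d):
    """Encode 4 data bits as (p1, p2, d1, p3, d2, d3, d4); each parity bit
    covers a different set of three data bits."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(w):
    """Recompute the parity checks; the syndrome is the 1-based position of
    a single flipped bit (0 means no error). Returns the corrected word."""
    s1 = w[0] ^ w[2] ^ w[4] ^ w[6]   # checks positions 1, 3, 5, 7
    s2 = w[1] ^ w[2] ^ w[5] ^ w[6]   # checks positions 2, 3, 6, 7
    s3 = w[3] ^ w[4] ^ w[5] ^ w[6]   # checks positions 4, 5, 6, 7
    syndrome = s1 + 2 * s2 + 4 * s3
    w = list(w)
    if syndrome:
        w[syndrome - 1] ^= 1         # flip the offending bit back
    return w

code = hamming74_encode([1, 0, 1, 1])
received = list(code)
received[4] ^= 1                     # one bit error in transit
assert hamming74_correct(received) == code
```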
Other examples of error correction codes include Reed-Solomon (used on CDs), Hadamard, Viterbi, and Turbo Codes.
Photo by Satoshi KAYA
In December 2012, I was awarded an MS in ECE. My focus was in telecommunications. After learning this, an acquaintance asked me to explain what telecom is. I think I might have gotten the highlights of the field across to her, but it wasn't as easy as I had hoped. This series of posts is dedicated to explaining the basics of telecom as simply as possible.
A modulation scheme is a mapping of the data you want to send to the electrical signal you transmit representing that data. Binary phase shift keying is a very basic modulation scheme in which a sine wave with a phase of 0 degrees represents a binary 1 and a sine wave with a phase of 180 degrees represents a binary 0.
If you're sending data as quickly as you can across some medium (telephone wire, coaxial cable, etc.), there's a limit on how many symbols you can transmit each second, and that limit is determined by the bandwidth of the channel you're using. If you're clever, you can send data more quickly without using any extra bandwidth by choosing the right modulation scheme.
Imagine you've invented a secret code with one of your friends. You and your friend live in two houses across the city from each other with a clear line of sight. You want to send a message to your friend. Your current code for sending messages is a variant on Morse code. You hold up a white sheet of paper for a 'dot', a black sheet of paper for a 'dash', and a gray sheet of paper at the end of each character.
Each second, you hold up the next sheet of paper for your message. If your message is "hello", or .... . .-.. .-.. ---, then it would take 20 seconds to send your message. Four seconds for "h", one for "e", four for "l", four for "l", three for "o", and four for the pauses between each letter (4+1+4+4+3+4=20).
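The arithmetic above as a quick Python check (the Morse table covers only the letters in "hello"):

```python
MORSE = {"h": "....", "e": ".", "l": ".-..", "o": "---"}

def seconds_to_send(message):
    """One card per dot or dash, plus one gray card between letters."""
    cards = sum(len(MORSE[ch]) for ch in message)  # dots and dashes
    return cards + len(message) - 1                # pauses between letters

print(seconds_to_send("hello"))  # 20
```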
This solution is slow! It would take over a minute just to ask "how are you doing?" One way to speed this up would be to show two cards per second, but that's hard to do by hand. Another way to speed up our messages is by sending two "dots" or "dashes" with each card we hold up. How? Add more colors to our collection!
Now with our advanced modulation scheme we can send messages twice as fast. It only takes 10 seconds to say "hello"!
There are a couple of metrics we can use to compare these two modulation schemes: intersymbol distance and bits per symbol. These metrics allow telecom engineers to compare modulation schemes and pick the best one for a particular situation.
Intersymbol distance is a measure of how different two symbols look. In the case of the advanced modulation scheme, the difference between light-red and red is very slight. In poor lighting it would be almost impossible to differentiate these two symbols. Due to the small intersymbol distance of the advanced modulation scheme, we could expect the receiver to make many errors when receiving a message (in telecom papers, this is called the bit-error-rate). If we changed our modulation scheme to use colors that are more obviously different from each other, the intersymbol distance would be increased and the bit-error-rate would decrease.
Bits per symbol is a more straightforward metric. It's a measure of how much data can be transmitted by a single symbol. The advanced Morse code modulation scheme transmits twice the number of bits per symbol. With a constant signaling rate, or baud, the advanced modulation scheme sends messages twice as fast.
The beautiful part of modulation schemes is you can switch between them at will! If the sun has set and it's difficult to tell blue from green, switch to the basic modulation scheme. Your messages will take longer to transmit, but you can be more certain that they arrive at their destination without error!
Again, if you look at the theoretical speeds for IEEE 802.11ac, you can see how modulation schemes affect transmission rates. The modulation scheme is denoted as the "modulation type" and the number of symbols used by each modulation scheme is denoted by the number. (BPSK uses 2 symbols, QPSK uses 4, 16-QAM uses 16, 64-QAM uses 64, and 256-QAM uses 256.) Moving down the table increases the number of symbols in the modulation scheme (and increases the coding rate, but that's a story for the next post), and throughput increases as a result (without using any extra bandwidth!).
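The bits-per-symbol figure behind that scaling is just log2 of the number of symbols:

```python
import math

# Bits carried by one symbol of each 802.11ac modulation type.
bits_per_symbol = {name: int(math.log2(m))
                   for name, m in [("BPSK", 2), ("QPSK", 4), ("16-QAM", 16),
                                   ("64-QAM", 64), ("256-QAM", 256)]}
print(bits_per_symbol)  # BPSK carries 1 bit per symbol; 256-QAM carries 8
```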
Photo of the Arecibo Observatory by H. Schweiker/WIYN and NOAO/AURA/NSF
Filters are at the heart of telecommunications. Without filters, there would be no internet, cell phones, AM/FM radio, or TV. Filters are electrical circuits that let you pick out a specific signal you're interested in and ignore everything else.
To illustrate how filters work, let's imagine a simple scenario. Alice the human and her dog Axel both know Morse code and want to send messages to Bob the human and his dog Buddy.
Alice and Axel are impatient; they want to send their messages at the same time. Alice gets a whistle and sends her message in Morse code. Axel does the same thing, but sends his message with a dog whistle. The dog whistle makes a sound at a frequency that is too high for humans to hear; however, both dogs and humans can hear the human whistle.
Since Bob cannot hear Axel's dog whistle, he is able to focus on Alice's message and decode it: "Alice says hi!" Buddy is out of luck. He can hear both whistles at the same time, so it's hard for him to make out what Axel is trying to tell him.
The reason Alice's message gets through but Axel's doesn't is that human ears are low-pass filters. They filter out any sound that is too high in frequency. This lets Bob ignore Axel's message and more easily decode Alice's message. Buddy's ears are also low-pass filters, but with a higher cutoff frequency. Buddy's ears are not tuned to accept Axel's message while rejecting Alice's; he hears both messages, and they drown each other out.
When you change the station on your radio, you're actually changing the frequencies that your radio is filtering out. Tune in to station 101.5 MHz, and your radio filters out all the signals that are higher or lower in frequency. That's why you only hear one radio station at a time even though all the radio stations in your area are broadcasting simultaneously.
Without filters, only one message could be sent at a time. Filters allow us to send lots of different messages at the same time. This means we can use filters to send messages to lots of different people (like with cell phones), or we can send lots of data very quickly to one person by sending the first half and the last half of a message at the same time on two different frequencies.
If you look at the theoretical throughput for IEEE 802.11ac (otherwise known as WiFi), you'll notice the speed goes up with the channel width. From 20 MHz up to 160 MHz, the data rate is proportional to the channel width. This is because the wider channels let WiFi devices send more messages at the same time. It's just like adding more lanes to a highway.
Filters enable you to focus on the messages that are directed to you. Bandwidth determines how many simultaneous streams of data can be sent.
Photo of the Arecibo Observatory by Ed Ivanushkin