
Oct 00 Getting Started

Volume Number: 16 (2000)
Issue Number: 10
Column Tag: Getting Started - Networking

Networks 201 pt. 4

by John C. Welch

Layer 2: The Data Link Layer

Refresh

Before we start with this month's installment of our look at networks, an apology and error fix. In the last article, we stated that to find the wavelength of a signal, you would invert the frequency, via the equation 1/freq. Unfortunately, that was incorrect. That equation actually gives you the period of the signal. To get the wavelength, you would more correctly use the equation c/freq, where c represents the speed of light in meters per second. Many thanks to Bruce Toback, who was the first to notify us of the error, and to all the other readers who caught it as well. Now on to Layer 2.

Going back to the overview of the OSI model in the first article in the series, Layer 2 is the Data Link Layer, and communicates with Layers 1 and 3. The most basic description of Layer 2's function is that it receives data and routing information from Layer 3 and assembles them into frames, which are passed on to Layer 1. It also receives serial bitstreams from Layer 1 and assembles these into frames, which are then passed on to Layer 3. Like most networking functions, the actual duties of this Layer are far more complex, and it is those duties, and the complexity therein, that we will look at in this article.

Layer 2

Base Function

The basic function of the Data Link Layer is to provide services to the Network Layer. This centers around getting data from the transmitting machine to the receiving machine intact. There are three base methods for doing this:

  1. Unacknowledged Connectionless Service
  2. Acknowledged Connectionless Service
  3. Acknowledged Connection - Oriented Service

The first method, unacknowledged connectionless service, is where the source sends independent frames to the destination. This has some analogies to messages in a bottle. You write the message, pop it in the bottle, and set the bottle in the water. It either gets there, or it does not. You have no way to verify that it was successfully received, or, in the case of multiple messages, that they were received in the proper order. This method may sound very unreliable, but in fact it is used quite often. If the protocol you are using, such as TCP, provides connection management, message reassembly, and acknowledgement at a higher layer, then there is no sense in having an additional level of acknowledgement and connection management in the Data Link Layer. As a result, most LANs use this type of service at the Data Link Layer. The other reason for using this service is in real-time situations, where the delays involved in setting up connections and retransmitting data would impede the function of the real-time applications.

The second type of service is acknowledged connectionless service. This is analogous to registered mail with a delivery receipt. You have no idea how the mail got to its destination, but you do know whether or not it was received. There are two ways to deal with acknowledgement errors. The first is to retransmit the entire set of data. While safer in theory, this is impractical for a number of reasons. The first is that if we are talking about a large amount of data, retransmitting the entire data set can take an unacceptably long time, especially over slow links. The other is that the chances of losing a packet here and there are quite high, especially over unreliable links, so in theory you could run into a situation where you would never be able to stop retransmitting your data set. The second, and more common, method is to retransmit only those frames that were lost. This method requires more sophisticated checking algorithms, but is more efficient than retransmitting the entire message for every error. While lost frames are not a great issue with reliable media, such as fiber, when we talk about wireless networking the chances of lost frames are much greater, and in fact this is the type of service used in the 802.11b wireless networking standard. It is worth noting that providing acknowledgements at the Data Link Layer is an optimization, not a requirement. That function can be, and often is, handled at higher levels.

The final type of service is acknowledged connection-oriented service. In this class of service, a connection is created before any data is transmitted, each frame sent over the connection is numbered, and each frame sent is guaranteed to be received. In addition, each frame is guaranteed to be received only once, and in the correct order. This type of service creates what is essentially a networked bit stream. There are three phases to data transfers in this service. The first is the establishment of a connection. As part of this, frame counters are created on both sides to keep track of which frames have and have not been sent. The second phase is the actual transmission of data, and the tracking of the frames. The final phase is the connection teardown, and the disposal of the frame counters and other resources used.

It is worth mentioning here that, as far as the Data Link Layer is concerned, the Physical Layer doesn't really exist. Although the data physically travels through the Physical Layer, the logical path at the Layer 2 level shows an end-to-end connection between Layer 2 on the transmitter and Layer 2 on the receiver. There are a number of reasons for this, including error correction, data delivery mechanisms, etc. The important thing to remember is that for Layer 2, Layer 1 doesn't really exist.

Frames

As we mentioned earlier, the Data Link Layer deals with frames. This is the way it both provides service to the Network Layer, and uses the services provided by the Physical Layer. Remember, all the Physical Layer cares about is getting bits from the Data Link Layer, and shoving them out onto the line to their destination. (Actually, all the Physical Layer cares about is shoving bits onto and receiving bits from the wire. It has nothing to do with addressing, and in the case of things like Ethernet, will actually look at all bits on the wire, relying on the higher layers to check things like destination addressing.)

In any case, the structure used by the Data Link Layer is the frame. All data from the Network Layer is encapsulated into frames and sent on to the Physical Layer. Conversely, all bits from the Physical Layer are packed into frames and sent on to the Network Layer. Although the specific sizes and contents of frames are determined by the hardware protocol used, such as Ethernet or Token Ring, all frames have certain structural commonalities.

All frames have a Start Of Frame delimiter of some kind: some structure that says "This is where the frame starts." They also all have some kind of End Of Frame delimiter, or Frame Check Sequence, that says "This is where the frame ends." Since we are talking about the start and end of frame delimiters, this is a good time to touch on exactly how frames are pulled out of the bits on a wire. There are about five ways to do this. The first, timing, is not used, and never really was, as there is almost no way for any network to guarantee the timing between frames, or how long a frame takes to get from point A to point B. This leaves us with four other methods of marking frames:

  1. Character Count
  2. Starting and ending characters, with character stuffing
  3. Starting and ending flags, with bit stuffing
  4. Physical Layer coding violations

Method one, character count, uses a header to specify the number of characters in a frame. When the Data Link Layer sees this header, it knows that the frame ends exactly X characters after the character count header. The problem with this is that there is no real way to deal with the character count getting scrambled. If the transmitter sends a count of 25 characters, and the data gets scrambled so that the receiver sees a count of 20 characters, then not only is that frame garbled, but all following frames as well. Once this synchronization is lost, even retransmission doesn't work, because there is no way to tell how many characters to ignore so as to skip the bad frame. Not surprisingly, the character count method is rarely used these days.
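As a sketch of how this framing works, assume one-byte counts that include the count byte itself (the function name and framing details here are illustrative, not from any particular protocol):

```python
def split_frames(stream: bytes) -> list:
    """Split a byte stream into frames, where each frame begins with a
    one-byte character count that includes the count byte itself."""
    frames, i = [], 0
    while i < len(stream):
        n = stream[i]                       # the count header
        frames.append(stream[i + 1:i + n])  # the n - 1 bytes that follow it
        i += n                              # jump to the next count header
    return frames

# Two frames: a 3-byte frame carrying 'AB' and a 4-byte frame carrying 'CDE'
stream = bytes([3, 65, 66, 4, 67, 68, 69])
assert split_frames(stream) == [b'AB', b'CDE']
```

If that first count byte were corrupted from 3 to, say, 2, every boundary after it would be computed from the wrong offset, which is exactly the desynchronization problem described above.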

The second method uses specific characters to mark the beginning and the end of the frame. The characters used are ASCII character pairs, with DLE STX used for the frame start, and DLE ETX used for the frame finish. (DLE stands for Data Link Escape, STX stands for Start of TeXt, and ETX stands for End of TeXt.) By using those specific character pairs solely as frame delimiters, the synchronization issues of the character count method are avoided. This works well for text data, but if we are dealing with binary or numerical data, then it is possible for those characters to occur in random places within the frame. The way to avoid this problem is to have the Data Link Layer insert, or 'stuff', an extra DLE character in front of each accidental DLE in the frame data. This way, the Data Link Layer on the receiving end knows that a doubled DLE is not part of a frame delimiter, and removes one of the DLE characters from the frame data before passing the frame up to the Network Layer. Although this character stuffing works reasonably well, the entire method is too closely tied to eight-bit ASCII data to be universally useful.
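A minimal sketch of DLE-based character stuffing (the helper names are my own, and the logic is simplified):

```python
DLE, STX, ETX = 0x10, 0x02, 0x03  # Data Link Escape, Start of TeXt, End of TeXt

def stuff(payload: bytes) -> bytes:
    """Frame a payload as DLE STX ... DLE ETX, doubling any DLE in the data."""
    out = bytearray([DLE, STX])
    for b in payload:
        if b == DLE:
            out.append(DLE)  # stuff an extra DLE so the receiver sees a pair
        out.append(b)
    out += bytes([DLE, ETX])
    return bytes(out)

def unstuff(frame: bytes) -> bytes:
    """Strip the delimiters and collapse each stuffed DLE pair back to one."""
    body, out, i = frame[2:-2], bytearray(), 0
    while i < len(body):
        if body[i] == DLE:
            i += 1           # skip the stuffed DLE; keep the real one
        out.append(body[i])
        i += 1
    return bytes(out)

data = bytes([0x41, DLE, 0x42])      # payload that happens to contain a DLE
assert unstuff(stuff(data)) == data  # round-trips cleanly
```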

The third method uses bit pattern flags instead of character pairs as frame delimiters. This flag pattern is generally 01111110. To avoid synchronization errors caused by that pattern occurring naturally in the data, every time there is a series of five consecutive 1 bits, a 0 bit is inserted immediately after the fifth 1. This way, the flag pattern is never duplicated in the frame data, avoiding synchronization errors. As well, this method is not tied to any particular character encoding, so it can be used more universally.
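The stuffing rule is simple enough to sketch; this illustration works on strings of '0' and '1' characters rather than real bit streams:

```python
FLAG = '01111110'

def bit_stuff(bits: str) -> str:
    """After every run of five consecutive 1s, insert a 0."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            out.append('0')  # the stuffed bit
            run = 0
    return ''.join(out)

def bit_destuff(bits: str) -> str:
    """Remove the 0 that follows every run of five consecutive 1s."""
    out, run, i = [], 0, 0
    while i < len(bits):
        out.append(bits[i])
        run = run + 1 if bits[i] == '1' else 0
        if run == 5:
            i += 1           # skip the stuffed 0
            run = 0
        i += 1
    return ''.join(out)

data = '0111111001111101'
assert FLAG not in bit_stuff(data)         # the flag never appears in the body
assert bit_destuff(bit_stuff(data)) == data
```

Because the stuffed stream can never contain six 1s in a row, the receiver can treat any occurrence of 01111110 on the wire as a genuine frame boundary.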

The final method is used in LAN types where the data is encoded using bit pairs, i.e. a 01 pair is used to represent a binary 0, and a 10 pair is used to represent a binary 1. The transition from high to low, or vice versa, is what carries the bit value. This makes it easy to set frame boundaries: since all data involves a transition of some type, the pairs 00 and/or 11 can be used as frame boundaries, as they will never occur anywhere else. Obviously, this method can only be used on networks with the proper type of bit encoding. In general, a combination of methods is used to delimit frames, so as to lessen the chance of error.
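A sketch of the idea, assuming a Manchester-style scheme where a binary 0 is sent as the cell pair 01 and a binary 1 as the pair 10:

```python
def encode(bits: str) -> str:
    """Manchester-style pair encoding: every data bit contains a transition."""
    return ''.join('01' if b == '0' else '10' for b in bits)

DELIM = '11'  # a pair with no transition: a deliberate coding violation

frame = DELIM + encode('1011') + DELIM

# Read the line two cells at a time: data pairs are always 01 or 10,
# so an aligned 00 or 11 pair can only be a frame boundary.
pairs = [frame[i:i + 2] for i in range(0, len(frame), 2)]
assert pairs[0] == DELIM and pairs[-1] == DELIM
assert all(p in ('01', '10') for p in pairs[1:-1])
```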

In addition to the delimiters, all frames have source and destination address pairs. These are the hardware identifiers that name both the source and destination machines of the frame. The address pairs are placed at the beginning of the frame, so that they can be processed faster by a potential destination machine.

The final common part of frame structure is the data field. This is the area that carries the actual data for the frame. The length of this is dependent on the hardware protocol used, such as Ethernet, Token Ring, FDDI, ATM, etc.

Error Control

Since we are sending frames back and forth, we need to make sure that what we send is what we receive, which leads us to another job of the Data Link Layer: error control. There are a number of areas in which the Data Link Layer provides this service. The first is frame delivery. When a frame is transmitted, it is good to know that the frame arrived at all, and if so, that it arrived intact. There are two ways the Data Link Layer handles this. When a frame is sent and received successfully, the receiver sends back a special control frame whose purpose is to acknowledge the successful reception of the frame. This is commonly called an ACK frame, or just ACK. If the frame was not received successfully, then a negative acknowledgement, or NACK frame, is sent. (It is useful to point out here that the ACK/NACK does not indicate that the data in the frame is intact, but rather that the frame itself was received correctly. The Data Link Layer does provide error detection for the data in the frame; we will cover that later. It is also good to note that the error correction capabilities of the Data Link Layer are optional in a protocol, and can go unused if this service is provided higher up.) This works when the frame is received, but what if the frame disappears completely? To handle this, timers are used. As each frame is sent, a timer is started. If the timer runs out before an ACK or NACK is received, the frame is retransmitted. To avoid the receiver passing the same frame up to the Network Layer multiple times, a sequence number is assigned to each frame, so that the receiver can distinguish retransmissions from original frames.
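The receiver-side bookkeeping can be sketched as follows (a simplified illustration; the class and method names are my own, not from any real protocol stack):

```python
class Receiver:
    """Acknowledge every frame; use sequence numbers to drop duplicates
    caused by retransmission after a lost ACK."""

    def __init__(self):
        self.seen = set()
        self.delivered = []          # what gets passed up to the Network Layer

    def receive(self, seq: int, payload: str) -> str:
        if seq not in self.seen:     # a retransmission of seq is skipped here
            self.seen.add(seq)
            self.delivered.append(payload)
        return 'ACK'                 # the frame itself arrived intact

rx = Receiver()
rx.receive(0, 'hello')
rx.receive(0, 'hello')               # retransmission: the first ACK was lost
rx.receive(1, 'world')
assert rx.delivered == ['hello', 'world']  # no duplicate delivery upward
```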

In addition to the basic frame structure, the Data Link Layer can actually check to ensure that the physical bits in the frame at the receiving end are the exact same bits that were transmitted. There are two general ways to do this. The first is to send enough information along with the packet so that the receiver can figure out what the garbled character must have been. This procedure uses error-correcting codes. The second is to allow the receiver to deduce that an error occurred, but not what error or where, and request a retransmission of the frame. This method uses error-detection codes.

In error correction, the data is analyzed, and any errors found can be fixed. The methods we will cover here are based upon the work of Hamming. In comparing two binary words, such as 10001001 and 10110001, it is relatively easy to determine which bits differ, by Exclusive ORing, or XORing, the words. (When two digits are compared via XOR, the result shows whether the digits were alike or not. So 1 XOR 1 gives you a 0 as a result, since 1 and 1 are the same, whereas 1 XOR 0 gives you a 1 as a result, since 1 and 0 are different.) In the case of our words, there are 3 bits that differ, so they are said to have a Hamming distance of 3. To help detect and fix errors, check bits are used. These are bits within the word that, instead of containing data, are used to protect the integrity of the data. This means that if you are sending eight-bit words, instead of there being 2^8 (256) possible data values, there will be fewer, as the check bits take up space within that eight-bit word.
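The Hamming distance of the two words above falls out of a single XOR; in Python, for example:

```python
a, b = 0b10001001, 0b10110001
distance = bin(a ^ b).count('1')  # each 1 in the XOR marks a differing bit
assert distance == 3
```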

As an example, we take a word and number each bit starting at the left with bit 1. Each bit position that is a power of 2 (1, 2, 4, 8) becomes a check bit, and the remaining positions hold the data. Each check bit forces the parity of a collection of bits, including itself, to be even or odd. A given data bit can be covered by multiple check bits. For example, the bit at position 7 is checked by bits 1, 2, and 4 (1 + 2 + 4 = 7). So when the word arrives at the receiver, the parity of each check bit is examined (usually, 0 indicates even and 1 indicates odd). If the parity of a check bit is correct, a counter is left alone. If the parity is incorrect, the counter is incremented by the position of that check bit. So, if the checks at bits 1, 2, and 4 all fail, the counter would read 7, and that is the bit that was inverted. This allows the receiver to set bit 7 back to its proper value. In general, Hamming codes can only correct single errors. However, by arranging a block of words as an array, and sending the words out column-wise instead of row-wise, Hamming codes can be used to correct burst errors as well.
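A sketch of this scheme, using even parity and 1-based bit positions as described above (the function names and generic sizing are illustrative):

```python
def hamming_encode(data):
    """Place data bits at non-power-of-two positions (numbered from 1) and
    set each check bit p so that the positions containing p in their binary
    representation have even parity overall."""
    m = len(data)
    r = 0
    while (1 << r) < m + r + 1:
        r += 1
    n = m + r
    code = [0] * (n + 1)               # index 0 unused; positions are 1-based
    it = iter(data)
    for pos in range(1, n + 1):
        if pos & (pos - 1):            # not a power of two: a data bit
            code[pos] = next(it)
    for i in range(r):
        p = 1 << i
        for pos in range(1, n + 1):
            if pos != p and pos & p:
                code[p] ^= code[pos]   # even parity over the covered positions
    return code[1:]

def hamming_correct(received):
    """Re-check each parity group; the sum of the failing check positions
    names the flipped bit, which is then inverted back."""
    n = len(received)
    code = [0] + list(received)
    syndrome, p = 0, 1
    while p <= n:
        parity = 0
        for pos in range(1, n + 1):
            if pos & p:
                parity ^= code[pos]
        if parity:
            syndrome += p
        p <<= 1
    if syndrome:
        code[syndrome] ^= 1
    return code[1:], syndrome

word = hamming_encode([1, 0, 1, 1])    # 4 data bits -> 7-bit codeword
damaged = list(word)
damaged[4] ^= 1                        # flip the bit at position 5
fixed, pos = hamming_correct(damaged)
assert pos == 5 and fixed == word      # the syndrome names the bad position
```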

With error detection, the data is analyzed for correctness, but if an error is found, the frame is retransmitted rather than fixed. One of the most common ways of doing this is via a Cyclic Redundancy Code, or CRC. In this case, the word is treated as a polynomial with coefficients of 1 or 0 only. As an example, 11100011 would be a polynomial with coefficients 1, 1, 1, 0, 0, 0, 1, 1, the equivalent of x^7 + x^6 + x^5 + x + 1. Within CRC, polynomial division is used to create the CRC code. The first step is to agree on a generator polynomial, G(x), in advance. Both the high and low order bits of G(x) must be 1. To compute the checksum of a frame with m bits, corresponding to a polynomial M(x), the frame must be longer than G(x). The idea here is to append a checksum onto the end of the frame in such a way that the polynomial represented by the checksummed frame is evenly divisible by G(x). If the receiver finds a remainder after this division, the data was garbled somehow. The algorithm for this follows.

  1. Let r be the degree of G(x). Append r 0s to the end of the frame so that it contains m + r bits, corresponding to the polynomial x^r M(x).
       Frame:      1101011011
       G(x):       10011
       x^r M(x):   11010110110000

  2. Divide x^r M(x) by G(x) using modulo-2 division.
  3. If there is a remainder, subtract it from x^r M(x) using modulo-2 subtraction (which is simply subtraction via XOR). The result is the checksummed frame to be transmitted, or T(x). In our example, we would have a remainder of 1110. Subtracting 1110 from 11010110110000 gives us a T(x) of 11010110111110.
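The whole procedure can be sketched in a few lines, reproducing the numbers from the example above (the helper name is my own):

```python
def mod2_div(dividend: str, gen: str) -> str:
    """Long division over GF(2): subtraction is XOR, with no borrows.
    Returns the remainder as a string of len(gen) - 1 bits."""
    bits = list(dividend)
    for i in range(len(bits) - len(gen) + 1):
        if bits[i] == '1':
            for j, g in enumerate(gen):
                bits[i + j] = '1' if bits[i + j] != g else '0'
    return ''.join(bits[-(len(gen) - 1):])

frame, gen = '1101011011', '10011'        # M(x) and G(x) from the example
remainder = mod2_div(frame + '0000', gen)  # divide x^r M(x) by G(x)
tx = frame + remainder                     # T(x), the checksummed frame

assert remainder == '1110'
assert tx == '11010110111110'
assert mod2_div(tx, gen) == '0000'         # the receiver sees no remainder
```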

The ability of the CRC method to detect errors is extremely high. As an example, if a 16-bit CRC is used, then all single and double bit errors are caught, as well as all errors involving an odd number of bits, all burst errors of length 16 or less, 99.997% of 17-bit error bursts, and 99.998% of all bursts 18 bits or longer.

Regardless of which method is used, error correction and error detection both work well to help ensure reliable data delivery across networks.

Media Access Control

The final job of the Data Link Layer is to handle access to the media itself. In most LANs, there are four basic methods for handling media access control:

  1. Contention
  2. Token Passing
  3. Demand-priority
  4. Switched

The first method, contention, is the most widely used for now. This is the method used by such LAN types as Ethernet, Fast Ethernet, and 802.11 wireless networks. While widely used, this is also a fairly primitive form of media access control. Each time any device on a contention network needs to transmit data, it checks the wire to see if any other station is transmitting. If not, the device can transmit; otherwise, the device must wait for the media to clear up. This type of network also requires that all devices transmit and receive on the same frequency band. The media can only support a single signal at a time, and this signal takes up the entire band. In other words, this is a baseband transmission network. This carries two important implications: first, that only one device can transmit at a time, and second, that a device can transmit or receive, but not both at once. This is called half-duplex operation.

In a contention-based network, there are a number of things that are done to manage collisions, which occur when two stations attempt to transmit at the same time. These are based on frame size and timing. As an example, with IEEE 802.3 Ethernet, frame sizes are specified to be between 64 and 1518 octets. If a frame would be smaller than 64 octets, it is padded with 0s so that it is 64 octets in size. The reason for this ties into the second part of collision management: timing. If the minimum and maximum frame sizes are known, consistent quantities, then the amount of time it should take a frame to reach its destination can be accurately calculated. This is the time it would take for a frame to propagate across the entire network. In a contention-based, baseband transmission network, each frame must be sent over the entire LAN to ensure that all recipients can receive it, and the frame can be destroyed by a collision anywhere on the network. As the physical size of, and number of devices on, the network grow, the probability of collisions increases. One of the ways that modern networks avoid collisions is by using an inter-frame gap. This is a specified amount of dead air between frames. In a modern Ethernet network, the inter-frame gap is 96 bit times long. So when a device transmits, other devices must wait not only for the frame to be transmitted, but also for the inter-frame gap. This gives the transmitting device time to either send another frame, or relinquish control of the media, without contending with all the other devices on the LAN.
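The frame-size/timing relationship is easy to check. Assuming classic 10 Mb/s 802.3 Ethernet, the 64-octet minimum frame works out to the 51.2 µs slot time used in collision handling:

```python
# Assuming classic 10 Mb/s (IEEE 802.3) Ethernet:
min_frame_bits = 64 * 8                  # 64 octets on the wire
bit_rate = 10_000_000                    # bits per second

slot_time = min_frame_bits / bit_rate    # seconds to emit the smallest frame
assert min_frame_bits == 512
assert abs(slot_time - 51.2e-6) < 1e-12  # 51.2 microseconds
```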

Another method is the Binary Exponential Backoff Algorithm, which is used after a collision occurs. After a collision, time is divided into discrete slots, each the size of the time it takes a frame to travel round trip on the longest path allowed by the network type. In the case of 802.3 Ethernet, this is 51.2 µsec. After the first collision, the stations that were affected wait either 0 or 1 of these time slots before trying again. If they collide again, they pick from 0-3 time slots, wait, and try again. If a collision occurs yet again, then 0-7 time slots are used. In each case the number of possible slots is a power of two: 2^1 slots after the first collision, 2^2 after the second, 2^3 after the third, and so on, hence the name of the algorithm. This continues up to a maximum of 2^10 slots; once that number has been reached, the interval stops growing and the stations keep picking from that fixed range. If 16 consecutive collisions occur, the controller gives up and reports a failure back to the computer. Further recovery is then up to higher layers.
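A sketch of the backoff choice, using the 802.3 figures above (the names and structure are illustrative):

```python
import random

SLOT = 51.2e-6          # seconds, for 10 Mb/s 802.3
MAX_EXPONENT = 10       # the interval stops growing at 2**10 slots
MAX_ATTEMPTS = 16       # after 16 collisions the controller gives up

def backoff_delay(collision_count: int) -> float:
    """Random wait after the nth consecutive collision, in seconds."""
    if collision_count > MAX_ATTEMPTS:
        raise RuntimeError('excessive collisions: report failure upward')
    exponent = min(collision_count, MAX_EXPONENT)
    slots = random.randrange(2 ** exponent)   # 0 .. 2**n - 1 slots
    return slots * SLOT

random.seed(1)
assert backoff_delay(1) in (0.0, SLOT)        # first collision: 0 or 1 slot
assert 0 <= backoff_delay(3) < 8 * SLOT       # third collision: 0-7 slots
```

Doubling the interval on each collision is what lets the network adapt: a light collision load resolves in a slot or two, while a heavy load spreads retries across a wide window.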

Although collisions have a bad name, and rightfully so, they can be successfully managed through proper use of network design and devices, as well as using different types of network protocols in the areas where they are strongest.

The next type of media access control is a token passing mechanism, used most often in Token Ring and FDDI networks. A token is a special frame that moves from device to device on the ring (token-based networks use some sort of ring shape for their function), and only circulates when the network is idle. The frame is only a few octets in length, and contains a special bit pattern. If a device needs to transmit data, it 'captures' the token, and converts that bit pattern into a Start Of Frame (SOF) delimiter, which informs downstream devices that this is now a data-bearing frame, and that they have to wait until they get a token before they can transmit. The token is the sole way to access the network in this type of media access control. When a device receives the token, it has up to the network's default token-holding time, measured in milliseconds, to convert the token into a data frame. If it does not have any data, it must release the token to the next device in the ring. If it does have data, it converts the token to a data frame and begins transmitting. The recipient of the data then modifies the frame to show an ACK or NACK. Once the data transmission is complete, the originating station converts the frame back to a token, and sends it back onto the network. The advantages of a token-passing network are highest in situations where a predictable delay in data transmission is needed.

The obvious problem with a token-passing network is how to deal with the situation that arises when a transmitting station goes down, or drops off the ring. Without a method of dealing with this, the entire network could stop functioning, as there would be no way to convert the data frame back to a token frame. To handle this, the idea of a monitor station was introduced. The monitor station monitors the ring, and ensures that the ring does not go longer than a given time without the existence of a token. In the case we described above, the monitor (usually the first station on the ring; if that station goes down, a monitor contention algorithm elects a new monitor station) grabs the data frame, removes it from the ring, or drains it, and issues a new token. By using monitor stations, we avoid having an orphan frame endlessly circulating and preventing any other station from transmitting.

The third media access control method is the Demand Priority Access Method, or DPAM. DPAM is a round-robin arbitration method wherein a central repeater, or hub, polls each port connected to it. This is done in port order, and identifies the ports with transmission requests. Once the ports that need to transmit are identified, the priority of those ports is established. Normally, an idle port transmits an idle signal, indicating that it is not transmitting data. If a given port is cleared to transmit data, the hub tells it to cease transmission of the idle signal. When the port hears its own 'silence', it begins to transmit data. Once data transmission begins, the hub alerts all ports connected to it that they may be receiving data. The hub then analyzes the destination address in the frame, compares it to its own internal link configuration table, and routes the frame to the port that connects to the specified device.

In a DPAM network, the priority is controlled by the central, or root, hub. The overall priority for the network is also called the priority domain; this domain can include up to three levels of cascaded hubs. The central hub sends all traffic to the lower level hubs, which handle polling their own active ports once transmission has ceased. By using a priority mechanism, the problems of contention are avoided. No station can transmit twice in a row if other stations with equal or higher priority requests are pending. If a station is transmitting, a station with a higher priority cannot interrupt that transmission in progress; a higher priority request can, however, preempt a pending lower priority one. Finally, any lower priority request that has waited longer than 250ms is automatically raised to high priority status. Although more reliable than contention networks, and cheaper than token-passing networks, DPAM was never a marketplace contender, and is virtually nonexistent in the modern LAN.

The final media access control method, switching, isn't as clearly defined as the other three, but is being used more and more in modern LANs to increase performance and efficiency. In essence, a switch decreases the network size for a given transmission to three devices: the switch, the transmitter, and the receiver. This increases performance by giving the transmission the full bandwidth of the network, in effect creating a virtual network just for that transmission. It increases efficiency by decreasing the number of collisions on a contention network. Switching has also allowed the use of the VLAN, or virtual LAN, where traffic can be segregated by protocol, port number, hardware address, etc. This decreases the overall amount of traffic on the LAN, which in turn decreases the error rate. Switching can be used by token-passing networks as well.

Conclusion

Well, we covered a lot for one layer, and really barely scratched the surface of Layer 2. There are whole books available that deal with this Layer alone, as it is one of the most complex layers in the OSI model, and one of the most critical. Hopefully, you now have a better idea of how this layer works, and why it is as important as it is. As usual, you are encouraged to read up on your own; the sources listed in the bibliography are a good start. Next time, Layer 3, the Network Layer!

Bibliography and References

  • Tanenbaum, Andrew S. Computer Networks. Third Edition. Prentice Hall, 1996.
  • Sportack, Mark. Networking Essentials Unleashed. SAMS Publishing, 1998.

John Welch <jwelch@aer.com> is the Mac and PC Administrator for AER Inc., a weather and atmospheric science company in Cambridge, Mass. He has over fifteen years of experience at making computers work. His specialties are figuring out ways to make the Mac do what nobody thinks it can, and showing that the Mac is the superior administrative platform.

 

All contents are Copyright 1984-2011 by Xplain Corporation. All rights reserved.