Disclaimer—This paper partially fulfills a writing requirement for first year (freshman) engineering students at the University of Pittsburgh Swanson School of Engineering. This paper is a student, not a professional, paper. This paper is based on publicly available information and may not provide complete analyses of all relevant data. If this paper is used for any purpose other than these authors’ partial fulfillment of a writing requirement for first year (freshman) engineering students at the University of Pittsburgh Swanson School of Engineering, the user does so at his or her own risk.
FLASH MEMORY AS THE FUTURE FOR DATA STORAGE
Mason Kline, mpk50@pitt.edu, Mena 1:00PM, Davis Kuhn, dfk15@pitt.edu, Mena 1:00PM



Abstract—In the last decade, flash memory and its underlying NAND storage method (named for the “Not AND” logic gate its cell arrangement resembles) have made, and continue to make, huge strides in data storage solutions with their lightning-fast speeds.

Flash memory can drastically improve computing speeds, allowing data to be accessed up to one hundred times faster than with the current standard, mechanical memory. These blazing speeds are only achievable because flash memory has no moving parts. Flash devices are composed of floating gate transistors organized into arrays of memory cells. When charge is stored in a memory cell, the cell is read as either a “1” or a “0,” which translates into data storage.

However, flash memory has its place, just as mechanical memory does. In mass storage systems, the cost of the devices has a much higher priority than the access speed of the data, which is where mechanical memory prevails. In personal computing systems, the faster reading and writing speeds of flash memory are crucial because different applications are constantly being launched, requiring data to be accessed. From a quality of life perspective, flash memory improves the sustainability of computing as a whole. When it takes less time to read and write files to a storage unit, the user can be more productive in a work environment. Additionally, the longer life and improved reliability of flash memory-based devices is environmentally conscious, since it preserves the resources dedicated to building these devices. Flash memory is not yet the standard for data storage, but as flash devices become more available in consumer markets, the price will reach the point where it can be.
Key Words—Data storage, Flash memory, Floating gate transistor, Hard Disk Drive, NAND, Solid State Drive.
THE BIG THING IN COMPUTING
Computers have been revolutionizing the way society functions ever since they were first invented. The latest and greatest improvement to computing speeds over the past decade has been flash memory. Specifically, floating gate transistors (FGTs) used inside of NAND (named for the “Not AND” logic gate) memory cells have optimized the efficiency of computing speeds. Our focus will be on NAND-based flash memory, which consists of long series of FGTs each reading either a “1” or a “0,” thus providing a means of data storage. From the space station to the smartphone, many mainstays of the modern world have been invented or improved upon due to the inclusion of flash memory. Flash memory’s lack of moving parts allows it to be packed very tightly, which modern devices utilize to increase performance. Through the implementation of NAND memory cells containing FGTs, computing speeds have reached the fastest they have ever been.
HOW IT ALL STARTED
As with most technologies, data storage did not start out at the cutting-edge speeds and capacities it offers today. The first data storage solutions were massive enough to fill entire rooms, yet held only as much data as a few text files would occupy today. However, as time has progressed, so have data storage solutions. Today, impressive amounts of data can be stored in handheld devices at speeds that blow away their technological predecessors.
Magnetic Memory
The first steps in data storage were in the form of magnetic memory, which gets its name from utilizing magnets and mechanical parts to access the data. The earliest viable data storage device was the magnetic tape drive. According to the Computer History Museum, one of the earliest forms of the magnetic tape drive was the “Univac Uniservo tape drive,” introduced in 1951, which worked by reading thick bands of rotating magnetic tape. Each Univac tape weighed nearly three pounds and had a storage capacity of 1,440,000 single-digit numbers, which translates to roughly a megabyte of data [1]. To put this in perspective, a typical Microsoft Word document takes up about 22 kilobytes of memory today. Although this is a small amount of data by modern standards, the magnetic tape drive was revolutionary since it was the first way to effectively store data.

The second major step in data storage solutions was the floppy disk drive. According to the Computer History Museum, the first floppy disk drive was IBM’s “Minnow,” developed in 1968 as a read-only drive with a capacity of 80 kilobytes [1]. At this point, data storage solutions were becoming practical for storing reasonable amounts of information.

The third revolutionary step in data storage is a precursor to a product that is still in use today. According to the Computer History Museum, the first Hard Disk Drive (HDD) aimed at personal computers arrived in 1980: “The disk held 5 megabytes of data, five times as much as a standard floppy disk” [1]. Even though this particular HDD could only hold five megabytes, it is the predecessor of the HDDs still used in computing systems today.

The Compact Disc (CD), an optical rather than magnetic format, is the next major step in data storage. According to the Computer History Museum, the CD was developed in 1983 for music distribution. Later, in 1984, the CD-ROM was released, adapting the CD for computer data. A single CD-ROM could store an entire encyclopedia with over half of its storage space to spare [1]. At this point, removable media could hold large amounts of computer-readable information.

There have been countless innovations in magnetic and optical storage since the CD-ROM. However, magnetic memory has been approaching a point where it can no longer increase computing speeds. Because it relies on moving parts, a magnetic memory drive is only as fast as its slowest moving component, which is where flash memory comes into play.
Early Flash
Flash memory is, in a broad sense, the most recent major improvement to data storage. The main idea behind flash memory is that it has no moving parts, which allows for considerably faster reading and writing speeds than magnetic memory. Flash memory is a recent innovation, but it has already become popular enough to compete against the tried-and-true HDD.

One of the first flash memory based innovations was the SD (Secure Digital) card. The SD card format was introduced in 1999 with a tiny footprint and a capacity of 64 megabytes [1]. This allowed the cards to quickly become popular with cameras. Their small size and reliable construction fit perfectly inside of cameras, giving photographers a greatly improved capacity to store photos and videos.

A second major innovation that directly resulted from flash memory is the USB flash drive. According to the Computer History Museum, after USB flash drives were introduced to the consumer market in 2000, they quickly became the preferred method of transferring files between computers [1]. Since they are not prone to scratching (like a CD) or corruption from magnets (like a floppy disk), USB flash drives are a very reliable method of storing information in both short and long term settings.

The aforementioned flash memory based innovations are both beneficial in various aspects of computing, but neither is as all-encompassing as the Solid State Drive (SSD). The SSD is the flash memory based counterpart to the HDD. Since there are no moving parts in an SSD, its computing speeds are considerably faster than those of an HDD. To understand how flash memory improves reading and writing speeds, it is important to understand the components that make it work.


THE FLOATING GATE TRANSISTOR
FIGURE 1 [2]

A visual representation of a normal transistor.

FIGURE 2 [3]

A visual representation of a Floating Gate Transistor.
The Technology
The Floating Gate Transistor (FGT) is the most basic and integral part of modern flash memory. Without the development of the FGT, neither NAND nor NOR flash memory would exist. The NAND and NOR memory types are named after the types of logic gates that they resemble: NAND resembles a “Not AND” gate and NOR resembles a “Not OR” gate. The FGT bears several differences from a normal transistor. A normal transistor is designed to modulate an electric signal, using the voltage applied to its gate to control the current that flows between its source and drain. The FGT modulates an electric signal in the same way, but it also contains the eponymous floating gate. This floating gate is created by taking a transistor’s original gate and completely insulating it from the rest of the transistor with a dielectric material. After the original gate is insulated, a second gate is added on top. This new addition is not insulated and functions normally, allowing the FGT to operate as a normal transistor, but now with the added ability to store a charge within its floating gate. By storing charge in the floating gate, the FGT can be used as a storage system for binary code: a reading device senses how the stored charge shifts the transistor’s threshold voltage and turns that reading into either a “0” or a “1.”
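To make the read operation concrete, the following is a minimal Python sketch of reading a single-level cell as a bit. The 0.5 V reference threshold and the sensed voltages are invented placeholder numbers, not values from any real device; only the idea of comparing the sensed threshold voltage against a reference comes from the description above.

```python
# Illustrative sketch only: reading an SLC floating gate cell as a bit.
# The reference threshold (0.5 V) and the example voltages are made-up
# values, not figures from any datasheet.

def read_slc_cell(sensed_threshold_v: float, reference_v: float = 0.5) -> int:
    """Return the stored bit for a single-level cell.

    A charged floating gate raises the transistor's threshold voltage.
    In this toy convention, a raised threshold reads as 0 and an
    uncharged (low-threshold) cell reads as 1, mirroring the common
    convention that the erased state is 1.
    """
    return 0 if sensed_threshold_v > reference_v else 1

if __name__ == "__main__":
    print(read_slc_cell(0.8))  # charged gate -> 0
    print(read_slc_cell(0.1))  # uncharged gate -> 1
```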

To store these electrons, the floating gate must be constructed of a dielectric material that can both have a voltage read from it and insulate the stored electrons from any interference that might affect the FGT. This interference can come from a variety of sources, including external magnetic or electric fields and other FGTs located within the same device. Because the dielectric material needs to contain electrons so effectively, it is also very difficult to insert electrons into the floating gate in the first place. There are two methods by which this electron transfer is done.



The older, less used method of electron transfer is called Hot Carrier Injection. It creates what are called hot electrons, which have enough kinetic energy to penetrate the dielectric material shielding the floating gate. According to the data collected by E. Takeda and N. Suzuki, of the Central Research Laboratory in Tokyo, Japan, this method has several drawbacks that have led to its discontinuation for general use. First, this method can insert electrons but is not able to extract them from the floating gate. This means that memory developed with the Hot Carrier Injection method can only be programmed once [4]. Essentially, for later uses of the device, it must be completely reset for the floating gate to be reprogrammed. According to Dr. Thomas Shwarz, professor of computer engineering at Santa Clara University, this is done by shining UV light onto the chipset to energize the electrons to the point where they exit the floating gate [5]. Second, the Hot Carrier Injection method causes the dielectric material to experience much more wear, given that, to program a cell, electrons are essentially smashed into the dielectric shell and forced inside [4].

The more prevalent of the two methods in use is called Fowler-Nordheim tunneling [6]. Fowler-Nordheim tunneling is performed by creating an extremely large potential difference between the inside and outside of the dielectric shell, which enables electrons to pass through what is normally an impenetrable barrier. The same process in reverse is used to extract the electrons from the floating gate.
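The qualitative behavior of Fowler-Nordheim tunneling can be illustrated with a short sketch. The standard functional form has the tunneling current density growing roughly as J = A * E^2 * exp(-B / E) with the electric field E across the oxide; the constants A and B below are arbitrary placeholders rather than values for any particular dielectric, so only the steep, exponential sensitivity to the applied field should be read from the output.

```python
import math

# Illustrative sketch of the Fowler-Nordheim tunneling trend:
# J = A * E^2 * exp(-B / E). A and B are arbitrary placeholder constants,
# not parameters of any real oxide; only the shape of the curve matters.

def fn_current_density(field_v_per_m: float, a: float = 1e-6, b: float = 3e9) -> float:
    """Toy Fowler-Nordheim current density (arbitrary units)."""
    return a * field_v_per_m ** 2 * math.exp(-b / field_v_per_m)

if __name__ == "__main__":
    for field in (8e8, 1.0e9, 1.2e9):  # hypothetical fields in volts per meter
        print(f"E = {field:.1e} V/m -> J ~ {fn_current_density(field):.3e} (arb. units)")
```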
Later Developments
The originally developed FGT cell is referred to as a Single Level Cell (SLC). An SLC stores only one bit, distinguishing just two charge levels in the floating gate. Storing a single bit per cell was found to be inefficient when designing memory based on FGTs. As a result, researchers and engineers developed what are called Multi Level Cells (MLC), Triple Level Cells (TLC), and Quadruple Level Cells (QLC), which store two, three, or four bits per cell respectively by distinguishing among more charge levels. Each added bit doubles the number of distinct states a single cell can represent (as shown in Figure 3).
FIGURE 3 [7]

A visual representation of the bit combinations that can be stored in SLC, MLC, TLC, and QLC.
In an article by Scott Lowe, a Microsoft Certified Systems Engineer, it is shown that making use of this property also has a significant drawback. As more bits are packed into each cell, the read and write speeds of the device suffer, because it becomes more difficult to match the voltage held in the floating gate to one specific binary sequence [8].
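A short sketch makes the trade-off explicit: each additional bit per cell doubles the number of charge levels that must be distinguished, shrinking the voltage margin between adjacent levels. The 1 V sensing window used below is a made-up normalization, not a real device parameter.

```python
# Illustrative sketch: why packing more bits into one cell makes reads harder.
# The 0-1 V "window" is an arbitrary normalized range, not a real device spec.

def level_count(bits_per_cell: int) -> int:
    """A cell storing n bits must distinguish 2**n charge levels."""
    return 2 ** bits_per_cell

def level_margin(bits_per_cell: int, window_volts: float = 1.0) -> float:
    """Voltage spacing between adjacent levels in a hypothetical 1 V window."""
    return window_volts / level_count(bits_per_cell)

if __name__ == "__main__":
    for name, bits in (("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)):
        print(f"{name}: {level_count(bits)} levels, "
              f"~{level_margin(bits) * 1000:.0f} mV between levels")
```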

With the FGT’s ability to store a charge efficiently, with little loss, for long periods of time, it was quickly utilized by technology developers in non-volatile memory. Two main types of memory were developed using the properties of the FGT: Electrically Erasable Programmable Read-Only Memory (EEPROM) and Erasable Programmable Read-Only Memory (EPROM). These two types of memory differ mostly in the method of electron transfer they use. EEPROM makes use of Fowler-Nordheim tunneling, while EPROM makes use of Hot Carrier Injection. In addition, EPROM must be completely reset to be re-written, since Hot Carrier Injection does not provide a way to extract electrons. Both types of memory are now outdated, as flash memory has largely supplanted them due to its faster reading and writing speeds and more efficient handling of data.


INCORPORATING THE TECHNOLOGY
According to researchers at the Fraunhofer Institute of Integrated Systems and Device Technology, flash memory is the newest and most innovative implementation of the FGT [9]. Flash memory is in use in millions of devices around the world and comes in two distinct versions: NOR and NAND. NOR flash gets its name from the observation that its FGTs are arranged so that they act similarly to, and resemble, the NOR logic gate, which takes two inputs and creates one output. NOR memory is very akin to the older EPROM memory, as both use Hot Carrier Injection to program their floating gates. Although NOR flash memory does not need to be completely erased to be reprogrammed, the FGTs contained within it must be erased in large, connected blocks. NAND flash is the other form of flash memory and is the type used in most modern technology. NAND is formed from FGTs arranged like the NAND logic gate, meaning that the FGTs are connected in series instead of in parallel blocks as in NOR. NAND utilizes Fowler-Nordheim tunneling for both writing and erasing data, which wears the cells less than Hot Carrier Injection; data is written and read a page at a time, while erasure is still performed one block at a time [9].
FIGURE 4 [10]

A diagram of NAND (on the right) and NOR (on the left) flash memory.
NAND is much more prevalent in industry due to its lower cost per unit of memory and higher endurance with respect to read and write cycles. The product in which NAND memory is most heavily utilized is the Solid State Drive, a form of secondary memory for computers and data storage facilities. In NAND memory, the FGTs are placed into long series strings that share the same bit line, with individual transistors selected through their word lines. This series arrangement allows for much greater memory density than NOR flash, in which each FGT is individually connected to a bit line and therefore takes up more space [7].
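The practical consequence of this organization can be sketched as a toy model in which data is programmed a page at a time but erased only a whole block at a time, and a page must be erased (as part of its block) before it can be programmed again. The four-page block size below is an arbitrary choice to keep the example small; real NAND blocks contain far more pages.

```python
# Toy model of NAND granularity: programming happens per page, erasing per
# block, and a page must be erased before it can be programmed again.
# The 4-pages-per-block size is arbitrary, chosen only to keep the demo small.

class NandBlock:
    def __init__(self, pages_per_block: int = 4) -> None:
        self.pages = [None] * pages_per_block  # None means "erased"

    def program_page(self, index: int, data: bytes) -> None:
        if self.pages[index] is not None:
            raise ValueError("page already programmed; erase the block first")
        self.pages[index] = data

    def erase_block(self) -> None:
        # Erasure is only available at block granularity.
        self.pages = [None] * len(self.pages)

if __name__ == "__main__":
    block = NandBlock()
    block.program_page(0, b"hello")
    block.erase_block()          # wipes every page, not just page 0
    block.program_page(0, b"new data")
```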

VNAND is a relatively new development in flash memory that has caused memory density and total storage capacity to skyrocket. VNAND stands for Vertical NAND, and, not surprisingly, it adds a third dimension to the previously two-dimensional NAND memory. This memory stacking allows NAND cells to be laid on top of one another without having to connect an entirely new bit line. This is accomplished using charge trap flash, which works similarly to a FGT. The one difference between the two is that the charge trap is constructed mostly of an insulator surrounded by an oxide, whereas the floating gate is a conductor surrounded by oxide. According to a paper by Woo Young Choi, Hyug Su Kwon, Yong Jun Kim, and other members of the faculty at Sogang University, this change of material gives the charge trap extra insurance against defects: because the storage layer is an insulator, any defect or electron leakage affects only that specific cell. This extra insurance is taken advantage of when stacking NAND, because stacking is an imprecise process that often leads to more defects and problems than regular, horizontal NAND [11].


WORKING OUT THE KINKS
Data Seepage
As previously mentioned, the floating gate of an FGT is a conductive gate wrapped in insulating material, designed to encapsulate electrons and store them as binary data. To accommodate the Fowler-Nordheim tunneling used to insert and extract electrons, the dielectric material surrounding the floating gate is only a few nanometers thick. Storing electrons this way in NAND memory causes the floating gate’s dielectric material to deteriorate slightly every time electrons are inserted into or extracted from the gate. Over time this small amount of deterioration adds up, and eventually the floating gate is no longer able to completely insulate its contained electrons from outside interference. At even more advanced stages of deterioration, the electrons held by the floating gate can leak out completely, changing the data that is being stored inside of the floating gate [12].
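A toy wear model captures the idea that every program/erase cycle does a little damage to the tunnel oxide until the cell can no longer be trusted. The 3,000-cycle endurance budget below is an assumed illustrative figure, not a number taken from this paper or any datasheet.

```python
# Toy wear model: each program/erase cycle slightly degrades a block's tunnel
# oxide. The 3,000-cycle endurance budget is an assumed illustrative figure,
# not a number taken from this paper or any specific datasheet.

ASSUMED_ENDURANCE_CYCLES = 3_000

class WearTracker:
    def __init__(self) -> None:
        self.pe_cycles = 0

    def record_program_erase(self) -> None:
        self.pe_cycles += 1

    @property
    def worn_out(self) -> bool:
        # Past its endurance budget, the oxide may no longer retain charge
        # reliably, so the block should be retired from service.
        return self.pe_cycles >= ASSUMED_ENDURANCE_CYCLES

if __name__ == "__main__":
    block = WearTracker()
    for _ in range(3_200):
        block.record_program_erase()
    print(block.pe_cycles, "cycles, retire block:", block.worn_out)
```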

This deterioration and the problems that stem from it cause obvious issues for the longevity of any data storage device using the FGT. To improve endurance, many different dielectric materials have been investigated. This research has become especially important with the creation of Multi, Triple, and Quad level cells, which store more than one bit per floating gate and therefore subject the gate to much more wear and tear. To mitigate this problem, several advancements have been made, and quite a few materials have been proposed that should perform better than the oxides commonly used now.

In his paper, “Dielectric Scaling Challenges and Approaches in Floating Gate Non-Volatile Memories,” Stephen Keeney introduces one such material proposed to replace the oxides: the nanocrystal. These nanocrystals would be used to store electrons in much the same way that crystals are used to store electrons in quantum entanglement experiments [12]. Theoretically, if done this way, there would be no deterioration of the crystals. In practice there is some slight damage, but it is minuscule compared to the damage done by the presently used Fowler-Nordheim tunneling. There are some large drawbacks to the use of metallic nanocrystals, though, the biggest of which is the concern of metal contamination. This occurs when semiconducting materials, such as those used in small chips and data storage drives, are affected by a foreign material. An example would be the properties of the nanocrystal intermixing with the properties of a neighboring material, producing behavior different from the original. This causes a divergence from the expected performance and, in extreme cases, can cause serious malfunctions in chips.

Since these FGTs are being used to store data, it is not in anyone’s best interest to have the chips working differently than the way in which they were designed. If a chip works in a way contrary to its design, data stored in the memory could be unintentionally corrupted. In addition, some studies have called into question the effectiveness of metallic nanocrystals as an insulator. If the nanocrystal cannot properly insulate its stored electrons, there is no guarantee that the stored data will remain uncorrupted by outside sources. The more important problem with metallic nanocrystals, however, is that they are much more expensive to produce and implement than the currently used oxides [12].


Reliably Erasing Data
Although speed and reliability are important factors to consider in analyzing a new technology, they take a back seat when the security of the user is in question. For the HDD, there are tried-and-true methods that have been developed to overwrite single files and entire drives as well. When SSDs were making their first appearance in the consumer market, researchers at the University of California, San Diego took on the question of whether the methods that work with the HDD could be applied to reliably erase data from an SSD. They published a paper that empirically investigates the effectiveness of various methods of erasing data, entitled “Reliably Erasing Data from Flash-Based Solid State Drives.” To summarize their findings briefly, they determined that the currently known methods of erasing data on an SSD pose a greater risk to personal security [13].

In their investigation, the researchers began by investigating techniques to wipe the memory from an entire SSD, which fall under two categories: keeping the drive usable and destroying it [13]. Most of the time, a user will want to “overwrite” the data on their SSD and then continue to use it. To accomplish this, the researchers tested the drives’ on-board commands. After testing, they found that “while overwriting appears to be effective in some cases across a wide range of drives, it is clearly not universally reliable. It seems unlikely that an individual or organization expending the effort to sanitize a device would be satisfied with this level of performance” [13].
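The flavor of such a test can be sketched in a simplified, host-level form: stamp a recognizable fingerprint throughout a buffer standing in for the drive, attempt an erase, and count how many fingerprints survive. This is only loosely inspired by the cited study; the researchers actually read the raw flash chips directly, which this sketch does not attempt.

```python
# Simplified illustration of sanitization verification: write a recognizable
# "fingerprint" pattern, simulate an erase, then count surviving copies.
# This is a host-level toy on a byte buffer, not the cited researchers' method.

FINGERPRINT = b"FINGERPRINT-0001"

def write_fingerprints(drive: bytearray) -> None:
    """Stamp the fingerprint at the start of every 4096-byte 'page'."""
    for offset in range(0, len(drive), 4096):
        drive[offset:offset + len(FINGERPRINT)] = FINGERPRINT

def remnants_after_erase(drive: bytes) -> int:
    """Count fingerprint copies that survived whatever erase was attempted."""
    return drive.count(FINGERPRINT)

if __name__ == "__main__":
    drive = bytearray(64 * 1024)          # toy 64 KB "drive"
    write_fingerprints(drive)
    drive[:32 * 1024] = bytes(32 * 1024)  # pretend the erase only cleared half
    print("surviving fingerprints:", remnants_after_erase(bytes(drive)))
```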

The second option for wiping the memory from an entire SSD is to “degauss” it. In this method, the drive and the memory it contains are both meant to be destroyed. This process would let a user get rid of a drive without having to worry about someone accessing the memory it contained. Degaussing is performed by blasting the drive with alternating magnetic fields of 8,000 and 20,000 gauss. However, when the researchers performed this on a sample SSD, they found that the “data remained intact” [13].

However, it is quite rare that a user will need to erase all the data stored on an SSD more than once in its lifetime. The more common practice is to delete specific files to free up memory or get rid of sensitive information. In both cases (especially the latter), it is very important that the desired data be erased effectively.

When the researchers tested various methods to “scrub” data files, they got back positive results. While some of the methods were more time-effective than others, they all could erase memory fairly quickly. They concluded that “the time to scrub 1 GB varies, but in all cases the operation takes less than 30 seconds” [13]. These results are promising because they show that although not all methods of deleting single files worked, most of them were fast and efficient.

From the University of California researchers’ findings, it is apparent that SSDs still need some improvement to catch up to the HDD in terms of reliably erasing data. However, it is promising that “commands are effective when implemented correctly” [13]. Not every method the researchers tested worked, but with careful implementation and development, improvements to the SSD’s data erasing techniques will be able to securely delete the most sensitive information.


A NEW TAKE ON OLD STORAGE METHODS
Although NAND flash memory points toward more efficient computing solutions, it is not the answer to all our problems just yet. The existing technology in the HDD is still more effective in some areas of data storage. The main points of comparison between the two data storage methods are cost and computing power: the HDD is more cost effective, while the SSD has greater computing power.
Benefits of HDD over SSD
In the area of mass data storage, the old technology is still more desirable. For the most part, data access speeds are not crucial when it comes to storing large amounts of information, since the data is not being accessed very often. Also, cost is a very important factor, since it is going to take many high-capacity units to store the necessary information.

The most striking feature that separates the HDD from the SSD is the price. On average, an SSD is four times more expensive than an HDD [14]. Another distinct advantage of the HDD is its ability to effortlessly overwrite information. Unlike the SSD, where data needs to be erased before new data can be written, the HDD is able to directly overwrite information [15]. Beyond these two benefits, however, the HDD does not have any further distinct advantages. The HDD is a technology that is readily available and cost-effective, which is the main reason why it is still being used for mass storage today. If large quantities of drives need to be purchased, the lower price is almost always more alluring than the increased speed and computing power of the SSD.
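A small worked example shows how quickly that price ratio compounds at scale. Only the roughly four-to-one ratio comes from the text above; the $0.05-per-gigabyte HDD price and the 500 TB facility size are hypothetical placeholders.

```python
# Worked example of the cost gap, using the roughly 4-to-1 price ratio cited
# above. The $0.05-per-gigabyte HDD price and 500 TB capacity are hypothetical
# placeholders, not figures from the paper; only the ratio comes from the text.

HDD_DOLLARS_PER_GB = 0.05          # assumed illustrative price
SSD_DOLLARS_PER_GB = HDD_DOLLARS_PER_GB * 4

def array_cost(total_terabytes: float, dollars_per_gb: float) -> float:
    return total_terabytes * 1_000 * dollars_per_gb

if __name__ == "__main__":
    capacity_tb = 500  # a small storage facility's worth of drives
    print(f"HDD array: ${array_cost(capacity_tb, HDD_DOLLARS_PER_GB):,.0f}")
    print(f"SSD array: ${array_cost(capacity_tb, SSD_DOLLARS_PER_GB):,.0f}")
```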


Benefits of SSD over HDD
For most current high-tech devices, an HDD will no longer cut it. Stemming from its lack of moving parts, the SSD has improved speed and computing power over the HDD, which makes it a necessity in everything from cameras, to smartphones, to computers.

Arguably, the largest improvement that comes with the SSD is its improved computing speed. According to computer engineers from Ajou University in Korea, an SLC based drive can read data at 100 MB per second, an MLC based drive clocks in at 220 MB per second, and the HDD can read at 76.5 MB per second [15]. Beyond raw throughput, the SSD is able to access files immediately, without having to move a mechanical arm to find the correct location of the file; in accessing a single file, the SSD can gain an 8.9 second advantage [15]. When multiple files are being accessed in rapid succession, this near-elimination of seek time gives the SSD a huge speed advantage.
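Using the throughput figures cited above, a quick calculation shows how the gap plays out for a single large transfer. The 2 GB file size is a hypothetical example, and HDD seek time, which the SSD largely avoids, is not included, so the real-world gap is wider than these numbers alone suggest.

```python
# Worked example using the throughput figures cited above (100 MB/s SLC,
# 220 MB/s MLC, 76.5 MB/s HDD). The 2 GB file size is a hypothetical example;
# mechanical seek time, which only the HDD pays, is left out here.

READ_RATES_MB_PER_S = {"SLC SSD": 100.0, "MLC SSD": 220.0, "HDD": 76.5}

def transfer_seconds(file_size_mb: float, rate_mb_per_s: float) -> float:
    return file_size_mb / rate_mb_per_s

if __name__ == "__main__":
    file_size_mb = 2_000  # hypothetical 2 GB file
    for device, rate in READ_RATES_MB_PER_S.items():
        print(f"{device}: {transfer_seconds(file_size_mb, rate):.1f} s to read 2 GB")
```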

Another benefit of the SSD brought about by its lack of moving parts is its improved reliability. In the same article, the engineers analyze the Mean Time Between Failures (MTBF) for SLC based, MLC based, and HDD memory devices. The SLC and MLC based drives had MTBFs of 2,000,000 and 1,000,000 hours, respectively, while the HDD measured at 600,000 hours [15]. For personal computers and devices, this statistic matters little, since the existing technology would be obsolete long before a drive reached 500,000 hours (about 57 years). However, it has a much bigger impact in mass storage. Because the MTBF is an average, some drives will fail much earlier in that span and others later. Where the MTBF really comes into play is in data storage facilities that house thousands of drives, each susceptible to a similar failure rate. A longer time between failures allows a mass storage facility to minimize corrupted data files.
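A rough estimate shows why MTBF matters at facility scale: with a roughly constant failure rate, expected failures per year are about (number of drives) x (hours per year) / MTBF. The 10,000-drive fleet below is a hypothetical assumption; only the MTBF figures come from the cited comparison.

```python
# Rough expected-failure estimate for a drive fleet, using the MTBF figures
# cited above. The 10,000-drive facility size is a hypothetical assumption,
# and N * hours / MTBF is a simplification that assumes a constant failure rate.

HOURS_PER_YEAR = 8_766  # average year, including leap years

MTBF_HOURS = {"SLC SSD": 2_000_000, "MLC SSD": 1_000_000, "HDD": 600_000}

def expected_failures_per_year(drive_count: int, mtbf_hours: int) -> float:
    return drive_count * HOURS_PER_YEAR / mtbf_hours

if __name__ == "__main__":
    fleet = 10_000  # hypothetical facility
    for device, mtbf in MTBF_HOURS.items():
        print(f"{device}: ~{expected_failures_per_year(fleet, mtbf):.0f} failures/year")
```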

The final benefit of the SSD is its improved environmental sustainability. Not only do SSDs require less material to produce due to their smaller size, they also require less power to function. In a study performed by Tom’s Hardware in 2013, the reviewers measured how many input/output operations a HDD could perform per watt of power. After analyzing dozens of HDDs, they concluded that the most efficient performed at 49.40 inputs and outputs per watt [16]. In a similar study performed by Tom’s Hardware in 2014 on dozens of SSDs, the most efficient SSD performed at 25,453.43 inputs and outputs per watt [17]. Not only is the SSD less power-hungry than the HDD, it is roughly 515 times more efficient by this measure. This substantial difference in energy requirements alone makes a considerable difference in the sustainability of SSDs.

Overall, while HDDs have their applications, mainly in mass storage, SSDs have benefits that make them desirable for most computing applications. The biggest roadblock to utilizing SSDs in all aspects of data storage is their higher cost. The benefits are alluring, but for systems already built around HDDs, it might not be worth the time and money to switch to the newer technology. When it comes to new devices, however, SSDs are the clear choice because they improve computing speeds while also decreasing energy usage.
FLASH MEMORY BEYOND THE SSD
At this point, it is apparent that NAND based flash memory has brought about great changes for personal and industrial computing systems, but its effects are more far-reaching than just that. The average person will most likely interact with a device that uses flash memory several times a day. There are all sorts of applications for flash memory in consumer technology, but some of the most influential are the smartphone, camera, and flash drive.
Smartphones
One of the most far-reaching modern devices brought about by the invention of NAND flash memory is the smartphone. Since these devices take up such a small space and constantly require memory to be accessed from various locations, flash memory revolutionizes the size and capability of the technology inside of phones. According to a paper presented at the Design Automation Conference in 2015, smartphone applications are “switched much more frequently than those in desktops or servers” [18]. Before flash memory was created, it would not have been possible to access these different data locations in a timely manner. According to the same paper, “while an application is accessing hot data in one moment, another application might be launched in the next moment access other data” [18]. If the smartphone were developed using a mechanical hard drive for data storage, the time needed for the drive’s arm to find the location of the data would make it unbearably slow. Since so many people use smartphones on a daily basis, it would not be a stretch to say that flash memory has improved many people’s lives.
Cameras
Obviously, cameras have existed for far longer than flash memory, but it was not until the invention of NAND-based flash memory that the digital camera as we know it became practical. With older cameras, the picture was either immediately printed out (as with Polaroid cameras) or captured on a roll of film. With digital cameras, the pictures are stored on an SD card, allowing the photographer to hold considerably more pictures.

The chain of development for the digital camera goes as follows: flash memory allowed for the creation of the SD card, which allowed for the creation of the digital camera. According to an article in the IEEE Consumer Electronics Magazine, the SD card is made from a flash memory chip, a controller, and an interface card. The flash memory chip acts as the data storage portion of the device. The controller works to control the flow of data in and out of the SD card as well as help correct errors that come up. Finally, the interface card is the portion that connects the SD card to the host machine, allowing data to be exchanged freely [19]. A visual representation of this structure is shown in Figure 5.


FIGURE 5 [19]

A dissection of an SD card with labels.
Another benefit associated with the usage of flash memory in cameras is the reduction of waste produced, which improves their overall sustainability. Older cameras came in two variations: disposable and photographic film-based. A disposable camera could take a finite number of photos, after which the entire camera was taken to a store to have the pictures developed and was then thrown away. Film-based cameras were instead loaded with film canisters, small cylindrical units that stored a finite number of photos before needing to be replaced and the film developed. In contrast, according to SanDisk’s website, a digital camera with an 8 gigabyte SD card can hold 2,288 pictures that are 8 megapixels each [20]. Along with the increased capacity, digital cameras allow the user to transfer the pictures they have taken to a computer before having them printed, which cuts down on the ink and paper required to produce copies of photographs. Along with the environmental friendliness of fewer pictures being printed, this also saves the consumer money, since they do not need to print unwanted pictures.
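As a quick sanity check of the SanDisk figure cited above, an 8 gigabyte card holding 2,288 eight-megapixel photos works out to roughly 3.5 megabytes per photo. The assumption that the capacity is counted in decimal gigabytes (10^9 bytes) is made here for illustration and is not stated by SanDisk.

```python
# Quick check of the SanDisk figure cited above: an 8 GB card holding 2,288
# eight-megapixel photos implies roughly 3.5 MB per photo. Counting the
# capacity in decimal gigabytes (10**9 bytes) is an assumption made here.

CARD_BYTES = 8 * 10**9
PHOTO_COUNT = 2_288

if __name__ == "__main__":
    per_photo_mb = CARD_BYTES / PHOTO_COUNT / 10**6
    print(f"~{per_photo_mb:.1f} MB per 8-megapixel photo")
```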

Comparing flash memory-based cameras to their predecessors, the benefits of flash memory are not hard to spot. Improved photo capacity, sustainability, and convenience all contribute to the popularity of the digital camera.


Flash Drives
Another simple, yet effective innovation brought about by the creation of flash memory is the flash drive (USB stick). The main idea is this: flash drives work like small SSDs. According to the same article in the IEEE Consumer Electronics Magazine, flash drives are usually created from 3-bit memory cells, which “increase the storage capacity but reduce the number of times the device can be written before the cells wear out and no longer store new information” [19]. The fact that flash drives are built to maximize data storage and minimize cost makes sense, since they are widely available to consumers. The technology contained in the small plastic drives is not revolutionary, but it considerably improved the portability of consumer-level data transfer.
THE FUTURE OF FLASH MEMORY
With its greater reading and writing speeds and its improved reliability over mechanical memory, flash memory is where the future of data storage is headed. From a sustainability standpoint, flash memory improves productivity and quality of life. The improved design over mechanical memory allows users in working and recreational environments to spend less time waiting for files to be accessed and edited. The improved design begins with the smallest unit: the FGT. Based on the amount of charge stored in the transistor, the flash memory device reads either a “1” or a “0.” The transistors are then arranged into chains, which translate into longer lines of binary. This arrangement of FGTs constitutes NAND based flash memory, which is used in the SSD. Comparing the old standard for data storage, the HDD, to the SSD, there are both benefits and downfalls. The main disadvantage of the SSD is its greater price, which makes it a less desirable choice for mass storage solutions. However, the advantages of the SSD include greater reading and writing speeds, greater reliability, and fewer computing errors. All in all, the benefits of flash memory far outweigh its downfalls. When it comes to data storage in both personal and industrial settings, NAND based flash memory is the future.


SOURCES
[1] “Timeline of Computer History.” Computer History Museum. 2017. Accessed 2.19.2017. http://www.computerhistory.org/timeline/memory-storage/

[2] S. Ye. “Smartphone Futurology, Part 3.” Android Central. 1.29.2015. Accessed 3.28.2017. http://www.androidcentral.com/smartphone-futurology-3-chips

[3] “Floating Gate Transistor.” Wikimedia Commons. 1.10.2015. Accessed 3.28.2017. https://commons.wikimedia.org/wiki/File:Floating_gate_transistor-en.svg

[4] E. Takeda, N. Suzuki. “An Empirical Model for Device Degradation due to Hot-Carrier Injection.” IEEE. 8.9.2005. Accessed 2.26.2017. http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1483411

[5] T. Shwarz. “Floating Gate Basics.” Santa Clara University. 2003. Accessed 3.1.2017 http://www.cse.scu.edu/~tschwarz/coen180/LN/flash.html

[6] K. Takeuchi, T. Hatanaka, S. Tanakamaru. “Highly reliable, high speed and low power NAND flash memory-based Solid State Drives (SSDs).” IEICE Electronics Express. 4.25.2012. Accessed 1.25.2017. https://www.jstage.jst.go.jp/article/elex/9/8/9_8_779/_pdf

[7] A. P. Gemora. “TLC is Becoming the Mainstream for Solid State Drives in the Consumer Market.” Ilonggo Tech Blog. 2.29.2016. Accessed 2.28.2017. http://www.ilonggotechblog.com/2016/02/tlc-is-becoming-mainstream-for-solid-state-drives-in-consumer-market.html

[8] S. Lowe. “A Flash Storage Technical and Economic Primer.” Flash Storage. 5.30.2015. Accessed 1.11.2017. http://www.flashstorage.com/flash-storage-technical-economic-primer/

[9] S. Toumi, Z. Ouennoughi, K.C Strenger, L. Frey. “Determination of Fowler–Nordheim tunneling parameters in Metal–Oxide–Semiconductor structure including oxide field correction using a vertical optimization method” Science Direct. 4.29.2016. Accessed 3.1.2017. http://www.sciencedirect.com/science/article/pii/S0038110116300168

[10] “Embedded Systems Course- Module 16: Flash Memory Basics and Its Interface to a Processor.” EEHerald. 12.30.2016. Accessed 3.28.2017. http://www.eeherald.com/section/design-guide/esmod16.html

[11] W. Choi, H. Kwon, Y. Kim, B Lee, H. Yoo, S. Choi, G. Cho, S. Park. “Influence of Intercell Trapped Charge on Vertical NAND Flash Memory” IEEE. 12.21.2016. Accessed 3.1.2017. http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7792672&tag=1

[12] S. Keeney. “Dielectric Scaling Challenges and Approaches in Floating Gate Non-Volatile Memories.” Electrochemical Society. 4.2004. Accessed 2.26.2017. https://www.electrochem.org/dl/ma/206/pdfs/0866.pdf

[13] M. Wei, L. Grupp, F. Spada, S. Swanson. “Reliably Erasing Data from Flash-Based Solid State Drives.” Usenix. Accessed 1.10.2017. https://www.usenix.org/legacy/events/fast11/tech/full_papers/Wei.pdf

[14] J. S. Domingo. “SSD vs. HDD: What’s the Difference?” PC Magazine. 11.9.2016. Accessed 2.28.2017. http://www.pcmag.com/article2/0,2817,2404258,00.asp

[15] S. S. Rizvi, T. Chung. “Flash SSD vs HDD: High performance orient modern embedded and multimedia storage systems.” IEEE. 4.16.2010. Accessed 2.20.2017. http://ieeexplore.ieee.org.pitt.idm.oclc.org/document/5485421/

[16] “Performance per Watt Database.” Tom’s Hardware. 2013. Accessed 3.28.2017. http://www.tomshardware.com/charts/hdd-charts-2013/-31-Performance-per-Watt-Database,2919.html

[17] “Performance per Watt.” Tom’s Hardware. 2014. Accessed 3.28.2017. http://www.tomshardware.com/charts/ssd-charts-2014/Performance-per-Watt,2815.html

[18] R. Chen, Y. Wang, J. Hu, D. Liu, Z. Shao, Y. Guan. “Unified non-volatile memory and NAND flash memory architecture in smartphones.” IEEE. 1.19.2015. Accessed 2.19.2017. http://ieeexplore.ieee.org.pitt.idm.oclc.org/document/7059028/

[19] T. Coughlin. “Making the Connection [The Art of Storage].” IEEE. 4.11.2016. Accessed 2.19.2017. http://ieeexplore.ieee.org.pitt.idm.oclc.org/document/7450739/

[20] “Number of Pictures That Can Be Stored on a Memory Device.” SanDisk. Accessed 3.29.2017. https://kb.sandisk.com/app/answers/detail/a_id/69/~/number-of-pictures-that-can-be-stored-on-a-memory-device


ADDITIONAL SOURCES
J. Tjioe, A. Blanco, T. Xie, Y. Ouyang. “Making Garbage Collection Wear Conscious for Flash SSD.” IEEE. 6.2012. Accessed 1.25.2017. http://ieeexplore.ieee.org/document/6310885/

M. Fabiano, G. Furano. “NAND flash storage technology for mission-critical space applications.” IEEE. 10.2.2013. Accessed 1.25.2017. http://ieeexplore.ieee.org/document/6617096/?arnumber=6617096

S. Aritome. “NAND Flash Memory Revolution.” IEEE. 5. 2016. Accessed 1.25.2016. http://ieeexplore.ieee.org/document/7495285/

S. Boyd, A. Horvath, D. Dornfeld. “Life-Cycle Assessment of NAND Flash Memory.” IEEE. 10.14.2016. Accessed 1.25.2017. http://ieeexplore.ieee.org.pitt.idm.oclc.org/document/5601793/



W. Akin. “Understanding NAND’s Intrinsic Characteristics Critical Role in Solid State Drive (SSD) Design.” IEEE. 5. 2015. Accessed 1.25.2017. http://ieeexplore.ieee.org.pitt.idm.oclc.org/document/7150310/
ACKNOWLEDGEMENTS
We would like to thank Alyssa Srock, the co-chair for our conference session, for her assistance in directing our ideas and explaining the requirements of the conference paper more clearly. Next, we would like to thank Professor Prymus for her helpful comments and suggestions on improvements we could make for the paper through each step of the process. Then, we would like to thank our conference chair, Mr. Wunderley, for his time and help throughout the writing process of our paper. Finally, we would like to thank the University of Pittsburgh’s Swanson School of Engineering for allowing us to participate in the Seventeenth Annual Freshman Conference.



University of Pittsburgh Swanson School of Engineering

Submission Date 03.31.2017

