Chapter 12 Remote Sensing Systems

You probably know that you are using your very own organic remote sensing system to read this sentence. Our eyes take in information from the world around us by detecting changes in light and relaying that information through the optic nerve to our brains, where we make sense of what we are seeing. As you learned in Chapter 11, this is what constitutes remote sensing - gathering information (“sensing”) about an object or phenomenon without coming into direct contact with it (“remote”). Whereas our eyes are limited to the visible light portion of the electromagnetic spectrum and by the location of our bodies, remote sensing systems use powerful sensors and flight-equipped platforms to paint a broader and deeper picture of the world around us. The picture in Figure 12.1 is an example of the beautiful imagery we can capture from space, taken by the GOES-1 satellite.


North and South America as seen from the NASA GOES-1 satellite. Captured from KeepTrack.space via NASA. [Copyright (C) 2007 Free Software Foundation, Inc.](https://fsf.org/)

Figure 12.1: North and South America as seen from the NASA GOES-1 satellite. Captured from KeepTrack.space via NASA. Copyright (C) 2007 Free Software Foundation, Inc.


Remote sensing systems range in size and complexity from a handheld camera to the Hubble Space Telescope and capture images of areas ranging from a few metres to several kilometres across. Though devices such as microscopes, X-ray machines, and handheld radios are technically remote sensing systems, the field of remote sensing typically refers to observing Earth at a small spatial scale (1:100 to 1:250,000).


NOTE

Remember, in spatial scale, “small” means a big picture. If you want a refresher on how to read and understand map scales, check out section 2.5 in Chapter 2.


The uses for remote sensing platforms are dazzling in number, allowing us to monitor severe weather events, ocean currents, land cover change, natural disturbances, forest health, surface temperature, cloud cover, urban development, and much more with high precision and accuracy. In this chapter, we will break down the how and where of remote sensing systems and discover a few of the different systems used for Earth observation today.


Your Turn!

What other “remote sensing systems” can you think of that you use day-to-day?


Learning Objectives

  1. Break down remote sensing technology into its basic components
  2. Understand how different settings and parameters impact remote sensing system outcomes
  3. Review the key remote sensing systems used in Canada and around the world for environmental management


Key Terms

Absorption, aerial, curvature, focus, geostationary, hyperspectral, LiDAR, multispectral, nadir, oblique, orbit, panchromatic, radiometric, reflection, refraction, resolution, spectral, sun-synchronous, thermal

12.1 Optical System Basics

Remote sensing systems contain a number of common components and operate using similar principles despite their differences in appearance. In the following sections, we will discover the technical specifications that allow remote sensing systems to “see” the world around us.

12.1.1 Lenses

Picture the view from a window onto a busy street on a rainy day: you have cars driving by with headlights, traffic lights reflecting off a wet road, raindrops pouring down the windowpane and distorting the view, and hundreds of people and objects on the street scattering light beams in every possible direction from a huge range of distances. In order for us to take in any of this, these light beams need to reach the retina, the photosensitive surface at the very back of the eye. How do our relatively tiny eyeballs take in all that disparate light and produce crystal-clear images for our brains? By using one of the most basic components of any optical system: a lens. A lens is a specially shaped piece of transparent material that, when light passes through it, changes the shape and direction of light waves in a desired way.


NOTE

The property of transparent media to change the direction of light beams is called refraction. This is why objects in moving water look all wibbly and misshapen. The arrangement of molecules within a medium disrupts both the direction and speed of the photons – the measure of this disruption is called the refractive index.


The lens at the front of your eyeball refracts light beams from varying distances precisely onto your retina. The optical systems on remote sensing platforms work the same way: they use a specially designed lens to focus light beams from the desired distance onto their own recording medium. Figure 12.2 below is a simple visualization of how optical systems focus light onto a desired point to produce an in-focus image.


Selected material from [How Focus Works](https://www.bhphotovideo.com/explora/photography/tips-and-solutions/how-focus-works). Copyright Todd Vorenkamp. Used with permission.

Figure 12.2: Selected material from How Focus Works. Copyright Todd Vorenkamp. Used with permission.


Now, picture another scene: you are scrolling through social media and you see a beautiful photo of Mt Assiniboine taken by your friend, a professional photographer based in the Canadian Rockies. You think to yourself, “Wow, that peak looks ENORMOUS! I want to visit and see it for myself.” So, you ask your friend exactly where they went, drive to Banff National Park, hike to the very same spot, and squint skywards. Hmm…though the mountain is still imposing, it is certainly not towering over you at close range as it was in the photo. You also notice several surrounding peaks that you couldn’t see before. Your friend’s picture was crystal-clear, and you have 20/20 vision, so you know it’s not an issue with focusing properly. Are you being deceived?

Actually, yes, you are – by both your eyes and the camera your friend used. We like to think that our eyes show us the world as it truly is and that everything else is a facsimile, but in truth, all optical systems alter the scenes around us to show us what we need to see. From an evolutionary standpoint, you can see why clear resolution of close-range objects would be of vital importance for humans – think distinguishing edible plants from poisonous ones, hunting prey, reading facial expressions, etc. We can make out human-sized objects up to a distance of about three kilometres in good lighting (https://www.livescience.com/33895-human-eye.html), but if you are interested in seeing something far away, such as a mountainside or a celestial body, you’ll have to trade in your natural close-range viewing abilities for a system specialized for distant details – e.g., binoculars or a telescope. The distance at which objects can be resolved, and how they appear in an image, is determined by the lens. Read on below to learn how different lens designs influence the appearance of a scene or object, and keep in mind how these designs may be used in various Earth observation applications.

Most, if not all, lenses on optical systems for remote sensing are spherical lenses, so called because each side of the lens is a section of a sphere, similar to a bowl. A convex optical surface curves outward from the lens centre, whereas a concave optical surface curves inward toward the lens centre. Though not spherical, a planar or flat optical surface may be used as well. A spherical lens is formed by joining two optical surfaces – concave, convex, and/or planar – back-to-back. A biconvex or positive lens combines two convex surfaces, and a biconcave or negative lens combines - you guessed it - two concave surfaces. The radius of curvature is the measure of how much an optical surface “bulges” or “caves.” If you imagine tracing the edge of the surface in an arc and continuing the curve all the way around in a circle, the radius of this imagined circle would be the radius of curvature. Biconvex and biconcave lenses can be “equiconvex” or “equiconcave,” meaning they have the same spherical curvature on each side, but they may also have uneven curvatures. The lens in the human eye is an example of a lens with uneven curvatures – its radius of curvature is larger (flatter) at the front. Figures 12.3 and 12.4 demonstrate how the radius of curvature is measured for both concave and convex optical surfaces.
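
If you ever need to put a number on that imagined circle, the radius of curvature of a spherical surface can be recovered from two simple measurements - the idea behind a spherometer. Below is a minimal sketch in R; the chord and sagitta values are made up for illustration.

```r
# Radius of curvature (R) of a spherical optical surface, estimated from a
# chord drawn across the surface (length c) and the sagitta (h) - the depth
# of the "bulge" or "cave" at the chord's midpoint: R = c^2 / (8 * h) + h / 2
radius_of_curvature <- function(chord_mm, sagitta_mm) {
  chord_mm^2 / (8 * sagitta_mm) + sagitta_mm / 2
}

# Hypothetical surface: a 40 mm chord that bulges 2 mm at its centre
radius_of_curvature(chord_mm = 40, sagitta_mm = 2)
#> 101 (mm)
```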


Measuring the radius of curvature for a convex optical surface. Claire Armour. [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)

Figure 12.3: Measuring the radius of curvature for a convex optical surface. Claire Armour. CC BY 4.0


Measuring the radius of curvature for a concave optical surface. Claire Armour. [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)

Figure 12.4: Measuring the radius of curvature for a concave optical surface. Claire Armour. CC BY 4.0


Your Turn!

What is the radius of curvature for a perfectly flat lens? See answer at end of chapter.


12.1.2 Focal Length

So, now that we know a little about what lenses look like, it’s time to figure out exactly how they influence an image’s journey from the front of the lens to the recording medium. As you might expect, different combinations of optical surfaces and radii of curvature behave in different ways. Remember that for an image to be in focus, we need to ensure the light beams land precisely on the recording medium or screen.

Convex optical surfaces cause light beams to converge, or focus, to a point behind the lens.

Concave optical surfaces cause light beams to diverge, or spread out, resulting in the light appearing to converge (focus) to a point in front of the lens.

The point where the light converges or appears to converge is called the focal point, and the distance between the focal point and the centre of the lens is called the focal length. For a converging lens, the focal length is positive; for a diverging lens, it is negative. Figures 12.5 and 12.6 illustrate the behaviour of light when travelling through a biconvex and a biconcave lens. See the paragraph below the diagrams for variable labels.


Measurements in a biconvex lens. [User:DrBob](https://en.wikipedia.org/wiki/User:DrBob). [CC BY 3.0 Unported](https://creativecommons.org/licenses/by/3.0/)

Figure 12.5: Measurements in a biconvex lens. User:DrBob. CC BY 3.0 Unported


Measurements in a biconcave lens. [User:DrBob](https://en.wikipedia.org/wiki/User:DrBob). [CC BY 3.0 Unported](https://creativecommons.org/licenses/by/3.0/)

Figure 12.6: Measurements in a biconcave lens. User:DrBob. CC BY 3.0 Unported


The optical power of a lens – the degree to which it can converge or diverge light – is the reciprocal of its focal length. Essentially, a “powerful” lens will be able to refract light beams at sharper angles, causing them to converge or appear to converge closer to the lens, i.e., at a smaller focal length. The Lensmaker’s Equation (Equation 1) allows us to calculate the focal length (\(f\)) and/or optical power (\(\frac{1}{f}\)) as a function of the radii of curvature (\(R_1\) and \(R_2\)), the thickness of the lens between the optical surfaces (\(d\)), and the refractive index of the lens material (\(n\)). Note that \(R_1\) is the radius of curvature of the front surface - the side of the lens closest to the origin of the light - and \(R_2\) is that of the back surface.

Equation 1: \[\frac{1}{f} = (n-1)\left[\frac{1}{R_1} - \frac{1}{R_2} + \frac{(n-1)\,d}{n R_1 R_2}\right]\]

This Wolfram-Alpha demonstration allows you to manually adjust the parameters of the Lensmaker’s Equation and see the outputs.
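
To make the relationship concrete, here is a minimal sketch in R that evaluates Equation 1; the function name and example values are purely illustrative and assume a simple “thin” lens measured in metres.

```r
# Lensmaker's Equation: optical power (1/f) from the radii of curvature,
# the lens thickness, and the refractive index of the lens material.
# Sign convention: R1 > 0 and R2 < 0 describe a biconvex lens.
lens_focal_length <- function(n, R1, R2, d = 0) {
  power <- (n - 1) * (1 / R1 - 1 / R2 + ((n - 1) * d) / (n * R1 * R2))
  1 / power  # focal length in metres; positive = converging, negative = diverging
}

# Example: an equiconvex glass lens (n = 1.52) with |R| = 0.10 m on each side,
# treated as thin (d = 0). The result is positive, i.e., a converging lens.
lens_focal_length(n = 1.52, R1 = 0.10, R2 = -0.10)
#> about 0.096 m, or a focal length of roughly 96 mm
```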

We know how lenses impact focal length, but how does focal length impact a photo? Let us return to our scenario in Banff National Park, where we have two mismatched images of the same mountain. You’ve done some investigating and found that the camera your friend used is very large (and expensive). The lens at the front is quite far from the recording medium in the body of the camera – many times the distance between your own eye lens and its recording medium, the retina. This difference in focal length is the cause of the differing images. A low optical power lens with a long focal length will have high magnification, causing distant objects to appear larger and narrowing the field of view (see section 12.1.4). A high optical power lens with a short focal length, such as your eye, will have low magnification and a larger field of view by comparison. Mystery solved!
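
As a rough, hypothetical illustration of that trade-off, the sketch below compares the horizontal angle of view of a short and a long lens. It assumes a simple rectilinear lens and a sensor 36 mm wide; both numbers are stand-ins, not specifications of any real camera.

```r
# Horizontal angle of view (degrees) of a simple rectilinear lens:
# aov = 2 * atan(sensor_width / (2 * focal_length))
angle_of_view <- function(focal_length_mm, sensor_width_mm = 36) {
  2 * atan(sensor_width_mm / (2 * focal_length_mm)) * 180 / pi
}

angle_of_view(24)   # short focal length: ~74 degrees - wide view, low magnification
angle_of_view(200)  # long focal length: ~10 degrees - narrow view, high magnification
```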


Your Turn!

When you return from your trip, your friend decides to test you on your new skills and shows you these additional photos they took of Mt Assiniboine from nearly identical spots on the same day. They ask you which photo was taken with a longer focal length lens. How can you know? Try this: from the peak of Mt Assiniboine (the very big one), draw a line straight downwards or cover half of the photo with a piece of paper, and then do the same for the other photo. Does the line or paper edge intersect the same points of the foreground in each photo? Can you see the same parts of the mountains in the foreground? Use the rock and snow patterns for reference. If the lenses had the same focal length, even with different cropping and lighting as seen here, the answers to both questions should be yes. Can you tell which photo was taken with a 67mm lens and which was taken with a 105mm lens? See the answers at the end of the chapter.



12.1.3 Principal Points

Wondering if this section is necessary. In order to explain principal points, we would need to explain principal planes, which requires an in-depth explanation of lens shapes, light physics, refraction, and sign conventions of light rays, which we truncated from the previous section. I suggest removing it if you don’t think it is key to the chapter.

12.1.4 Field of View

Though you are looking at a screen, there are likely other objects you can see above, below, and beside it. If someone walks into the room behind you, you may turn your head to look at them. If you hear the phone ringing in another room, you may walk over to see who is calling. All of these scenarios represent different uses of your field of view (FOV), or what you can see in a given moment.

Building and launching remote sensing systems is very costly, so it’s in our best interest to have them see as much as possible with the least expenditure of effort. We can maximize the FOV of a remote sensing system by giving it the freedom to “look around.” Humans have three degrees of motion that allow us to change our FOV: scanning with our eyes, swiveling our heads, and shifting the position of our bodies. Optical systems can have three analogous degrees of motion: the motion of the lens elements (eyes), the motion of the camera (head), and the motion of the platform (body).

Examples of optical systems with different degrees of motion are given in the table below.


| Degrees of motion | Moving sections | Example |
|-------------------|-----------------|---------|
| Zero (0) | None | Viewing wildlife through a remote camera trap mounted on a tree |
| One (1) | Lens elements | Adjusting the zoom of a camera mounted on a stationary object |
| One (1) | Camera | Looking through a floor telescope at various constellations |
| One (1) | Platform | Looking through handheld binoculars on a walk in nature |
| Two (2) | Lens elements + Camera | Using a wall-mounted security camera to look around a scene |
| Two (2) | Lens elements + Platform | Taking video of forest canopy with an RPAS with a zoom-enabled camera |
| Two (2) | Camera + Platform | Using a scanning camera on an airplane or satellite to capture aerial imagery |
| Three (3) | Lens elements + Camera + Platform | Using a camera to film a moving object, such as an athlete on a sports field |


Remote sensing systems typically have zero, one, or two degrees of motion; rarely do optical systems have all three. There are two reasons for this: first, that much adjustment is usually not necessary, and second, that more moving parts mean a higher possibility of malfunction, which is a real headache when the defunct system could be 700 km above the Earth’s surface.

When a remote sensing system has two or more degrees of motion, we introduce a second measure of FOV called the instantaneous field of view, or IFOV. The FOV tells us what the remote sensing system is capable of seeing through its entire range of motion. The IFOV tells us what the remote sensing system can detect in a single given moment in that range of motion. Frequently, the IFOV of a system is a single pixel.

FOV and IFOV are measured as the angle subtended by the camera’s view, typically expressed in degrees for the FOV and in milliradians or microradians for the IFOV.
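
As a rough sketch of how an angular IFOV translates into a footprint on the ground, the R snippet below applies simple trigonometry for a sensor looking straight down over flat terrain; the IFOV and altitude values are invented for illustration only.

```r
# Approximate ground footprint (metres) of an angular IFOV for a sensor
# looking straight down from altitude H: footprint = 2 * H * tan(IFOV / 2)
ifov_footprint_m <- function(ifov_mrad, altitude_m) {
  ifov_rad <- ifov_mrad / 1000          # milliradians to radians
  2 * altitude_m * tan(ifov_rad / 2)
}

# Hypothetical example: a 0.1 milliradian IFOV viewed from 700 km altitude
ifov_footprint_m(ifov_mrad = 0.1, altitude_m = 700000)
#> about 70 m of ground covered by a single pixel
```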


Did You Know?

Another hot summer day, another pesky fly buzzing around your home…and no matter how sneaky your approach is, the fly always seems to evade you and your swatter at the last possible second. You may be pleased to learn that it’s not your slow reflexes foiling you, but the fly’s advanced eyesight and a phenomenon called compound vision. Their relatively enormous “eyes” are in fact hundreds of smaller optical receptors called ommatidia that provide a nearly 360-degree view of the fly’s surroundings at any given time (https://www.nikonsmallworld.com/galleries/2019-photomicrography-competition/housefly-compound-eye-pattern). This huge field of view allows flies to steer through dense vegetation and, importantly, avoid being crushed by the latest print edition of the local newspaper.

12.2 Perspectives

All sighted creatures that we know of - save the water-dwelling copepods (https://askdruniverse.wsu.edu/2016/05/31/are-there-creatures-on-earth-with-one-eye/) - have two or more eyes. As the eyes are at different locations in space, each eye perceives a slightly different image. We also have precise information on the location of our eyes, the angle of our heads, and their distance from the ground surface. Our brains combine this information to create a three-dimensional scene. Our binocular (“two-eyed”) vision means we can estimate the size, distance, and/or location of most objects - no further information needed.

However, almost all remote sensing systems have monocular (“one-eyed”) vision, which limits them to producing flat, two-dimensional imagery. Using the image alone, we cannot readily measure the size, distance, and location of objects in a scene, nor can we compare it with other images of that location - a must for Earth observation applications! Much like the auxiliary information our brain uses to create a three-dimensional scene, we can make a two-dimensional image “spatially explicit” by measuring the following:

  1. The precise location of the camera in three-dimensional space
  2. The positioning or perspective of the camera

Finding the camera location is fairly straightforward. We can use a Global Positioning System (GPS) to record our exact coordinates. Depending on the platform - terrestrial, aerial, or spaceborne - we can use various tools to record the platform’s height, altitude, and/or elevation.

The camera perspective, including lens angle and direction, heavily influences how objects are perceived in imagery. Just as accidentally opening your phone camera in selfie mode is not ideal for a flattering photo of your face, there are favourable perspectives for observing different natural phenomena. It is therefore important to carefully select the best perspective for the intended use of your imagery.

The precise angle of the camera is also crucial. Thinking back to map projections in Chapter X, you will recall that representing our three-dimensional planet in a two-dimensional space causes certain regions to be heavily distorted in shape and size. You will also recall from earlier in this chapter how a camera’s optical power changes the way objects at varying distances are seen.
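
In practice, the camera location and perspective are stored alongside each image as metadata. The sketch below is purely hypothetical - the field names and values are invented for illustration - but it shows the kind of record that makes an image spatially explicit.

```r
# Hypothetical metadata record for a single image: where the camera was
# and how it was pointed at the moment of capture.
image_metadata <- list(
  latitude_deg  = 51.08,      # camera position recorded by GPS
  longitude_deg = -115.35,
  altitude_m    = 2100,       # height above sea level
  heading_deg   = 270,        # compass direction of the lens vector
  tilt_deg      = 0,          # 0 = pointing straight down (aerial perspective)
  captured_utc  = "2021-07-15 18:32:00"
)

image_metadata$tilt_deg == 0  # TRUE: this image was captured from an aerial perspective
```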

There are four camera perspectives used for Earth observation discussed here: aerial, nadir (pronounced NAYD-er), oblique, and hemispherical. Each one is briefly explained below with photos and some example applications.

12.3 Aerial Perspective

In an aerial perspective, the lens vector (optical axis) is perpendicular to the ground plane, pointed straight downwards at the ground. Figure 12.7 is an example.

Aerial imagery can be taken from remotely piloted aircraft systems (RPAS), airplanes, or satellites and thus has a huge range of resolutions and area coverage. It is highly sensitive to adverse weather, cloud cover or poor air quality, and variable lighting, so collection needs to be carefully timed or repeated at frequent intervals to account for unusable data.

Common applications:

- Mapping land cover and land use
- Assessing ecosystem disturbance frequency and severity
- Calculating indices such as the Normalized Difference Vegetation Index (NDVI) and Normalized Difference Burn Ratio (NDBR) - see the short sketch after this list
- Collecting climate and weather data - think thermal maps, storm tracking, coastline changes, etc.
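
For instance, NDVI is computed band-by-band from the red and near-infrared reflectance recorded in aerial or satellite imagery. Below is a minimal sketch in R; the reflectance values are made up for illustration.

```r
# NDVI = (NIR - Red) / (NIR + Red); values range from -1 to 1, with higher
# values generally indicating denser, healthier vegetation.
ndvi <- function(nir, red) {
  (nir - red) / (nir + red)
}

ndvi(nir = 0.45, red = 0.08)  # strongly vegetated surface: ~0.70
ndvi(nir = 0.20, red = 0.18)  # bare soil or stressed vegetation: ~0.05
```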

It’s important to note that the “ground plane” refers to a plane tangent to the geoid and not the physical ground surface. In variable terrain such as mountains, much of the ground will be seen “at an angle,” but the overall camera perspective is unchanged. See Figures 12.8 and 12.9 for a visualization of what this looks like for aerial imagery.

How aerial imagery should be taken. Claire Armour. [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)

Figure 12.8: How aerial imagery should be taken. Claire Armour. CC BY 4.0

How aerial imagery should NOT be taken. Claire Armour. [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)

Figure 12.9: How aerial imagery should NOT be taken. Claire Armour. CC BY 4.0

12.4 Nadir Perspective

In a nadir perspective, the lens vector (optical axis) is perpendicular to the ground plane, pointed straight upwards towards the sky. Figure 12.10 is an example of a nadir image.


For terrestrial ecosystem applications, nadir imagery will be taken from a terrestrial platform such as a handheld camera, a LiDAR apparatus, or a sub-canopy drone. Nadir imagery is less sensitive to cloud cover but will still be impacted by adverse weather and variable lighting. For example, an unmonitored camera placed on or near the ground can easily become obscured by dirt, water, or snow. Nadir perspective photos are ideally suited for capturing imagery where both the sky and structures on the ground need to be visible.

The term nadir has a second meaning which is unrelated to perspective: it refers to the point on the ground plane directly below a platform, regardless of that platform’s angle or perspective. For a scanning platform, the nadir is the lowest point in the scanning arc, typically on the ground. If you looked straight downwards at your feet from a standing position - an aerial perspective - your feet would be the nadir point. If you were lying on the ground looking at the sky - a nadir perspective - the place where your head rests on the ground would be the nadir point.

The opposite of nadir is the zenith. In the same analogy as before, if you were to stare at your feet from a standing position, the zenith would be where your head is. If you were to lie on the ground and look at the sky, the zenith would be directly above your head. Since there is no “sky plane” or point of reference in the upwards direction where we can clearly define the zenith height, this term more commonly refers to the highest point in the scanning arc for a moving platform. Think of it as a swing set: the starting position, where the swing is closest to the ground, is the nadir of your arc; the high point, where you slow to a stop and begin swinging the other way, is the zenith.

Common applications:

- Determining crown closure or canopy cover
- Viewing branch networks
- Measuring Leaf Area Index (LAI) for the upper canopy
- Astronomy and cosmological observation

12.5 Oblique Perspective

In an oblique perspective, the lens vector (optical axis) is not perpendicular to the ground plane - essentially, if it’s not aerial and it’s not nadir, it’s oblique. The vast majority of imagery you see on social media, in the news, or on TV is from an oblique perspective. Oblique imagery is ideally suited for comparing object sizes or viewing areas that would otherwise be occluded in aerial imagery. Nearly all terrestrial platforms take oblique imagery, and it is readily used by airborne and spaceborne platforms. A scanning platform will have an oblique perspective when it is not at the nadir or zenith of its scan arc. Figure 12.11 is an example of an oblique image of a natural area.

Common applications:

- Viewing and measuring forest understorey and mid canopy
- Assessing post-disturbance recovery
- Assessing wildfire fuel loading
- Providing context for aerial and nadir imagery
- Comparing individual trees or vegetation

12.6 Hemispherical Perspective

A hemispherical perspective has less to do with camera positioning and more to do with field of view, but it is still a “perspective.” It captures imagery in the half-sphere (hemisphere) directly in front of the lens. The extent of the hemisphere captured depends on the size and optical power of the hemispherical lens used in the camera. Figure X below visualizes a hemispherical perspective.

Due to the unusual shape of the lens, it captures a much larger proportion of a scene than we could normally take in without swivelling our heads or stitching photos into a mosaic, such as a panorama. A hemispherical lens will produce a circular rather than rectilinear output. The lens curvature will cause objects in the image to be highly distorted and, unlike rectilinear photos, the image cannot easily be divided into pixels representing equal ground areas for analysis. However, hemispherical perspectives are uniquely suited to viewing large expanses of a scene all at once. For this reason, they are highly favourable for sports cameras, security cameras, and environmental monitoring. Figures ?? and ?? show hemispherical perspectives from two very different angles.



Common applications:

- Astronomy and cosmological observation
- Tracking road and trail usage by wildlife, humans, and/or transport vehicles
- Measuring Leaf Area Index (LAI) for the entire canopy


12.7 Sensors

Sensor intro - Paul

12.8 Detectors

Paul

12.9 Analog-to-Digital Converters

Paul

12.10 Panchromatic Sensors

Paul

12.11 Multispectral Sensors

Paul

12.12 Hyperspectral Sensors

Paul

12.13 Platforms

Relocated intro text from Claire:

The surface of the Earth is around 510 million square kilometres, or 323 billion hockey rinks, so as you can imagine, capturing even a small percentage of that area is a daunting task. Recent advances in astrophysics and aerospace research have allowed us to conceive, build, and launch satellites from all over the globe to measure Earth from space. In fact, no fewer than 27,000 objects, including decommissioned platforms and debris, are currently whizzing through Earth’s orbit at eye-watering speeds of 7 kilometres per second or more (NASA, 2021). The image shown below is a visualization of all of these objects in Earth’s orbit. If you are curious about where individual satellites are located, check out “Stuff in Space,” a website that visualizes satellite orbits all over the globe using real-time data. Click here to open the webpage in your browser and try clicking some satellites to see their orbit and various parameters such as altitude, velocity, country of origin, and more.
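
As a back-of-envelope check on that speed, the sketch below estimates the circular orbital speed from Earth's standard gravitational parameter; the altitudes are examples only, and real orbits are rarely perfectly circular.

```r
# Circular orbital speed: v = sqrt(mu / r), where mu is Earth's standard
# gravitational parameter and r is the distance from Earth's centre.
orbital_speed_kms <- function(altitude_km) {
  mu_km3_s2  <- 398600    # Earth's gravitational parameter (km^3 / s^2)
  earth_r_km <- 6371      # mean Earth radius (km)
  sqrt(mu_km3_s2 / (earth_r_km + altitude_km))
}

orbital_speed_kms(500)     # low Earth orbit: ~7.6 km/s
orbital_speed_kms(35786)   # geostationary orbit: ~3.1 km/s
```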

12.14 Terrestrial Systems

Paul

12.15 Aerial systems

Paul

12.16 Satellite Systems

Paul

12.17 Orbital Physics

Paul

12.18 Low Earth Orbit

Paul

12.19 Medium Earth Orbit

Paul

12.20 Polar and Sun-synchronous Orbit

Paul

12.21 Geostationary Orbit

Paul

12.22 Lagrange Points

Paul

12.23 Important Satellite Systems for Environmental Management

12.24 Landsat

12.25 RADARSAT

12.26 Advanced Very High Resolution Radiometer (AVHRR)

12.27 Moderate Resolution Imaging Spectroradiometer (MODIS)

12.28 ICESat

12.29 Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER)

12.30 Defense Meteorological Satellite Program (DMSP)

12.31 Visible Infrared Imaging Radiometer Suite (VIIRS)

12.32 WildfireSat

:::: {.box-content .case-study-content}

12.33 Case Study: Title of Case Study Here

You see textual case study content here

Case Study Title Max of Forty Characters

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Praesent non magna non nunc auctor eleifend. Etiam fringilla fermentum nisl at volutpat. Duis sodales lorem interdum, posuere neque sollicitudin, malesuada nisl. Sed laoreet commodo dui et cursus. Curabitur vel porta mauris. In lacinia ac ex ac varius. Nullam at vulputate ligula. Nunc sed faucibus urna, eget placerat sapien. Fusce ut lorem a ante congue consequat et sed tortor. Integer neque urna, vehicula at aliquam non, luctus non mi. Pellentesque et massa vehicula, cursus erat id, aliquet sapien. Sed rhoncus vehicula risus, vel aliquam eros porta eget. Etiam maximus, massa in pretium semper, est libero feugiat ipsum, quis tempor nunc libero in nibh. Nunc sollicitudin mattis metus ut rutrum. Donec urna purus, sodales id nibh in, ultricies tincidunt nibh. Proin porta accumsan aliquet.

Cras convallis erat ante, ut tristique est tempor elementum. Nam mollis, ipsum at vehicula vestibulum, est magna finibus nisl, et laoreet metus massa et ipsum. Proin eget eros ac odio euismod volutpat et ac diam. Cras viverra ut libero vel pulvinar. Duis nisi magna, sagittis id ligula ac, efficitur commodo eros. Nulla tincidunt id nulla in lobortis. Sed non mi eu mi fermentum cursus. In elit velit, semper sed gravida at, imperdiet sed nibh. Aliquam quis massa malesuada, venenatis nunc sed, malesuada nulla. Vestibulum malesuada purus ut ex ullamcorper, ut blandit lacus lobortis. Curabitur scelerisque velit justo, quis porttitor purus efficitur ut. Phasellus nec arcu vestibulum, consequat ante id, elementum velit. Proin arcu tortor, cursus vitae sem id, congue semper urna. Integer tempus in est eu consequat. Donec sodales, quam vel finibus faucibus, leo ante dictum quam, id tempus ligula dolor quis erat. Curabitur non elementum sem. Mauris placerat fermentum orci non lacinia. Ut imperdiet dui lectus, ac malesuada felis euismod sed. Nulla non volutpat dui, in suscipit turpis.

Case studies should have at least one image or map (no more than 2 total) and the written length should be around 300 words (shown above). Any references to external literature should be hyperlinked with the Digital Object Identifier (DOI) permanent URL and entered into the bibliography. Avoid linking to external resources without a DOI and permanent URL. Contact Paul or try using the Leaflet package in R if you want to add an interactive web map.

12.34 Summary

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Ut in dolor nibh. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Praesent et augue scelerisque, consectetur lorem eu, auctor lacus. Fusce metus leo, aliquet at velit eu, aliquam vehicula lacus. Donec libero mauris, pharetra sed tristique eu, gravida ac ex. Phasellus quis lectus lacus. Vivamus gravida eu nibh ac malesuada. Integer in libero pellentesque, tincidunt urna sed, feugiat risus. Sed at viverra magna. Sed sed neque sed purus malesuada auctor quis quis massa.

Reflection Questions

  1. Explain ipsum lorem.
  2. Define ipsum lorem.
  3. What is the role of ispum lorem?
  4. How does ipsum lorem work?

Practice Questions

  1. Given ipsum, solve for lorem.
  2. Draw ipsum lorem.