Categories
iOS Swift

Avoiding a subtle mistake when guarding mutable state with DispatchQueue

Last week, I spent quite a bit of time investigating an issue which sometimes happened and sometimes did not. There was quite a bit of code involved running on multiple threads, so tracking it down was not simple. No surprise to find that this was a concurrency issue. The issue lay in how a piece of mutable state was guarded with a DispatchQueue. The goal of this blog post is to revisit a pattern which looks nice at first but can actually cause issues down the road.

Let’s have a look at an example where we have a Storage class which holds data in a dictionary where keys are IDs and values are Data instances. There are multiple ways of guarding mutable state. In this example, we are using a concurrent DispatchQueue. Concurrent queues are not as optimized as serial queues, but the reasoning here is that we store large data blobs and concurrent reading gives us a slight benefit over serial reading. With concurrent queues we must make sure all reading operations have finished before we mutate the shared state, and therefore we use the barrier flag, which makes the queue finish all previously enqueued work items before running the barrier block.

final class Storage {
    private let queue = DispatchQueue(label: "myexample", attributes: .concurrent)
    private var _contents = [String: Data]()

    private var contents: [String: Data] {
        get {
            queue.sync { _contents }
        }
        set {
            queue.async(flags: .barrier) { self._contents = newValue }
        }
    }

    func store(_ data: Data, forIdentifier id: String) {
        contents[id] = data
    }

    // …
}

The snippet above might look pretty nice at first, since all the logic around synchronization is in one place, and we can use the contents property in other functions without needing to think about the queue. To validate that it works correctly, we can add a unit test.

func testThreadSafety() throws {
    let iterations = 100
    let storage = Storage()
    DispatchQueue.concurrentPerform(iterations: iterations) { index in
        storage.store(Data(), forIdentifier: "\(index)")
    }
    XCTAssertEqual(storage.numberOfItems, iterations)
}

The test fails because we actually have a problem in the Storage class. The problem is that contents[id] = data performs two operations on the queue: first it reads the current state using the property getter, and then it sets the modified dictionary with the setter. Let’s walk this through with an example where thread A calls the store function and tries to add a new key “d”, and thread B calls the store function at the same time and tries to add a new key “e”. The flow might look something like this:

Thread A calls the getter and gets an instance of the dictionary with keys “a, b, c”. Before thread A calls the setter, thread B also gets a chance to read the dictionary and sees the same keys “a, b, c”. Thread A reaches the point where it calls the setter and inserts a modified dictionary with keys “a, b, c, d”, and just after that thread B does the same but inserts a dictionary with keys “a, b, c, e”. When the queue finishes processing all the work items, the key “d” is lost, because thread B read the shared dictionary before thread A modified it. The moral of the story is that when modifying shared state, reading the initial state and setting the new value must be synchronized as a whole and can’t happen as separate work items on the synchronizing queue. That is exactly what happened here, since the dictionary’s subscript first runs the getter and then the setter.

The way to fix such issues is to use a single queue and make sure that the read and the write happen within the same work item.

func store(_ data: Data, forIdentifier id: String) {
    // Incorrect because read and write happen in separate blocks on the queue
    // contents[id] = data

    // Correct
    queue.async(flags: .barrier) {
        self._contents[id] = data
    }
}
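The unit test above also reads storage.numberOfItems, which is part of the code elided from the Storage class. A minimal sketch of how such a read-only accessor could look with the same queue-based approach; reads go through queue.sync, so they are ordered with the barrier writes:

// Hypothetical read-only accessor backing the unit test above;
// queue.sync orders the read with any pending barrier writes.
var numberOfItems: Int {
    queue.sync { _contents.count }
}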

An alternative approach to this Storage class, with the newer concurrency features in mind, could be to use an actor instead. Keep in mind that in that case we need to use await when accessing the storage, since actors are part of Swift concurrency. Using the await keyword in turn requires an async context, so it might not be straightforward to adopt.

actor Storage {
    private var contents = [String: Data]()

    func store(_ data: Data, forIdentifier id: String) {
        contents[id] = data
    }

    var numberOfItems: Int { contents.count }
}

// Example:
// await storage.store(data, forIdentifier: id)
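If the call site is synchronous code which cannot easily become async, one option is to hop into an async context with a Task. A minimal sketch, assuming a hypothetical LegacyCaller type that fires off the store without needing the result:

final class LegacyCaller {
    let storage = Storage()

    // Synchronous code cannot await directly, so we wrap the actor call
    // in a Task to get an async context.
    func didReceive(_ data: Data, id: String) {
        Task {
            await storage.store(data, forIdentifier: id)
        }
    }
}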

If this was helpful, please let me know on Mastodon @toomasvahter or Twitter @toomasvahter. Feel free to subscribe to the RSS feed. Thank you for reading.

Categories
Swift

AsyncPhoto for displaying large photos in SwiftUI

While working on one of my private projects which deals with showing large photos as small thumbnails in a list, I found myself needing something like AsyncImage but for any kind of data source. AsyncImage looks pretty great, but sadly it is limited to loading images from a URL. It has building blocks like providing placeholder and progress views. In my case, I needed something where instead of the URL argument I would have an async closure which returns image data. This would give me enough flexibility for different cases, like loading a photo from a file, or even doing what AsyncImage does and loading image data from a server. I would love to know why Apple decided to go for the narrow use-case of loading images from a URL rather than a more generic approach. In addition, I would like to pre-define the target image size, which allows me to scale the image down and therefore save memory, which would otherwise increase a lot when dealing with large photos. Enough talk, let’s jump in.

struct AsyncPhoto<ID, Content, Progress, Placeholder>: View where ID: Equatable, Content: View, Progress: View, Placeholder: View {
    @State private var phase: Phase = .loading

    let id: ID
    let data: (ID) async -> Data?
    let scaledSize: CGSize
    @ViewBuilder let content: (Image) -> Content
    @ViewBuilder let placeholder: () -> Placeholder
    @ViewBuilder let progress: () -> Progress

    init(id value: ID = "",
         scaledSize: CGSize,
         data: @escaping (ID) async -> Data?,
         content: @escaping (Image) -> Content,
         progress: @escaping () -> Progress = { ProgressView() },
         placeholder: @escaping () -> Placeholder = { Color.secondary }) {
        // …
    }

The AsyncPhoto type is generic over 4 types: ID, Content, Progress, and Placeholder. The last three are SwiftUI views and the ID is equatable. This allows us to notify AsyncPhoto when to reload the photo by calling the data closure, basically the same way task(id:priority:_:) works: if the id changes, the work item is run again. Since we expect to deal with large photos, we want to scale images before displaying them. Since the idea is that the view does not change its size while it is loading or displaying a placeholder, we require a pre-defined scaled size. The scaled size is used for creating a thumbnail image and also for setting AsyncPhoto’s frame view modifier to that size. We use a data closure here to give full flexibility on how the large image data is provided.

AsyncImage has a separate type, AsyncImagePhase, for defining the different states of the loading process. Since we need to do the same, let’s add AsyncPhoto.Phase.

extension AsyncPhoto {
    enum Phase {
        case success(Image)
        case loading
        case placeholder
    }
}

This allows us to use a switch statement in the view body and to define a local state for keeping track of which phase we are currently in. The view body implementation is pretty simple since we use view builders for the content, progress and placeholder states. Since we want to have a constant size, we use the frame modifier, and the task view modifier is the one that schedules the reload when the id changes.

var body: some View {
    VStack {
        switch phase {
        case .success(let image):
            content(image)
        case .loading:
            progress()
        case .placeholder:
            placeholder()
        }
    }
    .frame(width: scaledSize.width, height: scaledSize.height)
    .task(id: id, {
        await self.load()
    })
}

The load function updates the phase state and triggers the heavy work of scaling the image.

@MainActor func load() async {
    phase = .loading
    if let image = await prepareScaledImage() {
        phase = .success(image)
    }
    else {
        phase = .placeholder
    }
}

The prepareScaledImage function wraps the work of fetching the image data and scaling it.

private func prepareScaledImage() async -> Image? {
    guard let photoData = await data(id) else { return nil }
    guard let originalImage = UIImage(data: photoData) else { return nil }
    let scaledImage = await originalImage.scaled(toFill: scaledSize)
    guard let finalImage = await scaledImage.byPreparingForDisplay() else { return nil }
    return Image(uiImage: finalImage)
}

I am using a UIImage extension for scaling the image data. The implementation goes like this:

extension UIImage {
    func scaled(toFill targetSize: CGSize) async -> UIImage {
        let scaler = UIGraphicsImageRenderer(size: targetSize)
        let finalImage = scaler.image { context in
            let drawRect = size.drawRect(toFill: targetSize)
            draw(in: drawRect)
        }
        return await finalImage.byPreparingForDisplay() ?? finalImage
    }
}

private extension CGSize {
    func drawRect(toFill targetSize: CGSize) -> CGRect {
        let aspectWidth = targetSize.width / width
        let aspectHeight = targetSize.height / height
        let scale = max(aspectWidth, aspectHeight)
        let drawRect = CGRect(x: (targetSize.width - width * scale) / 2.0,
                              y: (targetSize.height - height * scale) / 2.0,
                              width: width * scale,
                              height: height * scale)
        return drawRect.integral
    }
}

Here is an example of using AsyncPhoto from my test app, where I replaced photos with generated image data.

// Example of returning a large image with a constant color for simulating loading a photo.
AsyncPhoto(id: selectedColor,
           scaledSize: CGSize(width: 48, height: 48),
           data: { selectedColor in
               guard let selectedColor else { return nil }
               return await Task.detached {
                   UIImage.filled(size: CGSize(width: 5000, height: 5000),
                                  fillColor: selectedColor).pngData()
               }.value
           },
           content: { image in
               image.clipShape(Circle())
           },
           placeholder: {
               Image(systemName: "person.crop.circle")
                   .resizable()
           })

SwiftUIAsyncPhotoExample (GitHub, Xcode 15.0.1)

If this was helpful, please let me know on Mastodon @toomasvahter or Twitter @toomasvahter. Feel free to subscribe to the RSS feed. Thank you for reading.

Categories
iOS Swift

Async-await and completion handler compatibility in Swift

The prominent way of writing async code before async-await arrived in Swift was using completion handlers. We pass in a completion handler which then gets called at some later time. When working with larger codebases, it is not straightforward to convert existing code to newer techniques like async-await. Often we make these changes over time, which means that in the case of wrapping completion handler based code, we would have the same function both in completion handler and async-await form. Fortunately, it is easy to wrap existing completion handler based code and provide an async-await version. The withCheckedThrowingContinuation() function is exactly for that use-case. It provides an object which receives the output of our completion handler based code, most of the time a value or an error. If we use the Result type in completion handlers, then it takes only 3 lines of code to wrap it, thanks to the fact that the continuation has a dedicated function for resuming with a result.

final class ImageFetcher {
    func fetchImages(for identifiers: [String], completionHandler: @escaping (Result<[String: UIImage], Error>) -> Void) {
        // …
    }
}

extension ImageFetcher {
    func fetchImages(for identifiers: [String]) async throws -> [String: UIImage] {
        try await withCheckedThrowingContinuation { continuation in
            fetchImages(for: identifiers) { result in
                continuation.resume(with: result)
            }
        }
    }
}
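If the existing completion handler does not use Result but passes a value and an optional error separately, the wrapping is almost as short. A minimal sketch, assuming a hypothetical fetchThumbnail(for:completionHandler:) API on the same ImageFetcher:

extension ImageFetcher {
    enum ThumbnailError: Error {
        case missingImage
    }

    // Hypothetical completion handler API with separate value and error.
    func fetchThumbnail(for identifier: String, completionHandler: @escaping (UIImage?, Error?) -> Void) {
        // …
    }

    // Async wrapper resuming the continuation with either the value or the error.
    func fetchThumbnail(for identifier: String) async throws -> UIImage {
        try await withCheckedThrowingContinuation { continuation in
            fetchThumbnail(for: identifier) { image, error in
                if let image {
                    continuation.resume(returning: image)
                } else {
                    continuation.resume(throwing: error ?? ThumbnailError.missingImage)
                }
            }
        }
    }
}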

Great, but what if we add new code to an existing codebase that relies heavily on completion handler based code? Can we start with an async function and wrap that as well? Sure. In the example below, we have some sort of DataFetcher which has an async function. If we needed to call this function from completion handler based code, we can add a wrapping function pretty easily. Later, when we have fully converted to async-await, it can be discarded just as easily. So how do we do it? We start the wrapping code by creating a Task which starts running automatically and which also provides an async context for calling async functions. This means that we can call the async function with try await and catch the error if it throws. Then it is just a matter of calling the completion handler. It depends on the use-case and how the code is meant to be used, but we should always think about which thread should be calling the completion handler. In the example, we always switch to the main thread because the Task’s closure runs on the global concurrent executor (in other words, on a background thread).

final class DataFetcher {
    func fetchData(for identifiers: [String]) async throws -> [String: Data] {
        // …
    }
}

extension DataFetcher {
    func fetchData(for identifiers: [String], completionHandler: @escaping (Result<[String: Data], Error>) -> Void) {
        Task {
            do {
                let data = try await fetchData(for: identifiers)
                await MainActor.run {
                    completionHandler(.success(data))
                }
            }
            catch {
                await MainActor.run {
                    completionHandler(.failure(error))
                }
            }
        }
    }
}

If this was helpful, please let me know on Mastodon @toomasvahter or Twitter @toomasvahter. Feel free to subscribe to the RSS feed. Thank you for reading.

Categories
iOS Swift

TaskGroup error handling in Swift

Task groups in Swift are used for running an arbitrary number of child tasks, and they handle things like cancellation and different priorities for us. Task groups are created with either withThrowingTaskGroup(of:returning:body:) or withTaskGroup(of:returning:body:). The latter is for cases when errors are not thrown. In this blog post, we will look at two cases of generating Data objects using a task group. In the first case, we want to stop the group as soon as an error occurs and discard all the remaining work. The second case ignores any errors in child tasks and just collects the Data objects of the tasks which were successful.

The example we are going to use simulates creating image data for multiple identifiers and then returning an array of Data objects. The actual image creation and processing is simulated with Task’s sleep function. Since task groups propagate cancellation to all the child tasks, the processor implementation also calls Task.checkCancellation() so it can react to cancellation and stop as soon as possible, avoiding unnecessary work.

struct ImageProcessor {
    static func process(identifier: Int) async throws -> Data {
        // Read image data
        try await Task.sleep(nanoseconds: UInt64(identifier) * UInt64(1e8))
        try Task.checkCancellation()
        // Simulate processing the data and transforming it
        try await Task.sleep(nanoseconds: UInt64(1e8))
        try Task.checkCancellation()
        if identifier != 2 {
            print("Success: \(identifier)")
            return Data()
        }
        else {
            print("Failing: \(identifier)")
            throw ProcessingError.invalidData
        }
    }

    enum ProcessingError: Error {
        case invalidData
    }
}

Now we have the processor created. Let’s see an example of calling this function from a task group. As soon as we detect an error in one of the child tasks, we would like to stop processing and return an error from the task group.

let imageDatas = try await withThrowingTaskGroup(of: Data.self, returning: [Data].self) { group in
    imageIdentifiers.forEach { imageIdentifier in
        group.addTask {
            return try await ImageProcessor.process(identifier: imageIdentifier)
        }
    }
    var results = [Data]()
    for try await imageData in group {
        results.append(imageData)
    }
    return results
}

We loop over the imageIdentifiers array and create a child task for each of them. When the child tasks are created and running, we wait for them to finish by looping over the group and awaiting each child task. If a child task throws an error, then the for loop re-throws it, which makes the task group cancel all the remaining child tasks and return the error to the caller. Since we loop over the tasks and wait for each one to finish, the group throws the error of the first failing task it encounters. Also, a reminder that cancellation needs to be handled explicitly by the child task’s implementation by calling Task.checkCancellation().

Great, but what if we would like to ignore errors in child tasks and just collect the Data objects of all the successful tasks? This can be implemented with the withTaskGroup function by making the child task’s return type optional and handling the error within the child task’s closure. If an error is thrown, we return nil, and later, when looping over the child tasks, we ignore nil values with AsyncSequence’s compactMap().

let imageDatas = await withTaskGroup(of: Data?.self, returning: [Data].self) { group in
    imageIdentifiers.forEach { imageIdentifier in
        group.addTask {
            do {
                return try await ImageProcessor.process(identifier: imageIdentifier)
            } catch {
                return nil
            }
        }
    }
    var results = [Data]()
    for await imageData in group.compactMap({ $0 }) {
        results.append(imageData)
    }
    return results
}

If this was helpful, please let me know on Mastodon @toomasvahter or Twitter @toomasvahter. Feel free to subscribe to the RSS feed. Thank you for reading.

Categories
Combine iOS Swift

Async-await support for Combine’s sink and map

Async-await in Swift is getting more popular as time goes by, but Combine publishers do not currently have built-in support for it. In this blog post, we’ll see how to extend some of the existing publishers.

Async-await supported sink

One case where I have encountered this is when I have wanted to call an async function in sink. Although I could wrap the call with a Task within the sink subscriber, it gets unnecessarily long if I need to do it in many places. Instead, we can do it once and add an async-await supporting sink subscriber.

extension Publisher where Self.Failure == Never {
    func sink(receiveValue: @escaping ((Self.Output) async -> Void)) -> AnyCancellable {
        sink { value in
            Task {
                await receiveValue(value)
            }
        }
    }
}

// Allows writing sink without Task
$imageURL
    .compactMap({ $0 })
    .sink { [weak self] url in
        await self?.processImageURL(url)
    }
    .store(in: &cancellables)

Async-await supported map

The Combine framework has map and tryMap for supporting throwing functions, but it is lacking something like tryAwaitMap for async throwing functions. Combine has a publisher named Future which supports performing asynchronous work and publishing a value. We can use it to wrap a Task that performs the asynchronous work. Another piece is the flatMap operator, which turns values from one publisher into a new publisher. Therefore, we can combine these to turn the upstream publisher’s output into a new publisher of type Future. The first tryAwaitMap below is for the case where the upstream publisher emits errors, and the second one is for the case where the upstream does not emit errors. We need to handle these separately because we need to tell Combine how the error types are handled (a non-throwing publisher has its failure type set to Never).

extension Publisher {
    public func tryAwaitMap<T>(_ transform: @escaping (Self.Output) async throws -> T) -> Publishers.FlatMap<Future<T, Error>, Self> {
        flatMap { value in
            Future { promise in
                Task {
                    do {
                        let result = try await transform(value)
                        promise(.success(result))
                    }
                    catch {
                        promise(.failure(error))
                    }
                }
            }
        }
    }

    public func tryAwaitMap<T>(_ transform: @escaping (Self.Output) async throws -> T) -> Publishers.FlatMap<Future<T, Error>, Publishers.SetFailureType<Self, Error>> {
        // The same implementation but the returned publisher transforms failures with SetFailureType.
    }
}
// Case 1: throwing upstream publisher
$imageURL
    .tryMap({ try Self.validateURL($0) })
    .tryAwaitMap({ try await ImageProcessor.process($0) })
    .map({ Image(uiImage: $0) })
    .sink(receiveCompletion: { print("completion: \($0)") },
          receiveValue: { print($0) })
    .store(in: &cancellables)

// Case 2: non-throwing upstream publisher
$imageURL
    .compactMap({ $0 })
    .tryAwaitMap({ try await ImageProcessor.process($0) })
    .map({ Image(uiImage: $0) })
    .sink(receiveCompletion: { print("completion: \($0)") },
          receiveValue: { print($0) })
    .store(in: &cancellables)

If this was helpful, please let me know on Mastodon @toomasvahter or Twitter @toomasvahter. Feel free to subscribe to the RSS feed. Thank you for reading.

Categories
iOS Swift Swift Package

Handling never finishing async functions in Swift package tests

Why does my CI never finish and post a message to the merge request? I logged in to CI and, oh, my merge job had already been running for 23 minutes, although typically it finishes in 4 minutes. What was going on? Nothing else than a unit test marked with async still waiting for an async function to finish. So what can we do to avoid this? Let’s first create a Swift package which demonstrates the issue.

struct ImageLoader {
    func loadImage(for identifier: String) async throws -> UIImage {
        // Delay for 100 seconds
        try await Task.sleep(nanoseconds: UInt64(100 * 1e9))
        return UIImage()
    }
}

And a simple unit-test for the successful case.

final class ImageLoaderTests: XCTestCase {
    func testLoadingImageSuccessfully() async throws {
        let imageLoader = ImageLoader()
        _ = try await imageLoader.loadImage(for: "identifier")
    }
}

This test passes after 100 seconds, but clearly, we do not want to wait so long if something takes way too much time. Instead, we want to fail the test when it is still running after 5 seconds.

Exploring XCTestCase executionTimeAllowance

XCTestCase has a property called executionTimeAllowance which we can set. Ideally, I would like to write something like executionTimeAllowance = 5 and have Xcode fail the test with a timeout failure after 5 seconds.

override func setUpWithError() throws {
    executionTimeAllowance = 5 // gets rounded up to 60
}

But if we read the documentation, it mentions that the value set to this property is rounded up to the nearest minute. In addition, this value is not used unless you enable it explicitly: “To use this setting, enable timeouts in your test plan or set the -test-timeouts-enabled option to YES when using xcodebuild.” If we are working on a Swift package, then I am actually not sure how to set it in the Package.swift so that it gets applied when running the tests from Xcode or from the command line.

Custom test execution with XCTestExpectation

One way to avoid never finishing tests is to use the good old XCTestExpectation. We can set up a method which runs the async work and then waits for the test expectation with a timeout. If a timeout occurs, the test fails. If the async function throws an error, we capture it and fail the test with XCTFail.

final class ImageLoaderTests: XCTestCase {
    func testLoadingImageSuccessfully() {
        execute(withTimeout: 5) {
            let imageLoader = ImageLoader()
            _ = try await imageLoader.loadImage(for: "identifier")
        }
    }
}

extension XCTestCase {
    func execute(withTimeout timeout: TimeInterval, file: StaticString = #filePath, line: UInt = #line, workItem: @escaping () async throws -> Void) {
        let expectation = expectation(description: "wait for async function")
        var workItemError: Error?
        let captureError = { workItemError = $0 }
        let task = Task {
            do {
                try await workItem()
            }
            catch {
                captureError(error)
            }
            expectation.fulfill()
        }
        waitForExpectations(timeout: timeout) { _ in
            if let error = workItemError {
                XCTFail("\(error)", file: file, line: line)
            }
            task.cancel()
        }
    }
}

If this was helpful, please let me know on Mastodon @toomasvahter or Twitter @toomasvahter. Feel free to subscribe to the RSS feed. Thank you for reading.

Categories
iOS Swift SwiftUI

Wrapping delegates for @MainActor consumers in Swift

Sometimes we need to handle delegates in a class which has the @MainActor annotation. Often it is a view model where we expect the code to run on the main thread. View models have the @MainActor annotation because we want their methods to run on the main thread when interacting with other async code. In the example below, we’ll be integrating a delegate-based ImageBatchLoader class which calls its delegate methods on a background thread. The end goal is to handle the delegate in a view model and make sure the handling runs on the main thread.

final class ImageBatchLoader {
    weak var delegate: ImageBatchLoaderDelegate?

    init(delegate: ImageBatchLoaderDelegate) {
        self.delegate = delegate
    }

    func start() {
        DispatchQueue.global().async {
            self.delegate?.imageLoader(self, didLoadBatch: [UIImage()])
        }
    }
}

protocol ImageBatchLoaderDelegate: AnyObject {
    func imageLoader(_ imageLoader: ImageBatchLoader, didLoadBatch batch: [UIImage])
}
An example ImageBatchLoader with a stubbed-out start method.

This is an example of a class which uses a delegate and calls the delegate methods from a background thread. If we have a view model with the @MainActor annotation, then we can’t simply conform to that delegate protocol since it has no async-await support. Xcode would show a warning saying that the protocol is non-isolated. The protocol would be isolated if it had, for example, a @MainActor annotation of its own, as sketched below. Let’s say this is not possible because it is third-party code.
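For illustration, if we did control the protocol, isolating it could look like this; it is the option we are assuming is not available here:

// Hypothetical variant: annotating the protocol itself with @MainActor
// makes every conformance main-actor isolated, so a @MainActor view model
// could conform to it directly.
@MainActor protocol ImageBatchLoaderDelegate: AnyObject {
    func imageLoader(_ imageLoader: ImageBatchLoader, didLoadBatch batch: [UIImage])
}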

The solution I have personally settled on is creating a wrapper class which conforms to the delegate protocol and then uses main-actor-bound closures to notify when any of the delegate callbacks happen.

final class ImageBatchLoaderHandler: ImageBatchLoaderDelegate {
    var didLoadBatch: @MainActor ([UIImage]) -> Void = { _ in }

    func imageLoader(_ imageLoader: ImageBatchLoader, didLoadBatch batch: [UIImage]) {
        print("isMainThread", Thread.isMainThread, #function)
        Task {
            await didLoadBatch(batch)
        }
    }
}

Here we have a class which conforms to ImageBatchLoaderDelegate and provides a didLoadBatch closure with a @MainActor annotation. Since we use @MainActor and tap into async-await concurrency, we need an async context as well, which the Task provides.

@MainActor final class ViewModel: ObservableObject {
    private let imageLoader: ImageBatchLoader
    private let imageLoaderHandler: ImageBatchLoaderHandler

    init() {
        imageLoaderHandler = ImageBatchLoaderHandler()
        imageLoader = ImageBatchLoader(delegate: imageLoaderHandler)
        imageLoaderHandler.didLoadBatch = handleBatch
        imageLoader.start()
    }

    func handleBatch(_ batch: [UIImage]) {
        print("isMainThread", Thread.isMainThread, #function)
        // redacted
    }
}

Finally, we have hooked up the image loader and its handler, and forwarded didLoadBatch to a separate function which is part of the view model. With a little bit of code, we achieved what we wanted: listening to delegate callbacks and forwarding them to the view model on the main thread. If we run the code, we can see that the delegate callback runs on a background thread but the view model method runs on the main thread.

isMainThread false imageLoader(_:didLoadBatch:)
isMainThread true handleBatch(_:)

If this was helpful, please let me know on Mastodon @toomasvahter or Twitter @toomasvahter. Feel free to subscribe to the RSS feed. Thank you for reading.

Categories
iOS Swift

View modifier for preparing view data in SwiftUI

SwiftUI has view modifiers like onAppear() and onDisappear() for letting a view know when it is going to be displayed and when it is removed from the screen. In addition, there is the task() view modifier for running async functions. Something to keep in mind with onAppear() and task() is that the closure passed into the view modifier can be called multiple times when the view hierarchy changes. For example, with a TabView, the view receives the onAppear() callback and the task() closure is triggered each time the tab presenting it is activated. In this blog post, we are looking at a case where we have some code which we only want to run once during the view’s lifetime. One of the use-cases is preparing content in view models. Let’s look at two cases, where one view and its view model have a synchronous prepare function and the other has an async prepare function (e.g. starting network requests in the prepare function).

extension ContentView {
    @MainActor final class ViewModel: ObservableObject {
        func prepare() {
            //
        }
    }
}

extension OtherView {
    @MainActor final class ViewModel: ObservableObject {
        func prepare() async {
            //
        }
    }
}

SwiftUI uses view modifiers for configuring views, and that is what we want to do here as well. We can create a new view modifier by conforming to the ViewModifier protocol and implementing the body function, where we add the extra functionality to the existing view. The view modifier uses internal state for tracking whether the closure has already been called in onAppear(). SwiftUI ensures that onAppear is called before the view is rendered. Below is the view modifier’s implementation, a view extension which applies it, and finally an example view and its view model using it.

struct PrepareViewData: ViewModifier {
    @State var hasPrepared = false
    let action: (() -> Void)

    func body(content: Content) -> some View {
        content
            .onAppear {
                if !hasPrepared {
                    action()
                    hasPrepared = true
                }
            }
    }
}

extension View {
    func prepare(perform action: @escaping () -> Void) -> some View {
        modifier(PrepareViewData(action: action))
    }
}

struct ContentView: View {
    @StateObject var viewModel = ViewModel()

    var body: some View {
        VStack {
            // redacted
        }
        .prepare {
            viewModel.prepare()
        }
    }
}

If we have a view model which needs to make async calls in its prepare() function, then we need a slightly different view modifier. Since async functions can run for a long time, we should also handle cancellation. If the view disappears, we cancel the task if it is still running and restart it the next time the view is shown. Cancellation is implemented by keeping a reference to the task and calling cancel() on it in onDisappear(). For cancellation to work properly, the async function itself needs to implement cancellation by using, for example, Task.checkCancellation() within its implementation. Other than that, the view modifier implementation looks quite similar to the one above.

struct PrepareAsyncViewData: ViewModifier {
    @State var hasPrepared = false
    @State var task: Task<Void, Never>?
    let action: (() async -> Void)

    func body(content: Content) -> some View {
        content
            .onAppear {
                guard !hasPrepared else { return }
                guard task == nil else { return }
                task = Task {
                    await action()
                    hasPrepared = true
                }
            }
            .onDisappear {
                task?.cancel()
                task = nil
            }
    }
}

extension View {
    func prepare(perform action: @escaping () async -> Void) -> some View {
        modifier(PrepareAsyncViewData(action: action))
    }
}

struct OtherView: View {
    @StateObject var viewModel = ViewModel()

    var body: some View {
        VStack {
            // redacted
        }
        .prepare {
            await viewModel.prepare()
        }
    }
}

If this was helpful, please let me know on Mastodon @toomasvahter or Twitter @toomasvahter. Feel free to subscribe to the RSS feed. Thank you for reading.

Categories
iOS Swift

Running tasks in parallel with async-await in Swift part 2

In the previous blog post, Running tasks in parallel with async-await in Swift, we looked at using TaskGroup for running multiple tasks in parallel. The benefit of TaskGroup is that we can create any number of tasks easily by, for example, iterating over an array and creating a task for each element. Moreover, groups provide control over cancellation and prioritization. Another way of running tasks in parallel is the async let syntax. This is useful when we have a fixed number of tasks that we want to run in parallel. Let’s take a look at an example where we need to fetch two images and later merge them into one. We can imagine it is some sort of image editing app.

@MainActor final class ViewModel: ObservableObject {
    func prepare() {
        Task {
            do {
                async let wallpaperData = service.fetchImageData(id: "blue_wallpaper")
                async let overlayData = service.fetchImageData(id: "gradient_overlay")
                self.backgroundImage = try await ImageProcessor.merge(background: wallpaperData, overlay: overlayData)
            } catch {
                // TODO: present/handle error
            }
        }
    }
}

struct ImageProcessor {
    static func merge(background: Data, overlay: Data) throws -> UIImage {
        // …
    }
}

In the snippet above, we have a prepare function which is triggered by some user interface event, let’s say a button tap. First we create a Task which captures the async work we want to do and starts running immediately. The task also catches any errors, but we haven’t filled in what to do with them. By using the async let syntax, we start fetching the two images in parallel. Note that if we did not use the async keyword before the let here, we would need to add await before the fetch calls, and in that case the images would be fetched serially (see the sketch below). A nice thing about async let is that the function arguments defined by ImageProcessor do not need any special treatment; the Swift compiler just requires the next throwing non-async call to be made with try await, which waits for both fetch tasks before their results are passed into the merge function. Therefore, async let is easy to use since it has no other implication for the following code except requiring the await keyword.
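For contrast, here is a minimal sketch of the serial version, assuming the same service and backgroundImage members as in the snippet above; each fetch is awaited before the next one starts:

func prepareSerially() {
    Task {
        do {
            // Awaiting each fetch separately runs the downloads one after another.
            let wallpaperData = try await service.fetchImageData(id: "blue_wallpaper")
            let overlayData = try await service.fetchImageData(id: "gradient_overlay")
            self.backgroundImage = try ImageProcessor.merge(background: wallpaperData, overlay: overlayData)
        } catch {
            // TODO: present/handle error
        }
    }
}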

If this was helpful, please let me know on Mastodon @toomasvahter or Twitter @toomasvahter. Feel free to subscribe to the RSS feed. Thank you for reading.

Categories
iOS Swift

Running tasks in parallel with async-await in Swift

Async-await in Swift supports scheduling and running multiple tasks in parallel. One of the benefits is that we can schedule all the async operations at once without worrying about thread explosion. Thread explosion could happen with the DispatchQueue APIs if our queue performs work concurrently and we add a lot of work items to it. Structured concurrency, on the other hand, makes sure this does not happen by only running a limited number of tasks at the same time.

Let’s take an example where we have a list of filenames, and we would like to load images for these filenames. Loading is async and might also throw an error. Here is an example of how to use TaskGroup:

@MainActor final class ViewModel: ObservableObject {
    let imageNames: [String]

    init(imageNames: [String]) {
        self.imageNames = imageNames
    }

    func load() {
        Task {
            let store = ImageStore()
            let images = try await withThrowingTaskGroup(of: UIImage.self, body: { group in
                imageNames.forEach { imageName in
                    group.addTask {
                        try await store.loadImage(named: imageName)
                    }
                }
                return try await group.reduce(into: [UIImage](), { $0.append($1) })
            })
            self.images = images
        }
    }

    @Published var images = [UIImage]()
}

struct ImageStore {
    func loadImage(named name: String) async throws -> UIImage {
        return …
    }
}

In our view model, we have a load function which creates a task on the main actor: on the main actor because the view model has the @MainActor annotation. The Swift runtime makes sure that all the functions and properties in the view model always run on the main thread. This also means that the line let store runs on the main thread, because the created task belongs to the main actor. If a task belongs to an actor, it runs on the actor’s executor. Moreover, all the code except the child tasks’ closures containing loadImage runs on the main thread. This is because our ImageStore does not use any actors. If ImageStore had the @MainActor annotation, then everything would run on the main thread and using a task group would not make any sense. If we remove @MainActor from the view model, then we can see that let store starts running on a background thread along with all the other code in the load function. That is a case of unstructured concurrency. Therefore, it is important to think about whether code is tied to any actors or not. Creating a task does not mean it will run on a background thread.

But back to the TaskGroup. Task groups are created with withThrowingTaskGroup or, when dealing with non-throwing tasks, with the withTaskGroup function. This function creates a task group to which we can add tasks that run independently. For getting results back from the group, we can use the AsyncSequence protocol functions. In this simple example, we just want to collect the results and return them, and the async sequence’s reduce function does exactly that.

To summarize what we achieved in the code snippet above: we had a list of filenames which we transformed into a list of UIImages by running the transformations concurrently using a task group. In addition, we used MainActor to make sure UI updates always happen on the main thread.

If this was helpful, please let me know on Mastodon @toomasvahter or Twitter @toomasvahter. Feel free to subscribe to the RSS feed. Thank you for reading.